The Japan Times - Gemini's flawed AI racial images seen as warning of tech titans' power

Gemini's flawed AI racial images seen as warning of tech titans' power / Photo: PAU BARRENA - AFP

For people at the trend-setting tech festival here, the scandal that erupted after Google's Gemini chatbot cranked out images of Black and Asian Nazi soldiers was a warning about the power artificial intelligence can give tech titans.

Google CEO Sundar Pichai last month slammed as "completely unacceptable" errors by his company's Gemini AI app, after gaffes such as the images of ethnically diverse Nazi troops forced it to temporarily stop users from creating pictures of people.

Social media users mocked and criticized Google for the historically inaccurate images, like those showing a female Black US senator from the 1800s -- when the first such senator was not elected until 1992.

"We definitely messed up on the image generation," Google co-founder Sergey Brin said at a recent AI "hackathon," adding that the company should have tested Gemini more thoroughly.

People interviewed at the popular South by Southwest arts and tech festival in Austin said the Gemini stumble highlights the inordinate power a handful of companies have over the artificial intelligence platforms that are poised to change the way people live and work.

"Essentially, it was too 'woke,'" said Joshua Weaver, a lawyer and tech entrepreneur, meaning Google had gone overboard in its effort to project inclusion and diversity.

Google quickly corrected its errors, but the underlying problem remains, said Charlie Burgoyne, chief executive of the Valkyrie applied science lab in Texas.

He equated Google's fix of Gemini to putting a Band-Aid on a bullet wound.

While Google long had the luxury of having time to refine its products, it is now scrambling in an AI race with Microsoft, OpenAI, Anthropic and others, Weaver noted, adding, "They are moving faster than they know how to move."

Mistakes made in an effort at cultural sensitivity are flashpoints, particularly given the tense political divisions in the United States, a situation exacerbated by Elon Musk's X platform, the former Twitter.

"People on Twitter are very gleeful to celebrate any embarrassing thing that happens in tech," Weaver said, adding that reaction to the Nazi gaffe was "overblown."

The mishap did, however, call into question the degree of control those using AI tools have over information, he maintained.

In the coming decade, the amount of information -- or misinformation -- created by AI could dwarf that generated by people, meaning those controlling AI safeguards will have huge influence on the world, Weaver said.

- Bias in, bias out -

Karen Palmer, an award-winning mixed-reality creator with Interactive Films Ltd., said she could imagine a future in which someone gets into a robo-taxi and, "if the AI scans you and thinks that there are any outstanding violations against you... you'll be taken into the local police station," not your intended destination.

AI is trained on mountains of data and can be put to work on a growing range of tasks, from image or audio generation to determining who gets a loan or whether a medical scan detects cancer.

But that data comes from a world rife with cultural bias, disinformation and social inequity -- not to mention online content that can include casual chats between friends or intentionally exaggerated and provocative posts -- and AI models can echo those flaws.

With Gemini, Google engineers tried to rebalance the algorithms to provide results better reflecting human diversity.

The effort backfired.

"It can really be tricky, nuanced and subtle to figure out where bias is and how it's included," said technology lawyer Alex Shahrestani, a managing partner at Promise Legal law firm for tech companies.

Even well-intentioned engineers involved with training AI can't help but bring their own life experience and subconscious bias to the process, he and others believe.

Valkyrie's Burgoyne also castigated big tech for keeping the inner workings of generative AI hidden in "black boxes," so users are unable to detect any hidden biases.

"The capabilities of the outputs have far exceeded our understanding of the methodology," he said.

Experts and activists are calling for more diversity in teams creating AI and related tools, and greater transparency as to how they work -- particularly when algorithms rewrite users' requests to "improve" results.

A challenge is how to appropriately build in the perspectives of the world's many and diverse communities, Jason Lewis of the Indigenous Futures Resource Center and related groups said here.

At Indigenous AI, Lewis works with far-flung Indigenous communities to design algorithms that use their data ethically while reflecting their perspectives on the world, something he does not always see in the "arrogance" of big tech leaders.

His own work, he told a group, stands in "such a contrast from Silicon Valley rhetoric, where there's a top-down 'Oh, we're doing this because we're going to benefit all humanity' bullshit, right?"

His audience laughed.

S.Ogawa--JT