The Japan Times - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Leaders of major AI companies are increasingly hyping the claim that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears to be backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel physics laureate Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI in fact emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Kimura--JT