The Japan Times - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Leaders of major AI companies are increasingly claiming that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see such claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears to be backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, however, with prominent figures such as Nobel physics laureate Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first got rid of the human beings it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Kimura--JT