The Japan Times - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: Joe Klamar - AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which their bosses have at times paired with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

Y.Kimura--JT