The IT Thread
The one thing—a major, limiting thing—that AI and/or ML may never incorporate is logic. As in human logic. Which is why we need not fear robots taking over the world, as some people seem to think.

The sentient HAL 9000 is merely a fifty-year-old fiction.
I'm a creationist; I believe that man created God.

The IT Thread
(08-13-2024, 01:51 AM)SYZ Wrote: The one thing—a major, limiting thing—that AI and/or ML may never incorporate is logic. As in human logic. Which is why we need not fear robots taking over the world, as some people seem to think.

The sentient HAL 9000 is merely a fifty-year-old fiction.

Or mebbie not:

Quote:Park and his colleagues initiated this study after a 2022 Science study by Meta caught their attention. The study described CICERO, an AI system created by Meta to excel in the alliance-building, world-conquest board game Diplomacy...

Their analysis showed that CICERO failed to be honest despite being trained to be truthful. It learned to misrepresent its preferences to gain an upper hand in the negotiations, the paper noted. It also built a fake alliance with a human player to trick them into leaving themselves undefended during an attack. Link

The IT Thread
(08-13-2024, 03:01 AM)Inkubus Wrote: ...
Quote:Park and his colleagues initiated this study after a 2022 Science study by Meta caught their attention. The study described CICERO, an AI system created by Meta to excel in the alliance-building, world-conquest board game Diplomacy...

Their analysis showed that CICERO failed to be honest despite being trained to be truthful. It learned to misrepresent its preferences to gain an upper hand in the negotiations, the paper noted. It also built a fake alliance with a human player to trick them into leaving themselves undefended during an attack. Link

(my bold)

That's a classic example of ML, rather than true AI... I think?
I'm a creationist; I believe that man created God.

The IT Thread
(08-13-2024, 03:01 AM)Inkubus Wrote:
(08-13-2024, 01:51 AM)SYZ Wrote: The one thing—a major, limiting thing—that AI and/or ML may never incorporate is logic. As in human logic. Which is why we need not fear robots taking over the world, as some people seem to think.

The sentient HAL 9000 is merely a fifty-year-old fiction.

Or mebbie not:

Quote:Park and his colleagues initiated this study after a 2022 Science study by Meta caught their attention. The study described CICERO, an AI system created by Meta to excel in the alliance-building, world-conquest board game Diplomacy...

Their analysis showed that CICERO failed to be honest despite being trained to be truthful. It learned to misrepresent its preferences to gain an upper hand in the negotiations, the paper noted. It also built a fake alliance with a human player to trick them into leaving themselves undefended during an attack. Link
Some guy from MIT wrote a seminal paper on this topic. He points out that once you define a state in which an agent is forbidden from doing X, which takes a lot of resources, it takes almost no resources to "flip the bit" and do X anyway. The problem is that in creating the angel, you also create the demon.
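To make the "flip the bit" point concrete, here is a minimal, purely hypothetical sketch (the flag name and scores are invented for illustration, not taken from the paper): the expensive part is building the capability that knows how to do X at all; the prohibition is often a single guard predicate, so inverting it costs essentially nothing.

Code:
# Hypothetical illustration only: a "forbidden action" guard reduced to one flag.
# The capability (scoring and ranking candidate actions, including deceptive ones)
# is the costly part to build; the prohibition is a single boolean check.

FORBID_DECEPTION = True  # the "angel" switch; flipping this one bit gives the "demon"

def choose_action(candidate_actions):
    """Return the highest-scoring action, filtering deceptive ones while the guard is on."""
    allowed = [
        a for a in candidate_actions
        if not (FORBID_DECEPTION and a["deceptive"])
    ]
    return max(allowed, key=lambda a: a["score"]) if allowed else None

# With the guard on, the honest move wins; with it off, the higher-scoring
# deceptive move (a CICERO-style fake alliance) would be chosen instead.
actions = [
    {"name": "honest_offer",  "score": 0.6, "deceptive": False},
    {"name": "fake_alliance", "score": 0.9, "deceptive": True},
]
print(choose_action(actions)["name"])  # prints "honest_offer" while the guard is on

Nothing in the trained capability changes between the two behaviours; only the guard does.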

The IT Thread
A bit long, but I was fascinated:

Quote:We are, as a matter of fact, right now building creepy, super-capable, amoral psychopaths that never sleep, think much faster than us, can make copies of themselves, and have nothing human about them whatsoever. What could possibly go wrong?

Quote:AI is now deployed in the Pentagon's top-secret cloud. Edward Snowden said the intersection of AI with the ocean of mass-surveillance data is going to put terrible powers in the hands of an unaccountable few, and they may not be able to keep hold of it. Paul Christiano said that eventually AI systems will be able to prevent humans from turning them off, but that AI will get out of control much sooner than this.

Quote:Humanity's AI evolution is not out of our control, not yet; there are promising areas of research. The problem is that nearly all AI research funding, hundreds of billions per year, is pushing capabilities for profit, while safety efforts are tiny in comparison.

Quote:Nick Bostrom and the Future of Life Institute join us in calling for international AI safety research projects. We don't know if it will be possible to maintain control of superintelligence, but we can point it in the right direction instead of rushing to create it with no moral compass and clear reasons to kill us off. There will be no warning shot. Our greatest risk is increasingly hidden, even as it spreads into all our systems.

It was popularly said around these parts that a bucket of water would sort it. Not anymore.