
Agent

Written by Kaj Sotala, Alex_Altair, Ruben Bloom, et al. Last updated 20th Sep 2022.

A rational agent is an entity which has a utility function, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which maximizes its utility. Such agents are also described as goal-seeking. The concept of a rational agent is used in economics, game theory, decision theory, and artificial intelligence.
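As a minimal sketch of this decision rule, the snippet below picks the action with the highest expected utility under the agent's beliefs. The `beliefs` and `utilities` tables are hypothetical stand-ins for the agent's actual world-model and preferences, not part of any particular formalism:

```python
from typing import Callable, Dict, Iterable

def choose_action(
    actions: Iterable[str],
    beliefs: Callable[[str], Dict[str, float]],  # action -> believed P(outcome)
    utility: Callable[[str], float],             # outcome -> utility
) -> str:
    """Return the action with the highest expected utility under the agent's beliefs."""
    def expected_utility(action: str) -> float:
        # Sum utility over outcomes, weighted by the believed probability of each.
        return sum(p * utility(o) for o, p in beliefs(action).items())
    return max(actions, key=expected_utility)

# Toy usage: outcomes here already fold in the action taken.
beliefs = {
    "carry umbrella": {"dry but encumbered": 1.0},
    "go without":     {"dry": 0.7, "soaked": 0.3},
}
utilities = {"dry": 10, "dry but encumbered": 8, "soaked": 0}

best = choose_action(beliefs.keys(), beliefs.get, utilities.get)
print(best)  # "carry umbrella": expected utility 8.0 beats 7.0
```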

Editor note: there is work to be done reconciling this page, Agency page, and Robust Agents. Currently they overlap and I'm not sure they're consistent. - Ruby, 2020-09-15

More generally, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.[1]
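A minimal sketch of this percept-and-actuator framing is below; the `policy` argument is a hypothetical placeholder for whatever decision procedure maps the agent's percept history to its next action:

```python
class SimpleAgent:
    """Minimal agent in Russell & Norvig's sense: percepts in, actions out."""

    def __init__(self, policy):
        self.policy = policy  # hypothetical: percept history -> action
        self.percepts = []

    def step(self, percept):
        self.percepts.append(percept)      # sensors: record what was perceived
        return self.policy(self.percepts)  # actuators: act on the environment

# Toy usage: a thermostat-like agent reacting to the latest temperature reading.
thermostat = SimpleAgent(lambda ps: "heat on" if ps[-1] < 18 else "heat off")
print(thermostat.step(16))  # heat on
print(thermostat.step(21))  # heat off
```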

There has been much discussion as to whether certain AGI designs can be made into mere tools or whether they will necessarily be agents which attempt to actively carry out their goals. Any minds that actively engage in goal-directed behavior are potentially dangerous, due to considerations such as basic AI drives possibly causing behavior which conflicts with humanity's values.

In Dreams of Friendliness and in Reply to Holden on Tool AI, Eliezer Yudkowsky argues that, since all intelligences select correct beliefs from the much larger space of incorrect beliefs, they are necessarily agents.

See also

  • Agency
  • Robust Agents
  • Tool AI
  • Oracle AI

Posts

  • The Power of Agency
  1. ^

    Russell, S. & Norvig, P. (2003) Artificial Intelligence: A Modern Approach. Second Edition. Page 32.
