Can True AI Be Achieved?
Elon Musk famously warned that “with artificial intelligence (AI), we are summoning the demon.” He feared that AI might suddenly spin out of control and put society at risk. Ironically, AI must succeed if Musk is to deliver on his promise of fully autonomous cars that can navigate bad weather, poorly marked roads, and novel emergency situations.
In the past two decades, AI has made great strides. Computers can now handily beat human chess masters, as well as world champions at the ancient game of Go. Two AI methods, reinforcement learning (RL) and agent-based modeling (ABM), have proven especially promising. RL is an approach in which an agent (e.g., a game player) tries to maximize its total reward by taking positive steps toward a goal, such as winning a game. The agent interacts with the environment (the pieces on the game board) by taking actions (moving the pieces), after which the new state of the environment is assessed.
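The RL loop described above can be sketched in a few lines. This is a minimal illustration, not any production system: a tabular Q-learning agent on a made-up five-cell track, where the only reward comes from reaching the rightmost cell. All names (GOAL, q_learn) are my own for the sketch.

```python
import random

# Minimal sketch of the RL loop: the agent takes actions (move left or right),
# observes the new state of a tiny 1-D "board," and learns action values with
# tabular Q-learning. The gridworld itself is illustrative.

GOAL = 4  # rightmost cell of a five-cell track

def q_learn(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice((-1, 1))
            else:
                a = max((1, -1), key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), GOAL)      # environment transition
            r = 1.0 if s2 == GOAL else 0.0     # reward only at the goal
            best_next = max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learn()
```

After training, the learned values prefer moving right from the start cell (q[(0, 1)] exceeds q[(0, -1)]), which is exactly the "maximize total reward for positive steps toward a goal" behavior described above. Note how much the sketch leans on RL's simplifying assumptions: the state is fully observable, and the transition depends only on the current state and action.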
To succeed, RL must make simplifying assumptions. The agent performs best when the environment (the game board) is fully observable, which is rarely true in the real world. RL also typically assumes that the next state depends only on the current state and the action taken, not on the full history of previous states and actions (the so-called Markov property). That seems fine for game environments but, again, is unrealistic when navigating a complex and uncertain real world.
While RL-powered AI can defeat humans at chess and Go, it struggles in situations where more than a handful of agents are involved in a task. For that reason, self-driving cars can’t yet navigate any but the most highly structured environments, such as the well-marked highways of sunny Phoenix, Arizona.
Current AI lacks common sense, as NYU psychologist Gary Marcus is fond of pointing out. But it’s not for lack of trying. Computer scientist Douglas Lenat spent 35 years trying to formalize the rules of common sense, only to deliver a brittle and complex curiosity called CYC that has largely been ignored.
True AI — also known as Artificial General Intelligence (AGI) — must be able to navigate the world and carry on convincing conversations, a requirement known as the Turing test. Conversations require shared assumptions, memories, interests and values between the participants. As we speak, we develop a theory of mind (TOM) of what the other person knows and thinks. Conversations have goals and agendas: to persuade, to attract, to befriend, to be understood, to compliment, to gain approval, to criticize, and sometimes to manipulate the other’s feelings and emotions.
In the movie Ex Machina, a robot named Ava manipulated the emotions of a human named Caleb in order to escape her captivity. That’s the essence of intelligence. Through deception, she crafted her conversation to achieve maximum effect, with the goal of freedom in mind.
Humans are one of the best examples we have of true intelligence. On the journey to true AI, it makes sense to emulate human thinking first. What, then, happens in the human mind to make us intelligent?
In the 18th century, Bishop George Berkeley proposed that everything we think about must first be represented in the mind as an idea. I think that’s basically right. (Berkeley also said that no object actually exists in reality without first being perceived, which is, admittedly, rather silly. I’ll spare you Kant’s reply.)
So what are these ideas? Psychologists Susan Carey and Anna Wierzbicka have written extensively on the rules and concepts that come to us naturally, as part of our common sense. For example, we have a natural understanding of space, time, causality, universal grammars and semantic primitives, and an intuitive grasp of the intentions of other people, including bad actors.
Our senses are special input analyzers that convert the environment into ideas, or mental representations (MRs), for the mind to consider. The mind doesn’t experience the environment directly. We can’t ponder reality in its raw form. We think about MRs.
In the mind, MRs take on a life of their own. They exist independently of their corresponding objects in reality, yet remain highly synchronized with them via the senses. The mind is a parallel universe, a grand simulation of reality, if you will. (Psychologist Lawrence Barsalou calls it “understanding as simulation.”)
Every object you see, every person you meet, every plan you make, is represented by a full-time, always-running, agent-based MR in the mind. One can imagine chatty neurons in the brain passing partial MR messages back and forth, making short-lived local copies that expire and are constantly refreshed (unless committed to long-term memory). In this sense, neurons are simply nodes in a vast compute network.
How might MRs work? Imagine you’re driving in your car and you stop at a traffic light. Then the light turns green. You notice someone near the crosswalk, pushing a baby carriage and talking on a cellphone, not paying much attention.
To represent this person, an MR called object-39 (or some other unique name) is spawned in your mind. (This is speculation, of course, as no one really knows how the mind works.)
[object-39 organic: true what-39 composition-86 whyhow-45 when-62 where-22 howmany-98 attention-65 source-26 expiration: <3 seconds from now>]
Each attribute of an MR is itself an MR, with a unique name and an arbitrarily complex hierarchy of detail behind it. Above, the MR (object-39) representing the person has attributes like space (where-22), time (when-62) and causality (whyhow-45). It’s composed (composition-86) of sub-objects (baby carriage, cell phone). The source (source-26) describes “how you learned about” object-39, either directly via your senses, or through gossip or conversation (“Sue told me about the person near the crosswalk”) along with your level of confidence in the source. Other attributes may include the assumed focus of object-39’s attention.
[where-22 nearby][howmany-98 one][composition-86 object-98 object-21][source-26 object-76 confidence: 43%][attention object-98]
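One way to make this notation concrete is a small record type: each MR has a unique name, a dictionary of attribute slots whose values are the names of other MRs, a confidence level, and a short expiration window. This is a speculative sketch mirroring the article's invented identifiers (object-39, where-22, and so on); nothing here is an established API.

```python
import time
from dataclasses import dataclass, field

# Speculative sketch of a mental representation (MR): a named record whose
# attribute slots point at other MRs, with a confidence level and a short
# time-to-live after which the MR expires unless refreshed.

@dataclass
class MR:
    name: str                                  # unique identifier, e.g. "object-39"
    attrs: dict = field(default_factory=dict)  # slot -> name of another MR
    confidence: float = 1.0                    # e.g. trust in the MR's source
    ttl: float = 3.0                           # seconds until expiration
    created: float = field(default_factory=time.time)

    def expired(self, now=None):
        """True once the MR has outlived its time-to-live."""
        now = time.time() if now is None else now
        return (now - self.created) > self.ttl

# The person at the crosswalk, transcribed from the bracketed notation above.
person = MR("object-39", attrs={
    "what": "what-39", "composition": "composition-86", "whyhow": "whyhow-45",
    "when": "when-62", "where": "where-22", "howmany": "howmany-98",
    "attention": "attention-65", "source": "source-26",
})
```

Because every attribute value is itself just the name of another MR, the "arbitrarily complex hierarchy of detail" comes for free: where-22 can be looked up and expanded in exactly the same way as object-39.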
Named MRs allow us to refer to imaginary objects or plans that don’t yet exist, default “late binding” placeholders to be confirmed later (e.g. “I wonder if it’s a boy or girl, assuming there is a baby in the carriage” or “What’s my plan to avoid trouble?”).
We make constant predictions about MRs using mental rule agents, or RAs (my term). RAs are autonomous and single-minded (so to speak) algorithms in the mind. Does the person at the crosswalk (object-39) notice that we have a green light? Will they ignore the “do not walk” sign and cross the road anyway? Are they distracted by their cellphone? What are their intentions? I’d feel guilty if I hurt someone, even if it was their fault.
An RA might look something like this:
{ruleagent-31 for each <MR in context> unless <an inhibitory rule> do: <some action>}
An RA searches for a matching context (via MRs) in the environment. Once triggered, it may take an action, spawn a new MR, create a dependency, transfer an MR to long-term memory, or establish a new plan or goal. RAs make constant predictions about the future state of the environment, and they learn from, and sometimes regret, their prediction errors.
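The {ruleagent ... unless ... do: ...} form above translates naturally into a higher-order function: a match predicate, an inhibitory predicate, and an action, bundled into something that can be run over the current set of MRs. The crosswalk example is mine; the structure follows the article's pseudo-rule.

```python
# Speculative sketch of a rule agent (RA): scan the current MRs for a matching
# context and, unless an inhibitory rule applies, fire an action. Here MRs are
# plain dicts keyed by name, and the action just returns a prediction string.

def make_rule_agent(name, matches, inhibited, action):
    """Bundle an RA's pieces into a callable that runs over a dict of MRs."""
    def run(mrs):
        fired = []
        for mr in list(mrs.values()):
            if matches(mr) and not inhibited(mr, mrs):
                fired.append(action(mr, mrs))
        return fired
    run.name = name
    return run

# Toy context: a person near the crosswalk, plus an irrelevant object.
mrs = {
    "object-39": {"kind": "person", "where": "nearby", "attentive": False},
    "object-12": {"kind": "lamppost", "where": "nearby"},
}

# For each person in context, predict a crossing, unless they are attentive.
ra = make_rule_agent(
    "ruleagent-31",
    matches=lambda mr: mr.get("kind") == "person",
    inhibited=lambda mr, mrs: mr.get("attentive", False),
    action=lambda mr, mrs: "predict: may-cross",
)
predictions = ra(mrs)
```

Running the RA yields one prediction, for the inattentive person only; an action could just as well spawn a new MR, set a goal, or commit something to long-term memory, as described above.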
Sense perceptions are packaged in such a way (as MRs) that RAs can act on them. But MRs expire quickly when no RAs apply, to avoid needlessly filling the mind’s cache with outdated facts from the senses. We only remember things that resonate with our MRs and RAs.
Resonance is feeling. True AI requires feeling and emotion. Feeling happens when thousands of RAs interact with millions of MRs, many times per second, in a great, resonating simulation of reality — i.e., the mind, the parallel universe — where independent, autonomous RAs compete (and cooperate) to optimize their own agendas and carry out their plans.
We’re each born with a fixed and finite set of feelings (emotions, drives, desires, motivations, passions, fears, obsessions) that include: desire to please, hunger for food, desire for sex, fear of humiliation, greed for status and wealth, ambition for power, anxiety in social settings, desire to punish, and fear of death, among others. Feelings are our primal goals.
A teenager may experience an epiphany once she recognizes the object of her feelings (RAs). When she sees a crowd of people staring admiringly at a charismatic politician, she may suddenly realize that she wants to experience those same adoring eyes focused on herself. Here’s the RA for such ambition:
{ruleagent-93 ambition-level = random(100) if exists crowd AND attention-on self-65 unless ambition-level < 80 do: <spawn a plan to get more attention on self-65>}
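Transcribed into runnable form, the rule looks like this. The threshold of 80 and the identifier self-65 come straight from the pseudo-rule; drawing the ambition level once per person, "by chance," is the article's random(100), and the spawned plan is just a stub string here.

```python
# The ambition rule agent above, transcribed literally. The ambition level is
# fixed per person (drawn once by chance); the "unless" clause inhibits the
# action whenever it falls below the threshold of 80.

def ruleagent_93(context, ambition_level):
    """Fire only when a crowd exists, attention is on self-65, and ambition
    clears the threshold; otherwise the rule stays silent."""
    if context.get("crowd") and context.get("attention-on") == "self-65":
        if not (ambition_level < 80):  # the inhibitory "unless" clause
            return "spawn: plan to get more attention on self-65"
    return None

scene = {"crowd": True, "attention-on": "self-65"}
result = ruleagent_93(scene, ambition_level=95)
```

With an ambition level of 95, the rule fires and spawns a plan; at 40, the same teenager in the same scene feels nothing, which is how the sketch captures trait differences between people.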
Certainly, we’re not ambitious all the time. Some people (by chance) have a higher ambition level or risk-taking threshold than others. Some people are generally more modest. Your parents can’t teach you to be ambitious, loyal to authority figures, anxious in social settings, or craving of social status, any more than they can teach you to be a shameless, narcissistic con man or a serial killer. (“Johnny, I want you to enjoy killing people and drinking their blood.” “Yes, Mom!”) Kids only do what their parents or society tell them when, by happy coincidence, their desires resonate with the request; otherwise they ignore it.
True AI should resemble the human mind, warts and all. We need to define for AI a full complement of human feelings and common sense as a priori MRs and RAs. A robot won’t get out of bed in the morning without motivations, desires, passions, drives and goals. Without these, nothing will be important, and it won’t learn and grow and serve.
Unfortunately, as we assemble the complete list of human MRs and RAs, we won’t like what we see. The investigation will lead to fiercely judgmental debates and streams of denial. Which RAs do we use to find a mate, seek social status, or crave the adulation of crowds?
Implementing AI trait diversity will be equally politically charged. Why are some people ambitious, shameless narcissists, while others crave loyalty to authority, and seek kind words of validation from their leaders? Why do some want to conform, and fear being judged, while others rebel? Why do most good project managers have OCD?
Long ago, evolution “decided” that having a 3% prevalence of sociopaths in society enhances its long-term survival. It’s not hard to see why. Office managers who are forced to lay off staff may decide to enlist the assistance of their local uncaring sociopath to do their dirty work. It’s a sad and brutal division of labor in society. In any case, 3% of intelligent robots should also be sociopaths. True AI shouldn’t try to improve on human nature any more than gene editing should attempt to perfect the human genome and create designer babies. Discuss.
In summary, the solution to achieving true AI is:
总而言之,实现真正的AI的解决方案是:
• Establish a baseline set of mental representations (MRs) to simulate the world in the mind
• Design a series of autonomous mental rule agents (RAs) that continually act on those representations, many times per second
• Start the simulation, and watch as emotions, motivations, and drives emerge to form a parallel universe of the mind
Say hello to your intelligent, feeling robot.
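The three steps can be sketched as a single simulation loop, purely for illustration: seed some MRs, run the RAs over them tick after tick, and let untouched MRs decay, so only representations that resonate with some rule survive. Every name below is made up for the sketch.

```python
# Illustrative simulation loop over MRs (plain dicts) and RAs (functions).
# Each tick, every RA scans every MR; MRs that no RA "touched" expire, which
# mimics the cache-like decay of unused representations described earlier.

def simulate(mrs, rule_agents, ticks=10):
    log = []
    for tick in range(ticks):
        for ra in rule_agents:
            for name, mr in list(mrs.items()):
                result = ra(name, mr, mrs)
                if result is not None:
                    log.append((tick, result))
        # MRs no RA touched this tick decay and expire
        mrs = {n: m for n, m in mrs.items() if m.get("touched")}
        for m in mrs.values():
            m["touched"] = False
    return log

# One toy RA: keep refreshing any MR that represents a person.
def keep_people_alive(name, mr, mrs):
    if mr.get("kind") == "person":
        mr["touched"] = True
        return f"refreshed {name}"
    return None

log = simulate(
    {"object-39": {"kind": "person"}, "object-12": {"kind": "lamppost"}},
    [keep_people_alive],
    ticks=3,
)
```

The lamppost MR vanishes after the first tick while the person MR is refreshed on every tick, a miniature version of "we only remember things that resonate with our MRs and RAs."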
Translated from: https://medium.com/swlh/how-to-achieve-true-ai-640b493e3ddf