Google DeepMind team builds differentiable neural computer

Google’s artificial intelligence arm has made a breakthrough in the development of thinking computers by creating a learning machine that combines a “neural network” computing system with conventional computer memory.

Scientists at DeepMind, the tech group’s London-based AI unit, have built a “differentiable neural computer”, or DNC, that for the first time can solve small-scale problems without prior knowledge, such as planning the best route between distant stations on the London Underground or working out relationships between relatives on family trees.

Neural networks — connected systems modelled on biological networks such as the brain — have played a big role in the recent and rapid progress in AI research. They are excellent at deducing patterns, for example, to enable speech recognition in digital assistants such as Google Voice or Apple’s Siri. But until now they have only been able to access the data contained within their own network. In the journal Nature the 20-strong DeepMind team said the DNC provides neural networks with access to previously incompatible external data, such as text encoded in conventional digital form.

“The trouble is that the memory in a neural network is bound up within the computation itself, which makes it rather fragile and hard to scale up,” said Alex Graves, head of the DNC project. “We decided that the way to make it more robust is to separate out the memory, so that we can expand it without affecting the processor.”

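The separation Graves describes can be illustrated with the DNC’s core read mechanism: content-based addressing, in which the network emits a key vector and reads a softmax-weighted blend of the rows of an external memory matrix. Because every step is differentiable, the memory can grow without changing the network itself. The sketch below is a simplified illustration under that description, not DeepMind’s implementation; the function names, the sharpness parameter value, and the toy memory contents are invented for this example.

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def content_read(memory, key, beta=10.0):
    """Differentiable content-based read: a softmax over scaled cosine
    similarities gives an attention weighting over memory rows; the read
    vector is the weighted sum of those rows. `beta` sharpens the focus."""
    scores = [beta * cosine(row, key) for row in memory]
    top = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - top) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    read = [sum(w * row[i] for w, row in zip(weights, memory))
            for i in range(len(memory[0]))]
    return weights, read

# Toy external memory: three stored vectors (rows).
memory = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

# A key close to the first row retrieves mostly that row.
weights, read = content_read(memory, [0.9, 0.1, 0.0])
```

Because the weighting is a soft blend rather than a hard lookup, gradients flow through the read, which is what makes the whole computer trainable end to end; enlarging `memory` adds rows without adding parameters to the controller network.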
Jay McClelland, director of Stanford University’s Center for Mind, Brain and Computation, called the DeepMind paper “a very interesting and important milestone in AI research”.

However, to make the DNC more useful in the real world than existing AI systems, it will need to be expanded to access far larger memories. “That will require a lot of engineering work,” said Mr Graves. “This is a research paper and I don’t want to speculate too much about where this is going in terms of practical problems.”

Even so, independent computer scientists who reviewed the paper before publication said the range of applications for a general purpose DNC could be vast. Possible applications might include generating video commentaries and extracting meaning from text.

DeepMind was founded in London as an AI start-up in 2010 and acquired by Google for £400m in 2014.
