
Through Time and Space: Look, the "Face" of the Future

Nothing says "the future" like a disembodied head. As developers and designers begin churning out the next generation of games and entertainment, the pace of technology demos showing what types of computer-generated graphics will soon be possible has picked up. And that means one thing: more creepy-yet-astonishing 3D-generated heads.

Activision (ATVI) is showing off new technology at the annual Game Developer's Conference, taking place in San Francisco this week. The rendering techniques and code that create life-like animation were unveiled by the gaming giant's research and development division yesterday. The animated character shown here is being rendered in real-time on current video card hardware, suggesting innovations like these could be showing up in commercial products sooner rather than later.

"We will show how each detail is the secret for achieving reality," wrote researcher Jorge Jimenez on his blog, before the presentation. "For us, the challenge goes beyond entertaining; it's more about creating a medium for better expressing emotions and reaching the feelings of the players. We believe this technology will bring current generation characters into next generation life."

Activision isn't alone. Chipmaker NVIDIA (NVDA) recently touted real-time face-rendering at its GPU Technology Conference in California. The program, dubbed Face Works, employs face- and motion-capture technology developed at the University of Southern California's Institute of Creative Technology. The center's Light Stage process records data to within a tenth of a millimeter using photography that captures the geometry of an actor's face. Light transmission through skin -- the key to rendering subtle emotional cues like blushing -- and reflections can be recreated as well.

At Sony's (SNE) Playstation 4 launch event earlier this year, actor Max von Sydow made a brief appearance on stage -- as an interactive 3D model. David Cage, founder of innovative studio Quantic Dream, demoed what kinds of graphics would be possible on the console maker's next hardware release. (Why so many old men? It's not clear, but it may have something to do with the complexity of rendering wrinkles that move and bend.)

All of this is likely to kickstart another round of debate about the so-called "uncanny valley." That concept suggests that when human replicas -- either robots or computer renderings -- begin to look realistically but not perfectly human, they can make real-life observers feel queasy or revolted. (The "valley" in question is the dip in a graph of the comfort level of humans presented with a rendered human likeness.) As of yet, that hasn't stopped engineers from pushing the boundaries of what's technologically possible -- perhaps in hopes of leapfrogging over the problem entirely.