
    Contents

    What Comes After Web 2.0?
    Self-Cooling Microchips
    GPS That Never Fails
    Charging Batteries without Wires
    Storage System Development as Seen from the High-Performance Computing Rankings (从高性能计算排行榜看存储系统发展)
    Hydrogen storage goes metal-free
    Morphing Materials Take On New Shapes
    Plastic Electronics Challenge Amorphous Silicon (塑料电子挑战非晶硅)
    High-Definition Carbon Nanotube TVs
    Nanotube Computing Breakthrough
    Facing the Dangers of Nanotech
    Laser offers robust source for space-based lidar systems
    Integrated silicon spectrometers gain resolution, usability
    Hybrid Microscope Probes Nano-Electronics
    Multiphoton-Resonant Phase-Conjugate Multiwave-Mixing Spectroscopy (多光子共振相位共轭多波混频光谱学)
    Chiral liquid splits light by polarization
    Silicon becomes a superconductor
    Advances in the Theoretical Study of the Tunneling Magnetoresistance Effect (隧道磁阻效应原理的理论研究进展)
    Ultrasound generates intense mechanoluminescence
    X-ray laser focuses on tiny objects
    Advances in Plastic Deformation Mechanisms of Nanostructured Metals (纳米金属塑性变形机制研究进展)
    New Progress in the Controlled Synthesis and Optical Properties of Metal Nanoparticles (金属纳米颗粒的可控合成和光学特性研究获得新进展)
    Experimental Observation of Centrosome Assembly (中心体组装方式的实验观测)
    Microarray Quality Issues Back in Focus (微阵列质量问题再成焦点)
    Quantum Dots Set in Motion (量子点流动起来)
    Controlling Transgene Expression Becomes Easier (控制转基因表达变得更加容易)
    New Methods Reveal the Biological Functions of Carbohydrates (新方法探明碳水化合物的生物学功效)
    New siRNAs Can Distinguish Gene Variants (新型siRNAs能够区分基因变异体)
    Chemo on the Brain
    A New Type of RNA That Reflects the Regulation of Alternative Splicing (一种反映另类拼接过程调控方式的新型RNA)

    Published by: Library, University of Electronic Science and Technology of China (电子科技大学图书馆)

    Editorial committee: 张晓东 黄思芬 汪育健 张毓晗 曹学艳 李世兰 张宇娥 黄崇成 陈茂兴

    Editor for this issue: 张毓晗    Editorial office telephone: (028) 83202340    Published December 30, 2006

    What Comes After Web 2.0?

    Today's primitive prototypes show that a more intelligent Internet is still a long way off. By Wade Roush

    Many researchers and entrepreneurs are working on Internet-based knowledge-organizing technologies that stretch traditional definitions of the Web. Lately, some have been calling the technologies "Web 3.0." But really, they're closer to "Web 2.1."

    Typically, the name Web 2.0 is used by computer programmers to refer to a combination of a) improved communication between people via social-networking technologies, b) improved communication between separate software applications--read "mashups"--via open Web standards for describing and accessing data, and c) improved Web interfaces that mimic the real-time responsiveness of desktop applications within a browser window.

    To see how these ideas may evolve, and what may emerge after Web 2.0, one need only look to groups such as MIT's Computer Science and Artificial Intelligence Laboratory, the World Wide Web Consortium, Amazon.com, and Google. All of these organizations are working for a smarter Web, and some of their prototype implementations are available on the Web for anyone to try. Many of these projects emphasize leveraging the human intelligence already embedded in the Web in the form of data, metadata, and links between data nodes. Others aim to recruit live humans and apply their intelligence to tasks computers can't handle. But none are ready for prime time.

    The first category of projects is related to the Semantic Web, a vision for a smarter Web laid out in the late 1990s by World Wide Web creator Tim Berners-Lee. The vision calls for enriching every piece of data on the Web with metadata conveying its meaning. In theory, this added context would help Web-based software applications use the data more appropriately.

    My current Web calendar, for example, knows very little about me, except that I have appointments today at 8:30 A.M. and 4:00 P.M. A Semantic Web calendar would not only know my name, but would also have a store of standardized metadata about me, such as "lives in: Las Vegas," "born in: 1967," "likes to eat: Thai food," "belongs to: Stonewall Democrats," and "favorite TV show: Battlestar Galactica." It could then function much more like a human secretary. If I were trying to set up the next Stonewall Democrats meeting, it could sift through the calendars of other members and find a time when we're all free. Or if I asked the calendar to find me a companionable lunch date, it could scan public metadata about the friends, and friends of friends, in my social network, looking for someone who lives nearby, is of a similar age, and appreciates Asian food and sci-fi.

    Alas, there's no such technology yet, partly because of the gargantuan effort that would be required to tag all the Web's data with metadata, and partly because there's no agreement on the right format for metadata itself. But several projects are moving in this direction, including FOAF, short for Friend of a Friend. FOAF files, first designed in 2000 by British software developers Libby Miller and Dan Brickley, are brief personal descriptions written in a standard computer language called the Resource Description Framework (RDF); they contain information such as a person's name, nicknames, e-mail address, homepage URL, and photo links, as well as the names of the people that person knows.
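    For the curious, the sketch below shows how such a description might be assembled programmatically, using the open-source Python library rdflib; the person, e-mail address, and URLs in it are invented placeholders rather than data from any real FOAF file, and the file described in the next paragraph was generated with web forms, not code.

    ```python
    # Minimal illustrative sketch of a FOAF description built with rdflib.
    # All names, addresses, and URLs below are made-up placeholders.
    from rdflib import Graph, Literal, URIRef, BNode
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    g.bind("foaf", FOAF)

    me = URIRef("http://example.org/people/alice#me")   # hypothetical identifier
    g.add((me, RDF.type, FOAF.Person))
    g.add((me, FOAF.name, Literal("Alice Example")))
    g.add((me, FOAF.nick, Literal("alice")))
    g.add((me, FOAF.mbox, URIRef("mailto:alice@example.org")))
    g.add((me, FOAF.homepage, URIRef("http://example.org/blog")))

    friend = BNode()                                    # a person Alice knows
    g.add((friend, RDF.type, FOAF.Person))
    g.add((friend, FOAF.name, Literal("A. Friend")))
    g.add((me, FOAF.knows, friend))

    print(g.serialize(format="xml"))                    # RDF/XML, the format FOAF files use
    ```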

    I generated my own FOAF file this week using the simple forms at a free site called Foaf-a-matic and uploaded it to my blog site. In theory, other people using FOAF-enabled search software such as FOAF Explorer, or "identity hub" websites such as People Aggregator, will now be able to find me more easily.

    Eventually, more may be possible. For example, I could instantly create a network of friends on a new social-networking service simply by importing my FOAF file. But for now, there aren't a lot of ways to put your FOAF file to work.

    Another project attempting to extract more meaning from the Web is Piggy Bank, a joint effort by MIT's Computer Science and Artificial Intelligence Laboratory, MIT Libraries, and the World Wide Web Consortium. Piggy Bank's goal is to lift chunks of important information in data-heavy websites from their surroundings, so that Web surfers can make use of these info chunks in new ways. For example, office address information extracted from LinkedIn, a professional networking site, could be fed into Google Maps, creating a map of my colleagues' places of business. In this way, the Piggy Bank researchers hope, Web users can begin to get a taste of the Semantic Web in action, without having to wait for the authors of the billions of documents on the Web to create metadata. The curious can download a Piggy Bank extension for the Firefox Web browser; once the extension is installed, users can choose from a number of "screenscrapers" that extract information from specific sites like LinkedIn and Flickr (a popular photo-sharing site). Piggy Bank stores this "pure information," such as photos or contact names, inside the Web browser in RDF format, theoretically allowing users to mix data from independent sources to create their own "instant mashups" similar to the LinkedIn-Google Maps example.
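    To make the "instant mashup" idea concrete, here is a rough sketch of what a programmer could do with a handful of extracted records once they have been pulled out of RDF into ordinary data structures: turn office addresses into Google Maps query links. The contact data is invented, and this is not Piggy Bank's own code.

    ```python
    # Illustrative only: hypothetical records of the kind a screenscraper might
    # produce, joined to a mapping service via simple query URLs.
    from urllib.parse import quote_plus

    contacts = [
        {"name": "A. Colleague", "office": "1 Main St, Cambridge, MA"},
        {"name": "B. Colleague", "office": "500 Market St, San Francisco, CA"},
    ]

    def map_link(record):
        """Build a Google Maps query URL for one extracted contact record."""
        return "http://maps.google.com/maps?q=" + quote_plus(record["office"])

    for c in contacts:
        print(c["name"], "->", map_link(c))
    ```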

    Unfortunately, there aren't yet any tools that make it easy for nonprogrammers to reuse the RDF data in such mashups. And in my own tests of Piggy Bank, the screenscrapers failed to activate. I'm sure that's because I missed something in the instructions--but the problem does illustrate how much more work is needed before such tools will be ready for public consumption.

    A second category of post-Web 2.0 projects focuses not on helping machines understand the meaning and the uses of existing Web content, but on recruiting real people to add their intelligence to information before it's used. The best known example is Amazon Mechanical Turk, a kind of high-tech temp agency introduced by the online retailer in 2005. The service allows people with tasks and questions that computers can't handle--for example, spotting inappropriate images in a collection of photos--to hire other Web users to help.

    The employment is extremely temporary--less than an hour per task, in most cases--and the pay is ridiculously low: solutions typically earn the worker only a few cents. But the point isn't to provide Internet addicts with a second income: it's to harness users' brainpower for a few spare moments to carry out simple tasks that remain far beyond the capabilities of artificial-intelligence software. (In fact, Amazon calls its project a form of "artificial artificial intelligence.")

    Some tasks are really marketing or product research in disguise. One questioner, for example, asks, "What would make your e-mail better?" Others offer better illustrations of the logic behind breaking up a big data-classification task and distributing it to hundreds of people. One task, apparently from someone trying to make it possible to share information between various Yellow Pages-style directories, asks users to match categories from one directory--say, "Delicatessens"--with the closest equivalents in another--for example, "Delis" or "Small Restaurants." A computer couldn't tackle such a task without years of training on the mundane facts of human existence, such as the fact that a delicatessen is indeed one form of a small restaurant. A human, however, can find the right matches in seconds.
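    As a rough illustration of how a requester might break such a job into micro-tasks, the sketch below generates one small question per source category. It uses no Amazon API at all; the categories and the two-cent reward are placeholders consistent with the payments described above.

    ```python
    # Hypothetical directory categories; the pairing below only illustrates how a
    # large matching job could be split into per-category micro-tasks for workers.
    source_categories = ["Delicatessens", "Shoe Repair", "Taxidermists"]
    target_categories = ["Delis", "Small Restaurants", "Cobblers", "Pet Services"]

    def make_tasks(sources, targets):
        """One micro-task per source category: ask a worker to pick the closest match."""
        return [
            {
                "question": f"Which of these is closest in meaning to '{s}'?",
                "options": targets,
                "reward_usd": 0.02,   # the article notes payments of a few cents
            }
            for s in sources
        ]

    for task in make_tasks(source_categories, target_categories):
        print(task["question"])
    ```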

    Another project that attempts to persuade humans to add meaning to raw data is the Google Image Labeler. It entices users to label digital photographs according to their content by making the task into a simple game in which contestants must both collaborate and compete. Like Amazon Mechanical Turk, the Image Labeler has a community of fans who enjoy it as a game. And there's nothing wrong with making potentially dull tasks entertaining, if that's what it takes to motivate "workers." But the Image Labeler and the Mechanical Turk will have to grow beyond their toylike demonstration stage before they have a real impact on the Web's usability.

    It's not surprising that observers are reaching for new labels to describe the work going on beyond the boundaries of today's Web 2.0. But most of these projects are so far from producing practical tools--let alone services that could be commercialized--that it's premature to say they represent a "third generation" of Web technology. For that, judging from today's state of the art, we'll need to wait another few years.

    http://www.techreview.com/InfoTech/17845/page2/

    Self-Cooling Microchips

    Silicon ion pump creates a breeze

    As computer chips are crammed with more and more transistors, they run hotter, and traditional cooling mechanisms--heat sinks and fans--are having trouble keeping up. But future chips might cool themselves with a special gadget that uses ionized air and an electric field to create a tiny breeze.

    In a so-called ion pump, a high voltage across two electrodes strips electrons from molecules of oxygen and nitrogen in the air, creating positively charged ions. These ions flow to the negatively charged electrode, dragging along surrounding air molecules and cooling the chip. Researchers from Intel, the University of Washington in Seattle, and Kronos Advanced Technologies of Redmond, WA, say a prototype can cool a two-square-millimeter spot on a surface by 25 ºC. Since the ion pump is made from silicon, it can be constructed as part of the chip-making process. Project leader Alex Mamishev, an electrical engineer at the University of Washington, says he expects the technology to be incorporated into commercial chips within two years.

    http://www.techreview.com/NanoTech/17694/


    GPS That Never Fails

    A breakthrough in vision processing provides a highly accurate way to fill gaps in Global Positioning System availability.

    Drive down a Manhattan street, and your car's navigation system will blink in and out of service. That's because the Global Positioning System (GPS) satellite signals used by car navigation systems and other technologies get blocked by buildings. GPS also doesn't work well indoors, inside tunnels and subway systems, or in caves--a problem for everyone from emergency workers to soldiers.

    But in a recent advance that has not yet been published, researchers at Sarnoff, in Princeton, NJ, say their prototype technology--which uses advanced processing of stereo video images to fill in GPS gaps--can maintain location accuracy to within one meter after a half-kilometer of moving through so-called GPS-denied environments.

    That kind of resolution is a major advance in the field, giving GPS-like accuracy over distances relevant to intermittent service gaps that might be encountered in urban combat or downtown driving. "This is a general research problem in computer vision, but nobody has gotten the kind of accuracy we are getting," says Rakesh Kumar, a computer scientist at Sarnoff. The work was based partly on earlier research done at Sarnoff by David Nister, a computer scientist now at the University of Kentucky.

    Motilal Agrawal, a computer scientist at SRI International, in Menlo Park, CA, which is also developing GPS-denied location technologies, agrees, saying the advance essentially represents a five-fold leap in accuracy. "We haven't seen that reported error rate before," Agrawal says. "That's pretty damned good. For us a meter of error is typical over 100 meters--and to get that over 500 meters is remarkable and quite good."

    The approach uses four small cameras, which might eventually be affixed on a soldier's helmet or on the bumper of a car. Two cameras face forward, and two face backward. When GPS signals fade out, the technology computes location in 3-D space by making calculations from the objects that pass through its 2-D field of view as the camera moves.

    This is a multi-step task. The technology first infers distance traveled by computing how a series of fixed objects "move" in relation to the camera image. Then it adds up these small movements to compute the total distance. But since adding up many small motions can introduce errors over time--a problem called "drift"--the software identifies landmarks, and finds the same landmarks in subsequent frames to correct this drift. This part of the technology is called "visual odometry." Finally, the technology discerns which objects are moving and filters them out to avoid throwing off the calculations. It works even in challenging, cluttered environments, says Kumar. "The essential method is like how people navigate," says Kumar. "When people close their eyes while walking, they will swerve to the left or right. You use your eyesight to know whether you are going straight or turning. Then you use eyesight to recognize landmarks."
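    Sarnoff has not published its implementation, but the accumulate-and-correct logic Kumar describes can be sketched in toy form: sum the small frame-to-frame motion estimates, then snap back to a known position whenever a landmark is re-recognized. The 2-D Python example below is purely illustrative, with made-up motion and landmark values.

    ```python
    import numpy as np

    def integrate_motion(frame_motions, landmark_fixes):
        """Toy 2-D illustration of visual odometry with landmark correction.

        frame_motions: per-frame (dx, dy) estimates from frame-to-frame matching.
        landmark_fixes: dict mapping frame index -> absolute (x, y) position
            recovered by re-recognizing a known landmark.
        """
        pos = np.zeros(2)
        track = []
        for i, step in enumerate(frame_motions):
            pos = pos + np.asarray(step, dtype=float)   # dead reckoning: sum small motions
            if i in landmark_fixes:                     # correct accumulated drift
                pos = np.asarray(landmark_fixes[i], dtype=float)
            track.append(pos.copy())
        return np.array(track)

    track = integrate_motion(
        frame_motions=[(0.5, 0.0)] * 10,    # ten half-metre steps forward (invented)
        landmark_fixes={4: (2.4, 0.1)},     # landmark re-seen at frame 4 (invented)
    )
    print(track[-1])                        # estimated final position
    ```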

    While the general idea has been pursued for years, Sarnoff has achieved the one-meter accuracy milestone only in the past three months--an advance that will be published soon, says Kumar. It's an important advance, says Frank Dellaert, a computer scientist at Georgia Tech. "This is significant," he says. "The reason is that adding these velocities over time accumulates error, and getting this type of accuracy over such a distance means that the 'visual odometry' component of their system is very high quality."

    Kumar says the technology also allows users--whether soldiers, robots, or, eventually, drivers--to build accurate maps of where they have been, and also to communicate with one another to build a common picture of their relative locations.

    Kurt Konolige, Agrawal's research partner at SRI, of which Sarnoff is a subsidiary, says one goal is to reduce the computational horsepower needed to do such intensive processing of video images--something Kumar's group is working on. But if the size and cost could be made low enough, he says, "you could also imagine small devices that people could wear as they moved around a city, say, or inside a large building, that would keep track of their position and guide them to locations."

    The technology, funded by the Office of Naval Research (ONR), is being tested by military units for use in urban combat. Dylan Schmorrow, the ONR program manager, says, "Sarnoff's work is unique and important because their technology adds a relatively low cost method of making visual landmarks with ordinary cameras to allow other sensors to work more accurately."

    Kumar says that while the first priority is to deliver mature versions of the technology to Sarnoff's military sponsors, the next step will be to try to produce a version that can work in the automotive industry. He says the technology has not yet been presented to car companies, but "we plan to do that." He adds that the biggest ultimate commercial application would be to beef up car navigation systems.

     http://www.techreview.com/InfoTech/17841/page2/

    Charging Batteries without Wires

    New MIT research reveals a way to send wireless energy to mobile phones and laptops.

    Small, battery-powered gadgets make powerful computing portable. Unfortunately, there's still a continual need to recharge the batteries of phones, laptops, cameras, and MP3 players by hooking them up to a tangle of wires. Now researchers at MIT have proposed a way to cut the cords by wirelessly supplying power to devices.

    "We are very good at transmitting information wirelessly," says Marin Soljačić, professor of

    physics at MIT. But, he says, historically, it's been much more difficult to transmit energy to power devices in the same way. Soljačić, who was a 2006 TR35 winner (see "2006 Young Innovator"),

    and MIT colleagues Aristeidis Karalis and John Joannopoulos have worked out a theoretical scheme for a wireless-energy transfer that could charge or power devices within a couple of meters of a small power "base station" plugged into an electrical outlet. They presented the approach on Tuesday at the American Institute of Physics's Industrial Physics Forum, in San Francisco.

    The idea of beaming power through the air has been around for nearly two centuries, and it is used to some extent today to power some types of radio-frequency identification (RFID) tags. The phenomenon behind this sort of wireless-energy transfer is called inductive coupling, and it occurs when an electric current passes through wires in, for instance, an RFID reader. When the current flows, it produces a magnetic field around the wires; the magnetic field in turn induces a current in a nearby wire in, for example, an RFID tag. This technique has limited range, however, and because of this, it wouldn't be well suited for powering a roomful of gadgets.

    To create a mid-range wireless-energy solution, the researchers propose an entirely new scheme. In it, a power base station would be plugged into an electrical outlet and emit low-frequency electromagnetic radiation in the range of 4 to 10 megahertz, explains Soljačić. A receiver within a gadget--such as a power-harvesting circuit--can be designed to resonate at the same frequency emitted by the power station. When it comes within a couple of meters of the station, it absorbs the energy. But to a nonresonant device, the radiation is undetectable.

    Importantly, the energy that's accessed by the device is nonradiative--that is, it doesn't propagate over great distances. This is due to the low frequency of the radio waves, says John Pendry, professor of physics at Imperial College, in London. Electromagnetic radiation comes in two flavors: near-field and far-field. The intensity of low-frequency radiation drops quickly with distance from the base station. In other words, the far-field radiation that propagates out in all directions isn't very strong at low frequencies and is therefore essentially useless. (Wi-Fi signals, in comparison, are able to remain strong for tens of meters because they operate at a higher frequency of 2.4 gigahertz.)

    However, the near-field radiation, which stays close to the base station, contains quite a bit of energy. "If you don't do anything with it, it just sits there," says Pendry. "It doesn't leak away." This bound-up energy, which extends for a couple of meters, is extracted when a resonant receiver on a gadget comes within range.
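    The MIT group has not released design details, but the basic act of tuning a receiver to the base station's frequency is ordinary LC-circuit physics. The short calculation below, with assumed component values, shows how a coil and capacitor might be chosen so that a receiver resonates inside the 4-to-10-megahertz band mentioned above.

    ```python
    import math

    # Illustrative only: the 10 uH coil and the 6.5 MHz target are assumptions
    # chosen to fall inside the 4-10 MHz band, not values from the MIT work.
    def resonant_frequency(L_henries, C_farads):
        """Resonant frequency of a simple LC circuit: f = 1 / (2*pi*sqrt(L*C))."""
        return 1.0 / (2.0 * math.pi * math.sqrt(L_henries * C_farads))

    def capacitance_for(L_henries, f_hertz):
        """Capacitance needed to make an LC receiver resonate at frequency f."""
        return 1.0 / ((2.0 * math.pi * f_hertz) ** 2 * L_henries)

    L = 10e-6                       # 10 microhenry receiver coil (assumed)
    C = capacitance_for(L, 6.5e6)   # tune the receiver to 6.5 MHz
    print(f"C = {C * 1e12:.0f} pF -> f = {resonant_frequency(L, C) / 1e6:.2f} MHz")
    ```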

    At this point, the work is still theoretical, but the researchers have filed patents and are working to build a prototype system that might be ready within a year. Even without a prototype, though, the physics behind the concept is sound, says Freeman Dyson, professor of physics at the Institute for Advanced Study, in Princeton, NJ. "It's a nice idea and I have no reason to believe that it won't work."

    Pendry suspects that people might be squeamish about the idea of wireless energy radiating throughout the air. "Whenever there's powerful energy sources, people worry about safety," he says. Depending on the application, he says, either the electric or the magnetic portion of the near-field radiation could be handy. Using the electric field would pose a health risk, and would be better employed in applications in which people aren't nearby, he says. Conversely, using the magnetic field would be much safer and could be implemented just as easily. "I can't think of any reason to worry [about health concerns]," he says, "but people will."

    Soljačić also suspects that the wireless power systems would be safe, based on his calculations and on the known health effects of low-frequency radio waves.

    Ideally, says Soljačić, the system would be about 50 percent as efficient as plugging into an outlet, which would mean that charging a device might take longer. But the vision for this sort of wireless-energy setup, he says, is to place power hubs on the ceiling of each room in the house so that a phone or laptop can be constantly charging from any location in a home.

    http://www.techreview.com/InfoTech/17791/page2/


    Storage System Development as Seen from the High-Performance Computing Rankings (从高性能计算排行榜看存储系统发展)

    With the rapid advance of computer applications, hardware, and networking, storage technology is also developing quickly. Having passed through CPU-centric and then memory-centric stages, computer systems have entered a storage-centric stage of development. As a result, storage systems are gradually ceasing to be mere appendages of individual computers or servers and are becoming relatively independent systems. Storage and computing nevertheless remain inseparable, especially now that data processing sits at the center of most workloads. Looking ahead, the high-performance computing (HPC) field exerts a strong pull on the direction of storage technology. Given this influence, this article analyzes likely trends in storage systems mainly from the perspective of the HPC market.

    Drawing on the TOP500 analysis report released by the international high-performance computing community at the end of 2005, the following sections survey the technical trends in global high-performance computing and in storage systems, and offer an overall outlook for storage technology.

    Over the past two decades high-performance computing has developed rapidly, and its momentum shows no sign of slowing. Its history reveals several clear trends:

    - x86 and compatible processors have steadily taken a larger share of the market and now hold an overwhelming advantage.
    - The Linux operating system has continued to erode the market share of other operating systems and has become the standard system choice in HPC.
    - Industrial applications have become the dominant workload, showing that HPC demand and application technology are increasingly vigorous and mature. Data processing has become the core problem in fields such as geophysics, meteorology, and bioinformatics; Microsoft's push into HPC is clear evidence of this and deserves close attention.
    - After a long period of development, the cluster architecture has become the mainstream system structure, with an overwhelming advantage.
    - Gigabit Ethernet has become the interconnect of choice, holding nearly half the market; with the spread of 10-gigabit Ethernet, Ethernet as an interconnect technology deserves close attention.
    - As hardware costs fall rapidly, systems keep growing in scale, and machines with hundreds of compute nodes are now commonplace.

    Because high-performance computing will keep developing, these trends will persist for some time. With application demand and application technology maturing, they place significant demands on mid- and high-end storage technology. Their implications are analyzed below.

    - Application domains are broadening, with particularly fast growth in industrial fields. Data-processing-centric domains such as geophysics, semiconductors, and meteorology are rising quickly. The maturing of HPC together with the enormous demand for data processing is directly driving the development of storage systems for HPC. As application technology matures, the need for sustained operation and for integration between application systems and storage systems also grows; the openness and ease of integration of IP/Ethernet technology have become the direct driving force behind IP network storage.
    - As cluster technology matures and becomes mainstream, demand for cluster file systems is growing. A cluster file system must support efficient transfer of large-scale data and efficient data sharing among many compute nodes. In addition, repetitive installation and configuration of compute nodes reduces the operating efficiency of clusters, so storage-centric techniques for managing clusters efficiently have become a research hotspot.
    - Ethernet has become the mainstream HPC interconnect (used by 49.8% of systems), and its share is still growing. On one hand this reflects the maturing of Ethernet technology (aggregate bandwidth, per-port bandwidth and cost, degree of integration); on the other it shows that even HPC users are sensitive to total system cost, including acquisition, management, and operation. This will also influence the choice of storage technology, especially as storage accounts for a growing share of total system cost. Unifying the interconnect and the storage network would greatly reduce total cost and raise the level of system integration; the maturity of IP/Ethernet technology, in particular its openness and ease of integration, makes IP network storage especially attractive.
    - Large-scale systems: as software and hardware costs fall, the number of computing devices keeps rising, and environments with tens or even hundreds of servers are common. This raises several problems:
      - The growing number of machines drives up not only the power consumption of the computing systems themselves but also, in proportion, that of machine-room cooling, while utilization actually declines. This combination of high cost and low utilization limits both the expansion of existing large systems into new application areas and further growth in system scale. It has directly spurred the separation of storage from computation and, on that basis, technologies for deploying computing resources: by separating the two, computing devices can be pooled as resources and scheduled on demand, greatly raising utilization and reducing the total number of machines required.
      - Management is an increasingly serious problem. The cost of installing, configuring, and maintaining computers has become a non-negligible component of total cost. To address system management and maintenance, storage technologies built on the separation of storage and computation, such as layered snapshots and virtual shared storage-volume management, together with related backup and recovery techniques, are rising rapidly.
      - The data path has become a bottleneck. Large numbers of compute nodes and highly concurrent data access put enormous pressure on storage systems, requiring storage networks and storage systems that deliver high aggregate efficiency and bandwidth. The need for high aggregate bandwidth directly drives cluster storage, and the maturity of IP/Ethernet and its high aggregate bandwidth have allowed IP storage to develop. The spread of 10-gigabit Ethernet will to some extent solve Ethernet's low per-port bandwidth and further strengthen the aggregate-bandwidth advantage of IP networks.
      - Data volumes keep growing. Numerous compute nodes have enormous processing capacity, and fields such as geophysics, meteorology, and remote sensing generate huge volumes of data, so storage systems must provide enormous capacity and rich data-management functions. This is driving a wave of research into petabyte-scale storage. The gap between large, high-performance storage and attractive price/performance further strengthens the trend toward separating storage from data; on that basis, both storage and data can be pooled as resources, and object-based and hierarchical storage techniques can deepen the layering of storage systems, promoting centralized data management and on-demand data deployment.

    To meet these application demands, storage systems have entered a period of rapid development along several main directions: virtual storage systems, service deployment systems, cluster file systems, and data management systems.

    - Virtual storage systems: vendors such as IBM, HP, StoreAge, VERITAS, EMC, and StorageTek have made storage virtualization (virtual disk storage, shared virtual disk arrays, and virtual storage management) a core technology.
    - Service deployment systems: examples include Oceano, developed at IBM's research labs; SODA, studied at Purdue University; and the COD system proposed by Duke University.
    - Cluster file systems: examples abroad include Lustre (Linux Cluster), a cluster file system designed and implemented over a storage area network; IBM's Storage Tank and the TotalStorage SAN File System built on it; and Panasas's PanFS.
    - Data management systems: EMC and other vendors have launched management systems based on the ILM (information lifecycle management) concept; data search and indexing are also developing quickly, and both Google and Microsoft have released desktop search tools.

    Correspondingly, the National Research Center for High Performance Computers, hosted at the Institute of Computing Technology of the Chinese Academy of Sciences, has developed along these directions the Blue Whale (蓝鲸) virtual storage system, service deployment system, cluster file system, and data backup system, all with independent intellectual property rights.

    As society develops rapidly and data volumes in every industry keep growing, research and development in storage technology will heat up quickly. Looking ahead, with high-performance computing leading the way and with technologies such as iSCSI and SATA/SAS, storage systems will develop rapidly in the directions of IP storage, deployment, virtualization, clustering, object-based storage, and data management.

    http://www.ict.ac.cn/diffusive/channel/detail1274.asp


    Hydrogen storage goes metal-free

    Researchers in Canada have developed a new solid material that can store and release hydrogen near room temperature without involving a transition metal. The discovery could lead to the development of low-cost and lightweight materials for the onboard storage of hydrogen fuel in cars (Science 314 1124).

    Hydrogen is often touted as an environmentally friendly fuel for road vehicles of the future. When consumed in a fuel-cell-powered electric car, it produces nothing more than pure water as a by-product. However, many technological challenges remain before it can be used commercially. In particular, hydrogen has a low energy density compared to conventional fuels and therefore it must be stored as a liquid or an extremely high-pressure gas to ensure that reasonable distances can be travelled before refuelling.

    These storage methods are both expensive and cumbersome, and some researchers believe that it would be better to store hydrogen within solid materials that can absorb large quantities of the gas. In such materials a chemical reaction splits the hydrogen molecule into two hydrogen atoms at the surface of the material. The atoms then migrate into the bulk of the material and form a metal-hydride compound. The hydrogen can be released by heating the material.

    Current materials that can easily absorb and discharge hydrogen near room temperature contain transition metals, and the storage process must be catalysed by expensive precious metals such as platinum. This makes them too heavy and too expensive for commercial use.

    But now Douglas Stephan and colleagues at the University of Windsor have developed the first non-metallic material that can absorb and store hydrogen at room temperature, releasing the gas when heated above 100 ºC. The material contains pairs of boron and phosphorus atoms, which are separated by a ring of carbon atoms. This structure has a net neutral charge, but the boron and phosphorus atoms carry a positive and negative charge respectively. The researchers believe that this property allows the two atoms to work together to split the gaseous hydrogen molecules into two hydrogen atoms, which are then covalently bound within the material. According to Stephan, this mechanism is known as "heterolytic cleavage" and has previously only been observed in transition-metal complexes.

    Although the metal-free material offers hope of lighter and cheaper storage materials, Stephan admits that there is still a long way to go. Crucially, the material stores less than 0.25% of its weight in hydrogen, which is far off the US Department of Energy's target of 6% set for 2010 and the 2.5% achieved by some transition-metal materials. Stephan describes the DOE target as a "challenging problem" and the researchers are currently exploring alternative molecular structures.
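    To put those percentages in perspective, the back-of-the-envelope calculation below assumes a fuel-cell car needs roughly 5 kilograms of hydrogen on board (an assumption for illustration, not a figure from the paper) and asks how much storage material each gravimetric capacity would imply.

    ```python
    # Rough illustration of what the storage capacities quoted above imply.
    # The 5 kg onboard hydrogen load is an assumed figure for illustration only.
    hydrogen_needed_kg = 5.0

    capacities = [
        ("new metal-free material", 0.0025),          # < 0.25 wt% (article)
        ("typical transition-metal hydride", 0.025),  # ~2.5 wt% (article)
        ("2010 DOE target", 0.06),                    # 6 wt% (article)
    ]

    for label, wt_fraction in capacities:
        material_kg = hydrogen_needed_kg / wt_fraction
        print(f"{label}: ~{material_kg:,.0f} kg of material to carry "
              f"{hydrogen_needed_kg} kg of hydrogen")
    ```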

    http://physicsweb.org/articles/news/10/11/18/1
