The Evolution of MicroElectronics


The last 25 years have witnessed astonishing advances in the fields of microelectronics and computation. The first integrated-circuit microprocessor, the Intel 4004, introduced in 1971, could perform roughly 5000 binary-coded decimal additions per second while consuming about 10 W of power (~500 additions per joule), whereas modern microprocessors can perform ~3 × 10^6 additions per joule. The 1997 National Technology Roadmap for Semiconductors (1) calls for an additional factor of 10^3 increase in computational efficiency by the year 2012. If this goal is attained, the performance of the silicon-based integrated circuit, measured as energy consumed per operation, will have improved by nearly seven orders of magnitude in 40 years within a single manufacturing paradigm.

Although complementary metal oxide semiconductor (CMOS) technology is predicted by many researchers to run into significant physical limitations shortly after 2010 (2), the energy cost of an addition operation will still be nowhere near any fundamental physical limit. A crude estimate of the energy required to add two 10-digit decimal numbers, based on a thermodynamic analysis of nonreversible Boolean logic steps (3, 4), is ~100 kT ln(2), which implies that ~3 × 10^18 additions per joule can be performed at room temperature without any reversible steps. Thus, roughly eight orders of magnitude of improvement in computational energy efficiency remain potentially available beyond the limits of CMOS technology, even in a nonreversible machine. Achieving these further advances will require a totally different type of computational machinery, but knowing that such a system is possible in principle provides a strong incentive to hunt for it. The requirement for inventing a new technology paradigm has created exciting research opportunities for physical and biological scientists as well as for electrical engineers. Indeed, much of the current interest in interdisciplinary research in areas such as nanofabrication, self-assembly, and molecular electronics is being driven by this search for a new archetype computer.
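To make the arithmetic above concrete, the short Python sketch below recomputes the thermodynamic estimate. The ~100 kT ln(2) figure comes from the text; the room temperature of 300 K and the script itself are purely illustrative.

```python
import math

# Boltzmann constant (J/K) and an assumed room temperature (K)
k_B = 1.380649e-23
T = 300.0

# Estimate from the text: adding two 10-digit decimal numbers in a
# nonreversible Boolean logic machine dissipates ~100 kT ln(2).
energy_per_addition = 100 * k_B * T * math.log(2)  # joules

additions_per_joule = 1.0 / energy_per_addition
print(f"Energy per addition: {energy_per_addition:.2e} J")
print(f"Thermodynamic limit: {additions_per_joule:.1e} additions/J")
```

Running this gives roughly 2.9 × 10^-19 J per addition, or about 3.5 × 10^18 additions per joule, consistent with the ~3 × 10^18 figure quoted above.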

A number of alternatives to standard Si-based CMOS devices have been proposed, including single-electron transistors, quantum cellular automata, neural networks, and molecular logic devices. A common theme that underlies many of these schemes is the push to fabricate logic devices on the nanometer length scale. Such dimensions are more commonly associated with molecules than with integrated circuits, and it is not surprising that chemically assembled (or bottom-up) configurations, rather than artificially drawn (or top-down) structures created with lithography, are expected to play an increasingly important role in the fabrication of electronic devices and circuits. We define chemical assembly as any manufacturing process whereby various electronic components, such as wires, switches, and memory elements, are chemically synthesized (a process often called “self-assembly”) and then chemically connected together (by a process of “self-ordering”) to form a working computer or other electronic circuit.

Several problems arise when such an assembly is used for a computational task. Some fraction of the discrete devices will not be operational because of the statistical yields of the chemical syntheses used to make them, but it will not be feasible to test them all to select out the bad ones. In addition, the system will suffer an inevitable, and possibly large, amount of uncertainty in the connectivity of the devices. Given these problems, how does one communicate with the system from the outside world in a reliable and predictable way and be assured that it is performing error-free computations? Furthermore, because one goal of nanoscale technology is to provide a huge number (for example, a mole) of devices in a system, how does one impose an organization that allows the entire ensemble to operate efficiently? A self-ordering process is likely to produce only fairly regular structures with low information content, whereas real computers built today have great complexity imposed by human designers. A chemically assembled machine must nevertheless be able to reproduce the arbitrary complexity demanded for general-purpose computation.
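As a back-of-the-envelope illustration of the yield problem described above, the following Python sketch (a toy model with assumed yields, not anything from the original article) computes the probability that an assembly is entirely defect-free when device failures are independent:

```python
# Toy model: with any realistic chemical yield, a large assembly is
# virtually guaranteed to contain defective devices, so testing and
# discarding bad assemblies cannot scale.

def prob_all_working(yield_per_device: float, n_devices: int) -> float:
    """Probability that every device in an n-device assembly works,
    assuming independent failures."""
    return yield_per_device ** n_devices

for p in (0.999, 0.9999):          # assumed per-device yields
    for n in (10**3, 10**6, 10**9):
        print(f"yield={p}, devices={n:>10}: "
              f"P(defect-free) = {prob_all_working(p, n):.3g}")
```

Even at 99.99% per-device yield, an assembly of a million devices is essentially certain to contain defects, so the defects must be handled architecturally rather than eliminated by manufacturing.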

In engineering, the answer to low but nonzero failure rates is to design redundancy into the system. The history of integrated-circuit technology has been that wiring and interconnects have become increasingly expensive relative to active devices. Should nanotechnology give us extraordinarily “cheap” but occasionally defective devices, then nearly all of the expense will shift to the wires and connections.

Recent research at Hewlett-Packard (HP) Laboratories with an experimental computer, code-named “Teramac,” has illuminated several of these issues. Although Teramac was constructed with conventional silicon integrated-circuit technology, many of the problems associated with this machine are similar to the challenges faced by scientists exploring nanoscale paradigms for electronic computation. To keep construction costs as low as possible, the builders of Teramac intentionally used components that were cheap but defective, and inexpensive but error-prone technologies were used to connect the components. Because of the physical architecture chosen to implement powerful software algorithms, Teramac could be configured into a variety of extremely capable parallel computers, even in the presence of all the defects. Thus, we define defect tolerance as the capability of a circuit to operate as desired without physically repairing or removing random mistakes incorporated into the system during the manufacturing process. The major surprises of the Teramac project were that the compiling time for new logical configurations was linear in the number of resources used and that the execution time for many algorithms was remarkably fast, given the large number of defects in the machine. The architecture of Teramac and its implementation of defect tolerance are relevant to possible future chemically assembled circuits.
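The following toy Python sketch (hypothetical; it does not reproduce Teramac's actual compiler or architecture) illustrates the core defect-tolerance idea of mapping defects once and then configuring a logical design onto only the known-good resources:

```python
import random

# Hypothetical illustration of defect tolerance: test once to build a
# defect map, then allocate only known-good resources at compile time.

random.seed(1)
N_BLOCKS = 1000
DEFECT_RATE = 0.03  # assumed: 3% of configurable blocks are bad

# Step 1: build a defect map by testing each physical resource.
defect_map = [random.random() < DEFECT_RATE for _ in range(N_BLOCKS)]
good_blocks = [i for i, bad in enumerate(defect_map) if not bad]

# Step 2: "compile" a design by assigning each logical element to the
# next free good block; the cost is linear in the resources used,
# echoing the linear compile times reported for Teramac.
def place(n_logical: int) -> list[int]:
    if n_logical > len(good_blocks):
        raise RuntimeError("not enough working resources")
    return good_blocks[:n_logical]

mapping = place(900)
print(f"{len(good_blocks)} of {N_BLOCKS} blocks usable; "
      f"placed 900 logical elements, e.g. element 0 -> block {mapping[0]}")
```

The point of the sketch is the order of operations: defects are discovered and catalogued up front, and the configuration step simply never uses them, so no physical repair is required.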

 
