HyperTransport





HyperTransport Consortium logo


HyperTransport (HT), formerly known as Lightning Data Transport (LDT), is a technology for interconnection of computer processors. It is a bidirectional serial/parallel high-bandwidth, low-latency point-to-point link that was introduced on April 2, 2001.[1] The HyperTransport Consortium is in charge of promoting and developing HyperTransport technology.


HyperTransport is best known as the system bus architecture of modern AMD central processing units (CPUs) and the associated Nvidia nForce motherboard chipsets. HyperTransport has also been used by IBM and Apple for the Power Mac G5 machines, as well as a number of modern MIPS systems.


The current specification, HTX 3.1, remains competitive with 2014 high-speed DDR4 RAM (2666 and 3200 MT/s, about 10.4 GB/s and 12.8 GB/s) and slower terabyte-scale storage technology (around 1 GB/s, similar to high-end PCIe SSDs and ULLtraDIMM flash): a wider range of RAM speeds on a common CPU bus than any Intel front-side bus. Intel technologies require each speed range of RAM to have its own interface, resulting in a more complex motherboard layout but with fewer bottlenecks. HTX 3.1 at 25.6 GB/s per direction can serve as a unified bus for as many as four DDR4 sticks running at the fastest proposed speeds; beyond that, DDR4 RAM may require two or more HTX 3.1 buses, diminishing its value as a unified transport.




Contents





  • 1 Overview
    • 1.1 Links and rates
    • 1.2 Packet-oriented
    • 1.3 Power-managed
  • 2 Applications
    • 2.1 Front-side bus replacement
    • 2.2 Multiprocessor interconnect
    • 2.3 Router or switch bus replacement
    • 2.4 Co-processor interconnect
    • 2.5 Add-on card connector (HTX and HTX3)
    • 2.6 Testing
  • 3 Infinity Fabric
  • 4 Implementations
  • 5 Frequency specifications
  • 6 Name
  • 7 See also
  • 8 References
  • 9 External links




Overview



Links and rates


HyperTransport comes in four versions (1.x, 2.0, 3.0, and 3.1), which run from 200 MHz to 3.2 GHz. It is also a DDR or "double data rate" connection, meaning it sends data on both the rising and falling edges of the clock signal. This allows for a maximum data rate of 6400 MT/s when running at 3.2 GHz. The operating frequency is autonegotiated with the motherboard chipset (northbridge).


HyperTransport supports an autonegotiated bit width, ranging from 2 to 32 bits per link; there are two unidirectional links per HyperTransport bus. With the advent of version 3.1, using full 32-bit links at the full HyperTransport 3.1 operating frequency, the theoretical transfer rate is 25.6 GB/s (3.2 GHz × 2 transfers per clock cycle × 32 bits per link) per direction, or 51.2 GB/s aggregated throughput, making it faster than most existing bus standards for PC workstations and servers, as well as most bus standards for high-performance computing and networking.
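The arithmetic behind these figures can be checked directly. The following minimal Python sketch (the function name is ours, for illustration only) computes per-direction and aggregate bandwidth from a link's clock and width:

    def ht_bandwidth_gb_s(clock_ghz, width_bits):
        """Bandwidth of one HyperTransport bus in GB/s:
        clock x 2 transfers per cycle (DDR) x link width in bytes."""
        per_direction = clock_ghz * 2 * width_bits / 8  # one direction
        return per_direction, 2 * per_direction         # two unidirectional links

    print(ht_bandwidth_gb_s(3.2, 32))  # (25.6, 51.2): HyperTransport 3.1, 32-bit
    print(ht_bandwidth_gb_s(3.2, 16))  # (12.8, 25.6): same clock, 16-bit link

These values match the frequency specifications table below.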


Links of various widths can be mixed in a single system configuration, as in one 16-bit link to another CPU and one 8-bit link to a peripheral device, which allows a wider interconnect between CPUs and a lower-bandwidth interconnect to peripherals as appropriate. The technology also supports link splitting, where a single 16-bit link can be divided into two 8-bit links. It typically has lower latency than other solutions due to its lower overhead.


Electrically, HyperTransport is similar to low-voltage differential signaling (LVDS) operating at 1.2 V.[2] HyperTransport 2.0 added post-cursor transmitter deemphasis. HyperTransport 3.0 added scrambling and receiver phase alignment as well as optional transmitter precursor deemphasis.



Packet-oriented


HyperTransport is packet-based, where each packet consists of a set of 32-bit words, regardless of the physical width of the link. The first word in a packet always contains a command field. Many packets contain a 40-bit address. An additional 32-bit control packet is prepended when 64-bit addressing is required. The data payload is sent after the control packet. Transfers are always padded to a multiple of 32 bits, regardless of their actual length.
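As an illustration of the padding rule, here is a short Python sketch (the helper name is ours; the bit layout of command and address fields is defined by the specification and not reproduced here):

    def pad_to_32bit_words(payload: bytes) -> bytes:
        """Pad a transfer to a whole number of 32-bit (4-byte) words,
        as HyperTransport requires regardless of actual payload length."""
        remainder = len(payload) % 4
        return payload + b"\x00" * (4 - remainder) if remainder else payload

    print(len(pad_to_32bit_words(bytes(5))))  # 8: five bytes pad to two words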


HyperTransport packets enter the interconnect in segments known as bit times. The number of bit times required depends on the link width. HyperTransport also supports system management messaging, signaling interrupts, issuing probes to adjacent devices or processors, I/O transactions, and general data transactions. Two kinds of write commands are supported: posted and non-posted. Posted writes do not require a response from the target and are usually used for high-bandwidth traffic such as uniform memory access or direct memory access transfers. Non-posted writes require a response from the receiver in the form of a "target done" response. Reads also require a response, containing the read data. HyperTransport supports the PCI consumer/producer ordering model.
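The relationship between packet size, link width, and bit times is straightforward: each bit time moves one bit per lane, so narrower links need proportionally more bit times. A sketch (illustrative only):

    def bit_times_needed(packet_bits: int, link_width_bits: int) -> int:
        """Bit times to transfer a packet: each bit time carries
        link_width_bits across the link (ceiling division)."""
        return -(-packet_bits // link_width_bits)

    print(bit_times_needed(96, 32))  # 3 bit times for a three-word packet
    print(bit_times_needed(96, 8))   # 12 bit times on an 8-bit link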



Power-managed


HyperTransport also facilitates power management as it is compliant with the Advanced Configuration and Power Interface specification. This means that changes in processor sleep states (C states) can signal changes in device states (D states), e.g. powering off disks when the CPU goes to sleep. HyperTransport 3.0 added further capabilities to allow a centralized power management controller to implement power management policies.
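As a purely hypothetical illustration of such a policy (this mapping is invented for the example and comes from neither the HyperTransport nor the ACPI specification), a power management controller might tie device D-states to processor C-states like this:

    # Hypothetical policy: deeper CPU sleep states trigger lower-power
    # device states, e.g. powering off disks when the CPU reaches C3.
    C_STATE_TO_D_STATE = {
        "C0": "D0",  # CPU active: devices fully on
        "C1": "D0",  # halt: devices stay on
        "C2": "D1",  # stop-clock: devices drop to a light low-power state
        "C3": "D3",  # deep sleep: devices powered off
    }

    def device_state_for(c_state: str) -> str:
        """Return the device D-state this example policy assigns."""
        return C_STATE_TO_D_STATE.get(c_state, "D0")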



Applications



Front-side bus replacement


The primary use for HyperTransport is to replace the Intel-defined front-side bus, which is different for every type of Intel processor. For instance, a Pentium cannot be plugged into a PCI Express bus directly, but must first go through an adapter to expand the system. The proprietary front-side bus must connect through adapters for the various standard buses, like AGP or PCI Express. These are typically included in the respective controller functions, namely the northbridge and southbridge.


In contrast, HyperTransport is an open specification, published by a multi-company consortium. A single HyperTransport adapter chip will work with a wide spectrum of HyperTransport enabled microprocessors.


AMD uses HyperTransport to replace the front-side bus in their Opteron, Athlon 64, Athlon II, Sempron 64, Turion 64, Phenom, Phenom II and FX families of microprocessors.



Multiprocessor interconnect


Another use for HyperTransport is as an interconnect for NUMA multiprocessor computers. AMD uses HyperTransport with a proprietary cache coherency extension as part of their Direct Connect Architecture in their Opteron and Athlon 64 FX (Dual Socket Direct Connect (DSDC) Architecture) line of processors. The HORUS interconnect from Newisys extends this concept to larger clusters. The Aqua device from 3Leaf Systems virtualizes and interconnects CPUs, memory, and I/O.



Router or switch bus replacement


HyperTransport can also be used as a bus in routers and switches. Routers and switches have multiple network interfaces and must forward data between those ports as fast as possible. For example, a four-port, 1000 Mbit/s Ethernet router needs a maximum of 8000 Mbit/s of internal bandwidth (1000 Mbit/s × 4 ports × 2 directions), which HyperTransport greatly exceeds. However, a 4 + 1 port 10 Gb router would require 100 Gbit/s of internal bandwidth. Adding eight 802.11ac antennas and the 60 GHz WiGig standard (802.11ad) pushes the requirement higher still, making HyperTransport more attractive (with anywhere between 20 and 24 lanes used for the needed bandwidth).
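These internal-bandwidth figures assume the worst case of every port receiving and transmitting at line rate simultaneously, which a short Python check makes explicit (the function name is ours):

    def internal_bandwidth_gbit_s(ports: int, rate_gbit_s: float) -> float:
        """Worst-case internal bandwidth: all ports send and receive
        at line rate at once (x2 for the two directions)."""
        return ports * rate_gbit_s * 2

    print(internal_bandwidth_gbit_s(4, 1.0))   # 8.0 Gbit/s: four GigE ports
    print(internal_bandwidth_gbit_s(5, 10.0))  # 100.0 Gbit/s: 4 + 1 ports of 10 Gb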



Co-processor interconnect


The issue of latency and bandwidth between CPUs and co-processors has usually been the major stumbling block to their practical implementation. Recently, co-processors such as FPGAs have appeared that can access the HyperTransport bus and become first-class citizens on the motherboard. Current generation FPGAs from both main manufacturers (Altera and Xilinx) directly support the HyperTransport interface, and have IP Cores available. Companies such as XtremeData, Inc. and DRC take these FPGAs (Xilinx in DRC's case) and create a module that allows FPGAs to plug directly into the Opteron socket.


AMD started an initiative named Torrenza on September 21, 2006 to further promote the usage of HyperTransport for plug-in cards and coprocessors. This initiative opened their "Socket F" to plug-in boards such as those from XtremeData and DRC.



Add-on card connector (HTX and HTX3)




Connectors from top to bottom: HTX, PCI-Express for riser card, PCI-Express


A connector specification that allows a slot-based peripheral to have direct connection to a microprocessor using a HyperTransport interface was released by the HyperTransport Consortium. It is known as HyperTransport eXpansion (HTX). Using a reversed instance of the same mechanical connector as a 16-lane PCI-Express slot (plus an x1 connector for power pins), HTX allows development of plug-in cards that support direct access to a CPU and DMA to the system RAM. The initial card for this slot was the QLogic InfiniPath InfiniBand HCA. IBM and HP, among others, have released HTX compliant systems.


The original HTX standard is limited to 16 bits and 800 MHz.[3]


In August 2008, the HyperTransport Consortium released HTX3, which extends the HTX clock rate to 2.6 GHz (5.2 GT/s, or about 10.4 GB/s per direction on a 16-bit link) and retains backwards compatibility.[4]



Testing


The "DUT" test connector[5] is defined to enable standardized functional test system interconnection.



Infinity Fabric


Infinity Fabric is a superset of HyperTransport announced by AMD in 2016 as an interconnect for its GPUs and CPUs. It can also be used as an interchip interconnect for communication between CPUs and GPUs.[6][7] The company said Infinity Fabric would scale from 30 GB/s to 512 GB/s and be used in the Zen-based CPUs and Vega GPUs subsequently released in 2017.



Implementations



  • AMD AMD64 and Direct Connect Architecture based CPUs


  • AMD chipsets
    • AMD-8000 series

    • AMD 480 series

    • AMD 580 series

    • AMD 690 series

    • AMD 700 series

    • AMD 800 series

    • AMD 900 series



  • ATI chipsets
    • ATI Radeon Xpress 200 for AMD processor

    • ATI Radeon Xpress 3200 for AMD processor



  • Broadcom (then ServerWorks) HyperTransport SystemI/O controllers
    • HT-2000

    • HT-2100



  • Cisco QuantumFlow Processors

  • ht_tunnel from OpenCores project (MPL licence)


  • IBM CPC925 and CPC945 (PowerPC 970 northbridges) chipsets


  • Loongson-3 MIPS processor


  • Nvidia nForce chipsets
    • nForce Professional MCPs (Media and Communication Processor)


    • nForce 3 series


    • nForce 4 series

    • nForce 500 series

    • nForce 600 series

    • nForce 700 series

    • nForce 900 series



  • PMC-Sierra RM9000X2 MIPS CPU


  • Power Mac G5[8]


  • Raza Thread Processors

  • SiByte MIPS CPUs from Broadcom


  • Transmeta TM8000 Efficeon CPUs


  • VIA chipsets K8 series


Frequency specifications





HyperTransport version | Year | Max. HT frequency | Max. link width | Max. aggregate bandwidth, bi-directional (GB/s) | 16-bit unidirectional (GB/s) | 32-bit unidirectional* (GB/s)
1.0 | 2001 | 800 MHz | 32-bit | 12.8 | 3.2 | 6.4
1.1 | 2002 | 800 MHz | 32-bit | 12.8 | 3.2 | 6.4
2.0 | 2004 | 1.4 GHz | 32-bit | 22.4 | 5.6 | 11.2
3.0 | 2006 | 2.6 GHz | 32-bit | 41.6 | 10.4 | 20.8
3.1 | 2008 | 3.2 GHz | 32-bit | 51.2 | 12.8 | 25.6

* AMD Athlon 64, Athlon 64 FX, Athlon 64 X2, Athlon X2, Athlon II, Phenom, Phenom II, Sempron, Turion series and later use one 16-bit HyperTransport link. AMD Athlon 64 FX (1207) and Opteron processors use up to three 16-bit HyperTransport links. Common clock rates for these processor links are 800 MHz to 1 GHz (older single- and multi-socket systems on 754/939/940 links) and 1.6 GHz to 2.0 GHz (newer single-socket systems on AM2+/AM3 links; most newer CPUs use 2.0 GHz). While HyperTransport itself is capable of 32-bit links, that width is not currently used by any AMD processor. Some chipsets do not even use the full 16-bit width of the processors: the Nvidia nForce3 150, nForce3 Pro 150, and ULi M1689 use a 16-bit HyperTransport downstream link but limit the upstream link to 8 bits.



Name


There has been some marketing confusion between the use of HT referring to HyperTransport and the later use of HT to refer to Intel's Hyper-Threading feature on some Pentium 4-based and the newer Nehalem and Westmere-based Intel Core microprocessors. Hyper-Threading is officially known as Hyper-Threading Technology (HTT) or HT Technology. Because of this potential for confusion, the HyperTransport Consortium always uses the written-out form: "HyperTransport."



See also


  • Elastic interface bus

  • Fibre Channel

  • Front-side bus

  • Intel QuickPath Interconnect

  • List of device bandwidths

  • PCI Express

  • RapidIO

  • AGESA


References




  1. ^ "API NetWorks Accelerates Use of HyperTransport Technology With Launch of Industry's First HyperTransport Technology-to-PCI Bridge Chip" (Press release). HyperTransport Consortium. 2001-04-02. Archived from the original on 2006-10-10..mw-parser-output cite.citationfont-style:inherit.mw-parser-output .citation qquotes:"""""""'""'".mw-parser-output .citation .cs1-lock-free abackground:url("//upload.wikimedia.org/wikipedia/commons/thumb/6/65/Lock-green.svg/9px-Lock-green.svg.png")no-repeat;background-position:right .1em center.mw-parser-output .citation .cs1-lock-limited a,.mw-parser-output .citation .cs1-lock-registration abackground:url("//upload.wikimedia.org/wikipedia/commons/thumb/d/d6/Lock-gray-alt-2.svg/9px-Lock-gray-alt-2.svg.png")no-repeat;background-position:right .1em center.mw-parser-output .citation .cs1-lock-subscription abackground:url("//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Lock-red-alt-2.svg/9px-Lock-red-alt-2.svg.png")no-repeat;background-position:right .1em center.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registrationcolor:#555.mw-parser-output .cs1-subscription span,.mw-parser-output .cs1-registration spanborder-bottom:1px dotted;cursor:help.mw-parser-output .cs1-ws-icon abackground:url("//upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/12px-Wikisource-logo.svg.png")no-repeat;background-position:right .1em center.mw-parser-output code.cs1-codecolor:inherit;background:inherit;border:inherit;padding:inherit.mw-parser-output .cs1-hidden-errordisplay:none;font-size:100%.mw-parser-output .cs1-visible-errorfont-size:100%.mw-parser-output .cs1-maintdisplay:none;color:#33aa33;margin-left:0.3em.mw-parser-output .cs1-subscription,.mw-parser-output .cs1-registration,.mw-parser-output .cs1-formatfont-size:95%.mw-parser-output .cs1-kern-left,.mw-parser-output .cs1-kern-wl-leftpadding-left:0.2em.mw-parser-output .cs1-kern-right,.mw-parser-output .cs1-kern-wl-rightpadding-right:0.2em


  2. ^ Overview (PDF), Hyper transport, archived from the original (PDF) on 2011-07-16.


  3. ^ Emberson, David; Holden, Brian (2007-12-12). "HTX specification" (PDF): 4. Archived from the original (PDF) on 2012-03-08. Retrieved 2008-01-30.


  4. ^ Emberson, David (2008-06-25). "HTX3 specification" (PDF): 4. Archived from the original (PDF) on 2012-03-08. Retrieved 2008-08-17.


  5. ^ Holden, Brian; Meschke, Michael ‘Mike’; Abu-Lebdeh, Ziad; D’Orfani, Renato. "DUT Connector and Test Environment for HyperTransport" (PDF). Archived from the original (PDF) on 2006-09-03.


  6. ^ AMD. "AMD_Presenation_EPYC". Archived from the original on 2017-08-21. Retrieved 24 May 2017.


  7. ^ Merritt, Rick (13 December 2016). "AMD Clocks Ryzen at 3.4 GHz+". EE Times. Retrieved 17 January 2017.


  8. ^ Steve Jobs, Apple (25 June 2003). "WWDC 2003 Keynote". YouTube. Retrieved 2009-10-16.



External links



  • HyperTransport Consortium (home).


  • Technology, HyperTransport.


  • Technical Specifications, HyperTransport, archived from the original on 2008-08-22.


  • Center of Excellence for HyperTransport, DE: Uni HD.







