Kyle Beckmeyer, Product Marketing Manager, Timing Products, Silicon Labs
The rapid proliferation of streaming video, IoT, social media, cloud-based enterprise software, and the upcoming adoption of 5G wireless are collectively driving the need for higher-bandwidth data centers optimized to run a complex multitude of different tasks and applications. Traditionally, the rollout of new software and service offerings has depended on new hardware being deployed in data centers: new offerings had to be aligned with the introduction of new servers, storage and switches, often on a two-year refresh cycle.
The rate of new services being introduced for cloud computing, software as a service (SaaS) and web services is now outpacing this fixed hardware upgrade cycle, presenting challenges for data center operators and web services companies.
To meet present and future demand, service providers, data center operators and web services companies are rapidly transitioning to a software-defined networking (SDN) model that abstracts software and services away from the underlying computing, switching and storage hardware. Service providers and data center operators are adopting new hardware technology that supports the industry transition to SDN while simultaneously increasing the speed and bandwidth between and within data centers. Servers, storage systems, spine/leaf switches, aggregation routers and optical transponders are all going through a seismic shift in technology. These systems are adopting 100G/200G/400G optical transmission, higher-speed PCIe Gen 4 and cache coherent interconnect for accelerators (CCIX) data buses, NVM Express (NVMe)-based solid-state storage, specialized processing technology optimized for machine learning and artificial intelligence, and new memory technologies to meet the ever-increasing demand for higher-bandwidth networks.
A common thread throughout this data center hardware bandwidth upgrade is that reference clock timing requirements are growing more stringent. Now more than ever, system architects must pay much closer attention to timing and clock tree design during hardware design.
Data centers are connected to each other and the underlying core and aggregation telecom network through high-speed optical fiber connections. Coherent optics is the latest technology of choice being implemented in data center aggregation switches and optical transponders, providing the ability to transfer an increased amount of information across a fiber optic cable at speeds of 100G today and up to 600G in the near future. At a high level, coherent optics technology combines advanced high-speed digital signal processing and high-speed data converters to modulate both the amplitude and phase of the light being transmitted between each transmitter and receiver, enabling more data to be sent over existing fiber networks.
The data converters in both the transmitter and receiver require very low-jitter, high-frequency reference clocks, often in excess of 1.7 GHz. In addition, reference timing is needed to support digital signal processing. Initial 100G coherent optical line card and module designs have used multiple timing ICs and oscillators to satisfy these timing requirements, requiring a significant amount of board space and cost. Silicon Labs' Si5342H and Si5344H coherent optical clocks are optimized single-chip timing solutions for coherent optics, consolidating all reference clocks in a solution that achieves ultra-low jitter performance of less than 100 fs.
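The idea of modulating both amplitude and phase can be made concrete with a small sketch. The snippet below shows a generic, textbook 16-QAM symbol mapping, in which each 4-bit group selects one of 16 complex symbols; it is purely illustrative and is not any vendor's implementation. Coherent optical systems use far more sophisticated modulation and DSP, but the principle of packing multiple bits into each symbol via amplitude and phase is the same.

```python
# Minimal illustration of amplitude-plus-phase modulation (16-QAM):
# each 4-bit group selects one of 16 complex symbols, so every symbol
# carries 4 bits instead of the 1 bit/symbol of simple on-off keying.
# Generic textbook mapping for illustration only.

def gray_to_level(bits):
    # Map a 2-bit Gray-coded pair to one of four amplitude levels.
    return {(0, 0): -3, (0, 1): -1, (1, 1): 1, (1, 0): 3}[bits]

def qam16_modulate(bitstream):
    """Map a bit sequence (length divisible by 4) to 16-QAM symbols."""
    symbols = []
    for i in range(0, len(bitstream), 4):
        i_bits = tuple(bitstream[i:i + 2])      # in-phase component
        q_bits = tuple(bitstream[i + 2:i + 4])  # quadrature component
        symbols.append(complex(gray_to_level(i_bits), gray_to_level(q_bits)))
    return symbols

# 8 bits -> 2 symbols; amplitude = abs(s), phase = angle of s.
print(qam16_modulate([0, 0, 0, 0, 1, 0, 1, 0]))  # [(-3-3j), (3+3j)]
```

The receiver's task is the inverse: recover amplitude and phase from the incoming light, which is why both ends need the low-jitter data-converter clocks described above.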
Spine and leaf switches create a network of connections between racks of servers and storage equipment, evenly distributing traffic throughout the data center. Leaf switches sit atop each rack, providing downstream connections to the servers and upstream connections to each of the spine switches in the network.
Next-generation spine and leaf switch designs are adopting switch SoCs that include both 28G and 56G serializers/deserializers (SerDes) to support downstream port bandwidth migration from 10GbE to 25/40GbE and upstream port migration to 100GbE. These increased speeds require significant advances in reference clock jitter performance, with maximum specifications as low as 150 fs rms across a 12 kHz to 20 MHz mask for the 56G SerDes. Additional system clocks are also required for FPGAs, CPUs, memory, CPLDs and board management controllers (BMC) used in these designs. Silicon Labs' Si5341 any-frequency clock generator and Si5345 any-frequency jitter attenuating clock meet the ultra-low jitter performance requirements of these applications while providing up to 10 unique frequency outputs in a single-chip timing solution, making them an ideal choice for synchronous or asynchronous leaf and spine switch designs incorporating 28G and 56G SerDes in 100GbE designs.
The majority of server and storage processors in today's data centers are based on the Intel x86 architecture. Increasingly, new products are being introduced based on IBM Power and ARM architectures. The Power- and ARM-based platforms generally require additional clocks for the processors and other I/O functions compared to x86 platforms. Regardless of CPU preference, each architecture and platform uses high-speed data buses to transfer data between the CPUs, memory, storage devices and add-in cards.
PCI-Express (PCIe) is the dominant data bus used in servers because of its low cost of implementation, high bandwidth, and availability in most CPUs, FPGAs, SoCs and ASICs. The PCI Special Interest Group (PCI-SIG) recently introduced its fourth-generation PCIe specification, which increases the data rate from 8 gigabits per second (Gb/s) to 16 Gb/s.
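The generation-over-generation doubling translates directly into usable link throughput. The arithmetic below is simple unit bookkeeping: both Gen 3 and Gen 4 use 128b/130b line coding, so usable bandwidth is the raw transfer rate scaled by 128/130, per lane, times the lane count.

```python
# Usable PCIe link throughput after 128b/130b encoding overhead.
# Transfer rates (GT/s) come from the PCIe specs; the rest is arithmetic.

def pcie_throughput_gbytes(gt_per_s, lanes):
    """Usable throughput in GB/s for a link of the given width."""
    payload_bits = gt_per_s * 1e9 * 128 / 130 * lanes
    return payload_bits / 8 / 1e9  # bits -> bytes -> GB

print(round(pcie_throughput_gbytes(8.0, 16), 2))   # Gen 3 x16 -> 15.75 GB/s
print(round(pcie_throughput_gbytes(16.0, 16), 2))  # Gen 4 x16 -> 31.51 GB/s
```

Doubling the per-lane rate rather than the lane count keeps connector and board costs down, but it is exactly this faster signaling that tightens the reference clock jitter budget discussed throughout this article.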
In addition to being used on server motherboards, PCIe is becoming widely adopted in data center storage applications as solid state drives (SSDs) become favored over hard disk media. The expanded use of the PCIe data bus in server and storage applications is driving the need for more, higher-precision PCIe reference clocks throughout the entire rack, from the CPU on the server motherboard all the way down to each SSD. Solid state storage uses the NVMe protocol as opposed to the SAS or SATA serial protocols used in legacy hard disk storage designs. An NVMe-based SSD connects to a storage system over a standard PCIe connector, which means PCIe reference clocks are required for all NVMe-based SSDs. It is also common for flash array storage systems to use FPGAs and customized controller ASICs to manage the traffic between the servers and SSDs, each of which needs its own high-performance reference clocks.
While hard disk storage is expected to be the dominant data center storage media for the next several years, flash array storage system deployment is growing rapidly. Industry analysts are anticipating a steep ramp in flash array storage adoption in the 2018-2020 timeframe, primarily driven by web service data centers.
Silicon Labs recently introduced two new families of low-jitter clock generators specifically addressing the clock tree requirements of x86, Power and ARM servers and flash array storage systems. The Si5332 any-frequency clock generator family is capable of providing up to 12 clock outputs for PCIe endpoints, FPGAs, processors, and other SoC/ASIC devices in flash array storage systems with jitter performance of less than 230 fs rms. With the capability of generating fractional and integer-related clock frequencies, spread spectrum-modulated clocks, low power consumption and frequency control, the Si5332 device integrates all clocks needed in a storage system design into a single IC, saving printed circuit board (PCB) area and system cost. For system designs and add-in cards that only require a PCIe clock source, the Si522xx family of PCIe Gen 4 clock generators provides between 2 and 12 clock outputs of 100 MHz with 241 fs rms typical phase jitter performance, spread spectrum modulation for EMI reduction and hardware output enable pins.
The design cycle for new data center equipment is typically two years. To accommodate new software and web services product launches on a faster schedule, data center architects have started developing specialized processor add-in cards that can provide additional CPU processing power, or alternative types of processing power that are optimized for certain applications such as web search, artificial intelligence or machine learning. Add-in cards are designed to plug into a standard server motherboard over a PCIe connector, immediately providing expanded capabilities to an existing server. The design cycles for add-in cards can be as short as six months, giving operators and web services companies added capabilities without re-architecting or re-outfitting an entire data center with new servers.
Many types of add-in cards have been deployed in data center servers over the past few years using FPGAs, graphics processing units (GPUs), and customized ASICs. This trend is expected to accelerate as new GPU, FPGA and SoC products come to market that are optimized for specific applications. The primary interconnect between the server motherboard and the processing device on the add-in card is PCIe, although new alternative protocols are starting to be adopted. PCIe, CCIX, NVLink, OpenCAPI and Gen-Z are enabling faster data transfer between CPUs, memory and accelerator cards, achieving data rates of 16-32 Gbps. Given these data rates, the reference clocks must be incredibly precise to ensure robust signal integrity and minimize bit error loss. Silicon Labs' Si5332 any-frequency clock generators provide ideal single-chip timing solutions for accelerator card applications, offering the high-performance clock frequencies needed for FPGAs, GPUs and/or customized ASICs as well as reference clocks for the data bus used, all with excellent jitter performance.
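A quick way to see why precision matters at 16-32 Gbps is to compare clock jitter against the unit interval, the time allotted to each bit. As the bit rate rises, a fixed amount of jitter consumes a growing fraction of the timing budget. The figures below are illustrative arithmetic, not a formal jitter budget from any specification.

```python
# Illustrative only: how much of one bit period (unit interval, UI) a
# fixed amount of reference clock jitter consumes at different data rates.

def jitter_fraction_of_ui(rate_gbps, jitter_fs):
    """Return jitter as a fraction of one unit interval."""
    ui_ps = 1e3 / rate_gbps              # unit interval in picoseconds
    return jitter_fs * 1e-3 / ui_ps      # fs -> ps, then divide by UI

for rate in (16, 32):
    frac = jitter_fraction_of_ui(rate, 150)  # a 150 fs rms clock
    print(f"{rate} Gb/s: UI = {1e3 / rate:.2f} ps, 150 fs = {frac:.2%} of UI")
```

At 32 Gb/s the unit interval is only 31.25 ps, so the same clock eats twice the share of the budget it did at 16 Gb/s, and every other impairment (channel loss, crosstalk, transmitter jitter) must fit in what remains.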
Data centers are increasingly important in many aspects of our lives, enabling information storage on a vast scale as well as cloud services and emerging artificial intelligence systems. To continue supporting the rapid pace of new innovations and applications being run in the cloud, architects and hardware designers must continue to expand bandwidth within the servers, storage equipment and switching networks within data centers. The migration to 100GbE in data center interconnect and leaf/spine switches, PCIe Gen 4 in servers and add-in cards, and NVMe in solid state storage all exemplify new technologies being adopted to address the need for higher bandwidth. To ensure the maximum potential of these technologies is reached, system designers must put greater importance on clock tree design and use ultra-low jitter reference clocks throughout the data center.
Kyle Beckmeyer serves as product marketing manager for Silicon Labs' timing products, responsible for managing product strategy, new product definition, and business development in the data center, communications, and industrial markets. Mr. Beckmeyer joined Silicon Labs in 2013, bringing 8 years of timing experience and market knowledge. Previously, he worked in the timing divisions at Integrated Device Technology (IDT) as well as Integrated Circuit Systems (ICS). Mr. Beckmeyer holds a Bachelor of Science degree in electrical engineering from the University of California, Davis and a master's degree in business administration from Santa Clara University.