
Tim's Time & Sync Blog

Testing synchronization in fronthaul

The high accuracy required for fronthaul networks poses challenges for testing. For example, the maximum time error produced by a Class D T-BC must be less than 10ns, which means the test equipment must be significantly more accurate still.

A second issue is that accuracy requirements on each device are approaching the limits of the measurement interfaces themselves. For Class A and B devices, the standard time measurement interface is the 50Ω 1pps interface. This is specified in G.703 as having a rise time of less than 5ns. By the time the pulse has reached the end of the cable, this can have increased significantly to 10ns or more. This skew degrades the accuracy of the interface such that it is not possible to determine the original position of the signal with any great precision.
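As a very rough illustration of that effect (the numbers below are my own back-of-the-envelope assumptions, not values from G.703), the trigger-time uncertainty on a roughly linear edge scales with the rise time and with how precisely the trigger threshold is known:

```python
# Back-of-the-envelope sketch of how a slow 1pps edge becomes timing
# uncertainty. The numbers below are illustrative assumptions, not G.703 values.
def edge_uncertainty(rise_time_s, threshold_uncertainty_fraction):
    """Approximate trigger-time uncertainty for a roughly linear edge,
    given the fraction of the pulse amplitude over which the trigger
    threshold (plus noise) can vary."""
    return rise_time_s * threshold_uncertainty_fraction

# A 5ns edge at the source versus a 10ns edge after a long cable,
# assuming the trigger point is only known to within 10% of the amplitude.
print(edge_uncertainty(5e-9, 0.10))   # ~0.5ns at the source
print(edge_uncertainty(10e-9, 0.10))  # ~1.0ns at the far end of the cable
```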

Using PTP over optical Ethernet is more accurate, in part because it is an optical signal so not subject to the same electrical skews, but also because it is bidirectional, allowing the fibre delay to be automatically calculated. Therefore PTP over optical Ethernet is a more accurate measurement interface than the 50Ω 1pps, provided any asymmetry in the fibre delays is controlled and compensated if necessary.
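As a reminder of why the two-way exchange helps, here is a minimal sketch of the standard PTP offset and delay calculation; the timestamp values are invented for illustration, and the last lines show how any uncompensated fibre asymmetry turns directly into time error.

```python
# Minimal sketch of the standard PTP two-way time transfer calculation.
# t1: master sends Sync; t2: slave receives it;
# t3: slave sends Delay_Req; t4: master receives it.
# The timestamp values (in seconds) are invented for illustration.
t1, t2, t3, t4 = 100.000000000, 100.000002600, 100.000010000, 100.000012400

t_ms = t2 - t1   # master-to-slave: offset plus forward fibre delay
t_sm = t4 - t3   # slave-to-master: minus offset plus reverse fibre delay

mean_path_delay = (t_ms + t_sm) / 2       # the offset cancels out
offset_from_master = (t_ms - t_sm) / 2    # the (symmetric) delay cancels out

# Any uncompensated asymmetry between the two fibre directions shows up
# directly as a time error of half the asymmetry.
asymmetry_s = 10e-9                       # assumed 10ns of extra forward delay
asymmetry_error_s = asymmetry_s / 2       # 5ns of time error

print(mean_path_delay, offset_from_master, asymmetry_error_s)
```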

There are two parts to testing synchronization in fronthaul. The first is to test each device individually, to ensure it meets the required specification. This requires a very high-accuracy tester to verify the performance of Class C and Class D T-BCs, as shown below.



However, in the case of fronthaul devices, that may not be enough. Some operators are proposing to use FlexE for the node-to-node connections, with standard Ethernet just for the connections to the radio elements (the “client drops”). FlexE is a variant of Ethernet optimised to support the network slicing that is required in 5G networks. The support for SyncE and PTP over FlexE is still an ongoing discussion, so the best way to test this at present is to test two devices back-to-back:





The second part is to test the network itself. In a laboratory environment, this can be done in a similar way to testing FlexE T-BCs above. The tester can be used as the PTP Grandmaster, or alternatively (as shown below), it can be synchronized to the PRTC/T-GM device. The measurement points are then either on the last T-BC (using a PTP/SyncE connection), or if available, a 1pps output from the T-TSC. This test point is almost certainly embedded in the basestation or radio element itself, so may not be available in all equipment. Using a 1pps signal for the measurement here is acceptable because the network limit (around ±130ns) is much larger than the limit on an individual T-BC.
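To see why that is, compare an assumed measurement uncertainty of a couple of nanoseconds against the two limits quoted in this post (the 2ns figure is purely my own illustrative assumption):

```python
# Illustrative comparison of an assumed 1pps measurement uncertainty against
# the two limits quoted in this post. The 2ns uncertainty is an assumption.
measurement_uncertainty_ns = 2.0

network_limit_ns = 130.0      # approximate end-to-end network limit quoted above
class_d_tbc_limit_ns = 10.0   # Class D T-BC maximum time error quoted above

print(measurement_uncertainty_ns / network_limit_ns)     # ~1.5% of the budget
print(measurement_uncertainty_ns / class_d_tbc_limit_ns) # 20% of the budget
```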





It’s clear that the synchronization requirements of 5G’s New Radio are significantly harder to meet than previous mobile radio generations. This will require a new generation of equipment to meet the requirements, and a new generation of test equipment to verify that the requirements have been met.

Tim Frost, Strategic Technology Manager, Calnex Solutions.

Mobile Backhaul Over Cable

Mobile operators are moving to a small cell model, especially in dense urban areas where the large-scale macrocells can’t cope with the volume of traffic and the sheer number of connections. The problem is that all this traffic needs to be backhauled to the mobile core.

Since most small cells are in-building, the mobile operators rely on the building's own wired connectivity to provide the backhaul capacity. One major example of building connectivity is the hybrid fibre/coaxial (HFC) networks provided by the cable operators. These provide the bandwidth and cost efficiency required by the small cell operators but, up until now, they couldn't meet the synchronization or latency requirements.

Over the last year, a team at CableLabs, the cable industry’s technical standards body, has been working on methods to provide accurate synchronization and low latency connections over HFC networks. The first version of the synchronization specification has just been released, and is described in a blog post from the CableLabs website here.

Tim Frost, Strategic Technology Manager, Calnex Solutions.

Synchronization methods in fronthaul

In order to meet the extremely tight synchronisation requirements of the 5G fronthaul network, the ITU-T is defining a new set of “enhanced clocks”. These clocks have around an order of magnitude better performance than the clocks defined for the 3G and 4G networks.

The new ITU-T clocks under development include the Class B PRTC (G.8272), ePRTC (G.8272.1), ePRC (G.811.1), eEEC (G.8262.1) and the Class C and Class D T-BC (G.8273.2). New network limits are being defined for these clock types, for example, the enhanced frequency network limit in G.8261, and the enhanced time and phase network limit in G.8271.1.

A schematic representation of the ITU-T clocks is shown in the following figure. The concept is to separate the time and frequency distribution into two independent planes, using enhanced SyncE to distribute the frequency, and PTP to distribute the time and phase. The planes can be independently managed and routed, allowing more freedom to the operator to handle failure conditions and re-routing.



Some other organisations are also looking at fronthaul synchronization. IEEE802.1CM is a project looking at the use of time-sensitive networking techniques in Ethernet. The idea is to re-use the ITU-T clock types, but add extra capabilities such as time-aware shaping (defined in IEEE802.1Qbv) and pre-emption (IEEE802.1Qbu). IEEE1914.1 is also looking at defining the performance of the transport network in order to meet the fronthaul requirements.

Tim Frost, Strategic Technology Manager, Calnex Solutions.

Synchronization Requirements for Fronthaul

For 5G networks, the basic synchronization requirements haven’t changed. For FDD networks (Frequency Division Duplexing), the carrier frequency must be within 50 ppb (parts per billion) of the allocated frequency, while for TDD (Time Division Duplexing), basestations must be synchronized to within 3 microseconds of each other, or to within 1.5 microseconds of a central time reference.
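Two quick sanity checks on those numbers (the 3.5GHz carrier is just an assumed example, not a requirement):

```python
# Sanity checks on the basic 5G synchronization numbers quoted above.

# 1) Frequency: 50 ppb at an assumed 3.5GHz carrier (example value only).
carrier_hz = 3.5e9
freq_tolerance_hz = 50e-9 * carrier_hz
print(freq_tolerance_hz)            # 175.0 Hz

# 2) Time: if each basestation is within +/-1.5 microseconds of a central
#    reference, the worst case between any two basestations is 3 microseconds.
per_bs_limit_us = 1.5
worst_case_between_bs_us = 2 * per_bs_limit_us
print(worst_case_between_bs_us)     # 3.0
```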

However, that’s not the whole story. There are several radio techniques being proposed for 5G that require much tighter synchronization between local RRUs. One example is inter-site carrier aggregation. This is the ability to bond two separate carrier frequencies together to form a large, high-bandwidth pipe. Carrier aggregation was used extensively by 4G operators to increase data rates to consumers, but the two carrier frequencies were transmitted from the same antenna. For 5G, that might not always be the case – the carriers may be transmitted from separate RRUs. This means the RRUs must be synchronized to within 260ns of each other.

Another example being discussed is MIMO. This uses multiple antennas, normally connected to the same RRU. However, there are proposals in 5G to allow MIMO to be used across different RRUs. This requires even tighter synchronization of less than 65ns difference between RRUs.

Clearly, these techniques require extremely high accuracy from the synchronization network. However, the distinguishing feature is that these are DU functions, therefore the tight synchronization is only required in a cluster consisting of the DU with its connected RRUs. This is shown in the diagram below:



So does that mean operators can simply provide tight synchronization within the small cluster region? Not so, according to some operators. They also have to take into account failure conditions and network changes. Networks don't stay the same for long: they change topology over time, whether through equipment failures or planned additions and re-configurations. This means it can be hard to keep track of the synchronization relationships between elements.

For example, DUs can be multi-homed, meaning they can connect to more than one CU for protection purposes. Similarly, it may not always be clear which DU an RRU is connected to. Therefore some operators have stated that they want the cluster limit to apply to the entire network. That way they are certain that whatever network changes or re-configurations occur, the synchronization requirements are always met.

Tim Frost, Strategic Technology Manager, Calnex Solutions.

Networking the Fronthaul

The dark fibre used by CPRI connections is expensive to install, and it is not fully utilised by a single CPRI connection. Some proposals were made to carry CPRI over WDM, enabling sharing of fibres between closely sited RRUs. However, the next major step in the evolution of fronthaul was to consider whether the CPRI protocol could be transported across a shared network such as Ethernet.

The CPRI consortium created a new specification called eCPRI. They don't actually specify what the “e” stands for: it could be Ethernet (but eCPRI also works over IP), Evolved (but that was the term used to distinguish 4G from 3G) or Enhanced (which is my personal favourite). The eCPRI specification defines how to carry radio signals over a packet network. The underlying networks are not defined, but can include IP/UDP and/or Ethernet.

The IEEE are also in the process of creating an open standard for the same thing, designated IEEE1914. This is split into two parts, the radio-over-packet protocol (IEEE1914.3) and the requirements on the underlying transport network (IEEE1914.1). The main feature of both of these is that the functions of the basestation are split into three parts – the CU (Centralized Unit), the DU (Distributed Unit) and the RRU. One reason for this is that carrying a radio signal across a packet network is very inefficient, especially for 5G where the data rates are very high. The RRU connection may require a data rate of 25Gb/s or higher. The DUs can be located out in the network, close to the RRUs. The connection from the CU to the DU can be lower bandwidth, because carrying the raw data is more efficient than carrying the encoded radio signal. This connection is sometimes referred to as the “middlehaul”.
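To get a feel for why carrying the raw radio signal is so inefficient, here is a rough CPRI-style bandwidth estimate; every parameter below is an assumed, illustrative value rather than a figure from the eCPRI or IEEE1914 specifications.

```python
# Rough illustration of why a CPRI-style stream needs so much bandwidth.
# Every value below is an assumption chosen for illustration only.
sample_rate_sps = 122.88e6     # I/Q sample rate for roughly 100MHz of radio bandwidth
bits_per_sample = 2 * 15       # 15-bit I plus 15-bit Q
antenna_ports = 8              # number of antenna streams carried
overhead_factor = 1.25         # control words and line coding overhead

raw_bps = sample_rate_sps * bits_per_sample * antenna_ports * overhead_factor
print(raw_bps / 1e9)           # roughly 37 Gb/s for this example configuration
```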



A second reason is that some functions are very latency sensitive, limiting the length of the connection to a few kms, as with the original CPRI interface. If those functions are located in the DU, the less latency sensitive portions of the basestation function can be located further back in the network. This last piece enables the transition from Centralized RAN to Cloud RAN – the CUs can be located anywhere in the network, not just in a localized “baseband hotel”.

Tim Frost, Strategic Technology Manager, Calnex Solutions.

Getting Into The Hotel Business

The protocol for the fronthaul interface was specified in the Common Public Radio Interface (CPRI), produced by an industry consortium consisting of Alcatel-Lucent, Ericsson, Huawei, NEC and Nokia. Since CPRI is carried over a fibre connection, it was realised that this fibre might be considerably longer than the height of a tower, freeing the mobile operator from having to place the baseband unit (BBU) at the antenna location. This is particularly attractive for dense urban locations where there might be several antennas within a small area.

Deployments were created consisting of several BBUs (Baseband Units) co-located in a central office known as a “baseband hotel”, and connected to RRUs (Remote Radio Units) using CPRI over dark fibre. This deployment style became known as C-RAN (Centralized Radio Access Network) – not to be confused with the Cloud RAN concept, which I will cover in the next post. It simplified the backhaul networks, because several BBUs could be co-located together and served by a common, high-bandwidth connection. It also simplified synchronization, because all these BBUs could be served by the same time and frequency reference, guaranteeing accurate synchronization.


Provided the latency of the fibre connection to the RRUs was known accurately, the baseband units could schedule transmission of the radio frames such that at each antenna (the timing reference point), the radio frames would align with those from other antennas. Synchronization then becomes more of a latency management issue rather than a distributed network synchronization problem.
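As a simple illustration of that latency-management view (the 20km length and the fibre index are assumed example values), the BBU just advances its transmission by the known one-way fibre delay:

```python
# Illustration of scheduling radio frames against a known fibre delay.
# The fibre length and refractive index are assumed example values.
SPEED_OF_LIGHT_M_S = 299_792_458.0
FIBRE_GROUP_INDEX = 1.468          # typical for single-mode fibre

def one_way_fibre_delay_s(length_km):
    return (length_km * 1000.0) * FIBRE_GROUP_INDEX / SPEED_OF_LIGHT_M_S

delay_s = one_way_fibre_delay_s(20.0)   # ~98 microseconds for a 20km link
print(delay_s)

# To align the radio frame at the antenna (the timing reference point),
# the BBU transmits that much earlier than the over-the-air frame time.
frame_time_at_antenna_s = 1.000000000   # arbitrary example timestamp
bbu_transmit_time_s = frame_time_at_antenna_s - delay_s
print(bbu_transmit_time_s)
```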

The downside of the C-RAN architecture was that the fronthaul connections themselves required dark fibre. This is costly to install, and prevents sharing of fibres. Secondly, the CPRI protocols limit the maximum distance between the BBU and RRU to a few kilometres, which reduces the economies of scale provided by the baseband hotel concept. Therefore the original C-RAN concept didn’t see much take-up for LTE deployments.

Keep a lookout for my follow-up blogs, appearing weekly.

  • Networking the fronthaul.
  • Synchronization requirements for fronthaul.
  • Synchronization methods in fronthaul.
  • Testing synchronization in fronthaul.

Tim Frost, Strategic Technology Manager, Calnex Solutions.

So what’s all this fronthaul stuff, anyway?

“Fronthaul” is one of the “in” topics at the moment in mobile networks, so I thought it would be good to explore what it all means, and especially how it affects synchronization, because we all like to keep in sync. This is the first in a series of blog posts on the topic.

So what is fronthaul, anyway? Well, to understand that, we have to understand what “backhaul” is. “Backhaul” was a term originally coined by the trucking industry, and referred to a truck carrying a load from a remote location back to a central distribution centre. The term then got applied in all sorts of contexts to refer to links connecting a remote site to a central site.

In mobile telecoms, it was applied to the link from the radio basestation back into the core network, “hauling” the data back from the basestation to the core. Of course, these links are bi-directional, so they also carry data from the core out to the basestation.

Where does fronthaul fit into this? Typically, the basestation sat in a cabinet, connected by a co-ax running up the tower to the antenna. Someone then had the bright idea that since the co-ax had issues with power loss, why not site the actual RF transceiver at the top of the tower by the antenna, and connect the transceiver via optical fibre to the basestation below.

[Figure: the fronthaul concept]

This fibre connection between the basestation and the RF transceiver became known as “fronthaul”.

Keep a lookout for my follow-up blogs, appearing weekly.

  • Getting into the hotel business.
  • Networking the fronthaul.
  • Synchronization requirements for fronthaul.
  • Synchronization methods in fronthaul.
  • Testing synchronization in fronthaul.

Tim Frost, Strategic Technology Manager, Calnex Solutions.

PTP Time Error for T-BCs

The accuracy of Telecom Boundary Clocks (T-BCs) is essential to the successful roll-out of LTE-A and TDD-LTE. To meet the new G.8273.2 compliance limits

Read full post

2020 vision

The ITU’s Study Group 15 met in Geneva again this month. Study Group 15 is the group responsible for the optical transport and access networks, including synchronisation.

Read full post

Measuring Time Error Transfer of G.8273.2 T-BCs

The companion application note, "Testing a T-BC to ITU-T G.8273.2" (previous blog) describes the methods of testing a telecom boundary clock (T-BC) to meet G.8273.2.

Read full post

Testing a T-BC to ITU-T G.8273.2

The accuracy of Telecom Boundary Clocks (T-BCs) is still essential to the successful roll-out of LTE-A and TDD-LTE. With that in mind, we recently revised our application notes on Boundary Clocks.

Read full post

TSN/A 2017 show report

Time Sensitive Networking (TSN) - and its evolution - is clearly of great interest across a number of industry sectors, as demonstrated both by the papers presented at this year’s TSN/A conference in Stuttgart and by the diversity in background of the delegates.

Read full post

ISPCS 2017 show report

Calnex was at the 2017 ISPCS Plugfest & Symposium this year. The event started with a Plugfest where multiple vendors plugged and tested their Precision Time Protocol (PTP, or IEEE1588) equipment. This is a valuable open session where vendors and researchers from different communities can perform real-world connections and tests, which contribute to their development and research processes.

Read full post

Partial Support for Partial Timing Support

For a long time now, operators have been asking about how to use PTP to transfer time across their existing networks. Vendors say it’s possible, but the standards are not there. At least, until now. The ITU-T have just taken a big step forward with the agreement of G.8271.2 at their June meeting. What this standard does is define the requirements on the network for PTP to be able to work accurately over existing networks.

Read full post

Fronthaul - what's going on?

Following on from the ITU meeting I recently attended, a bit more detail on the Fronthaul topic. In 4G, the “fronthaul” concept was born, separating the baseband unit from the radio unit and connecting them using dedicated fibre. A protocol, CPRI (Common Public Radio Interface), was invented to carry what was basically the radio signal over the fibre. The radio unit simply had to modulate that signal onto the carrier, so it could be very simple and cheap.

Read full post

STAC 2017

STAC (Securities Technology Analysis Center) arrived in London after stops in Chicago and New York, and as always with these events there was a local focus. The impending MiFID 2 directive, in particular its trade timestamping requirements, inevitably played a prominent part in proceedings. In fact, timing, sync and latency papers dominated the first half of the event.

Read full post

Which BMCA algorithm is in use?

The Best Master Clock Algorithm (BMCA) is run by all PTP ports in a system, and is a distributed algorithm. Initially all ports send out Announce messages advertising their capabilities, and from that each port and each clock determines which source to synchronise to. For a master-capable port, that may include itself, in which case it becomes a grandmaster.
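For a feel of how that decision is made, here is a much-simplified sketch of the IEEE1588 dataset comparison; the real algorithm also uses steps-removed and port identities to break ties and avoid loops, which this sketch ignores.

```python
# Much-simplified sketch of the IEEE1588 dataset comparison at the heart of
# the BMCA. The real algorithm also compares steps-removed and port
# identities to break ties and avoid loops; this only ranks Announce fields.
from dataclasses import dataclass

@dataclass
class AnnounceData:
    priority1: int
    clock_class: int
    clock_accuracy: int             # smaller enumerated value = better
    offset_scaled_log_variance: int
    priority2: int
    clock_identity: bytes           # final tie-breaker

    def rank(self):
        # Lower tuples compare as "better", matching the order in which the
        # fields of received Announce messages are evaluated.
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.offset_scaled_log_variance, self.priority2,
                self.clock_identity)

def best_master(candidates):
    return min(candidates, key=AnnounceData.rank)

# Example: a GNSS-locked grandmaster (clockClass 6) beats a free-running one.
gm_a = AnnounceData(128, 6, 0x21, 0x4E5D, 128, b"\x00\x01\x02\x03\x04\x05\x06\x07")
gm_b = AnnounceData(128, 248, 0xFE, 0xFFFF, 128, b"\x00\x01\x02\x03\x04\x05\x06\x08")
print(best_master([gm_a, gm_b]) is gm_a)    # True
```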

Read full post

Stop The Clocks!

Stop the clocks! WSTS is now over for another year, so here are a few highlights from this year’s show.

One of the keynote speeches came from Han Li of China Mobile. Han is responsible for planning China Mobile’s network strategy for 5G, including how to handle synchronisation.

Read full post

Geek Glastonbury or B2B Burning Man?

What value could an SME get from attending a suited & booted funfair with almost 100,000 others?

Mobile World Congress, a forum for just about anything with a connection to the mobile network, has seen a surge of interest, relevance and participation in recent years.

Read full post

RMS or Peak-to-Peak Jitter?

Excessive jitter impacts the ability of clock recovery circuits to recover the clock properly, which can lead to mistiming inside transmission equipment when data is regenerated. When timing errors become large, bit errors are introduced, leading to excessive packet loss. Jitter is generally expressed in terms of Unit Intervals, where a Unit Interval (UI) equals one bit time of a digital NRZ binary signal.
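As a quick worked example (the 10Gb/s line rate and the Gaussian crest factor are assumptions for illustration, not values from any jitter specification):

```python
# Quick worked example relating UI, absolute time, and RMS versus
# peak-to-peak jitter. The line rate and crest factor are assumed values.
line_rate_bps = 10e9
ui_seconds = 1.0 / line_rate_bps          # 1 UI = one bit period = 100ps at 10Gb/s

jitter_pp_ui = 0.3                        # example peak-to-peak jitter in UI
jitter_pp_s = jitter_pp_ui * ui_seconds   # 30ps

# For purely random (Gaussian) jitter, peak-to-peak is only meaningful for a
# target bit error ratio; ~14.07 sigma is the usual factor for a BER of 1e-12.
crest_factor = 14.07
jitter_rms_s = jitter_pp_s / crest_factor
print(ui_seconds, jitter_pp_s, jitter_rms_s)
```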

Read full post

Events in the synchronization world from 2016

For the first blog of the new year, I’m going to look back at some of the significant events in the synchronization world from 2016. The year got off to an interesting start with a reminder that we can’t always trust GPS. Back in January, the GPS system started putting out the wrong time. This wasn’t caused by jamming or spoofing, but was a configuration error caused by trying to de-commission an old satellite.

Read full post

First 5G sync recommendation

The ITU approved two new recommendations on synchronisation this weekend. G.8272.1 is the first “enhanced” clock specification aimed at meeting the requirements for 5G mobile infrastructure. The document specifies the enhanced PRTC (Primary Reference Time Clock), basically a very high accuracy GNSS timing receiver, capable of delivering time to within 30ns of UTC.

Read full post

Timing not Telecoms

I just got back from Prague after the ITSF 2016 conference. If you weren’t there, you missed another great event in a beautiful city. You can always book for next year, when the conference will be in Warsaw, Poland, or if you can’t wait that long, there is always the WSTS conference in San Jose, California in the first week of April.

Read full post

Sync University

I recently had a question on “Ask Tim” about a great sync knowledge source - a website called 'Sync University'. The questioner found it extremely useful with lots of great content but was now unable to find it anywhere.

Read full post

Quality or Traceability?

I recently had a question on “Ask Tim” about how to distinguish between a packet-based frequency signal, delivered using the PTP profile in G.8265.1, and a Synchronous Ethernet signal delivered using the Ethernet physical layer.

Read full post

1 million connections per square km with 5G

Virtual Birth? About 15 years ago, I remember reading a book on the future of telecoms. At the time, the 3G mobile system was just in development, and the ultimate 3G speed was projected to be 384 kbit/s (compared to the 56kbit/s I was getting to my house).

Read full post

What is PTP?

What is PTP? PTP stands for “Precision Time Protocol”, and is described in IEEE Standard 1588. It is a protocol for distributing time across a packet network. It works by sending a message from a master clock to a slave clock, telling the slave clock what time it is at the master.

Read full post

High Speed Jitter

Jitter has been around for as long as the telecommunications industry has been trying to shift bits and bytes from A to B. Jitter is not cool: it’s unloved, nobody wants it, and just like the bore at an office party it won’t go away. Now, just to be clear, we are talking about physical layer jitter here

Read full post

The emergence of 5G

I have just finished talking with the CEO of Calnex, Tommy Cook, who had just completed a series of customer visits in Japan and China. The chat amongst the operators about 5G being just over the horizon is starting. Tommy went on to say ...

Read full post

Timestamping requirements

Quote of the day: “Clock sync is a pain, princess. Anyone who tells you differently is selling you something.” Neil Horlock, at the MiFID II Workshop on 26th May 2016. This was a workshop on how to meet the timestamping requirements set out in MiFID II, the latest European legislation on financial markets.

Read full post

Time Sensitive Networking

There’s a buzz around the topic of Time Sensitive Networking at present. It is being linked with the “Industrial Internet of Things” (IIoT), although it is not exclusively about industrial networks. The concept began with audio and video distribution as the “Audio/Video Bridging” group of IEEE, and is now being extended to cover industrial

Read full post

What is a Clock?

Following on from my post “What is Time?”, a clock is simply a device that counts regular events from a common starting point. That applies to all clocks and calendars, with the possible exception of a sundial! The regular events might be days, months and years, or they might be pendulum swings, quartz vibrations, or atomic transitions.

Read full post

Partial Progress?

ITU's Study Group 15, the body associated with transmission and networks, met in Geneva recently. One interesting statistic I heard during that meeting was that Question 13, the synchronization sub-group, receives as many contributions as some entire study groups. Synchronization is far and away the biggest “Question” within ITU.

Read full post

Confusion Rules!

The only thing that's standard is confusion! Why does ITU say this, and IEEE say that? What on earth is MEF doing? I see Small Cell Forum are in on the act? Why can’t 3GPP sort out what they want? I saw a cartoon recently that explained neatly why we have so many different standards for the same thing ...

Show cartoon...

What is Time Error?

Your watch says you have a minute to go. You walk into the meeting room all set, and a sea of angry faces look up at you, saying “Where have you been? You’re late!” What went wrong?

The answer is your watch – it was 5 minutes out. This is called time error. It is the difference between the time reported by a clock (or watch), and the reference clock (for example, the clock on the meeting room wall).
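Sticking with the watch example, here is a minimal sketch of the same idea, extended to the time error series that network clock metrics such as max|TE| are built on (the series values are invented):

```python
# Minimal sketch of time error: clock under test minus reference, re-using
# the five-minutes-slow watch from the example above.
watch_time_s = 9 * 3600 + 55 * 60      # the watch reads 09:55:00
wall_clock_s = 10 * 3600               # the meeting-room clock reads 10:00:00

time_error_s = watch_time_s - wall_clock_s
print(time_error_s)                     # -300: the watch is 5 minutes slow

# For network clocks the same subtraction is applied to a whole series of
# measurements, and metrics such as max|TE| are taken over that series.
te_series_ns = [12.0, -8.0, 15.0, -20.0, 5.0]   # invented example values (ns)
max_abs_te_ns = max(abs(te) for te in te_series_ns)
print(max_abs_te_ns)                    # 20.0
```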

Read full post

Prolific Profiles

IEEE1588 (2008) is a huge standard, 269 pages long. It defines the Precision Time Protocol (PTP), a protocol for distributing time over a packet network. Thing is, types of packet networks are ten a penny these days.

There are industrial networks, power networks, telecom networks, audio networks, video networks, in-car networks just to name a few. All have subtly different requirements and therefore IEEE1588 contains loads of different options and features that simply aren’t appropriate for every network.

Read full post

Basestations need Sync

One of the biggest drivers behind the renewed interest in time and synchronization is the mobile industry. The latest generation of mobile technology requires that the basestations are not only synchronized in frequency, but in time too. This is because many of the techniques used for increasing capacity in the mobile

Read full post

Unravelling Standards

Traditionally, timing and synchronization has been slow moving, pedestrian, and not very exciting. Not much to write about! However, that has changed in recent years. I attended my first ITU-T meeting on synchronization back in 2004. We had seven attendees and eleven contributions to consider. Now, we often get 35 – 40 attendees

Read full post

What is SyncE?

When the telephone networks started to go digital in the 1960s, the voice sampling frequency was carried in the physical layer of the multiplexed digital voice signal.

This frequency was transported across the network so that all voice switches could operate at the same frequency, as any mismatch would cause clicks and pops in the voice channel.

Read full post

Interpreting ITU

The ITU has produced over a dozen standards to do with time and synchronization over the last ten years. Why so many? Partly it’s a matter of evolution, and partly it’s a matter of purpose.

The first standard released back in 2006 was G.8261, and it covered “general aspects” of frequency distribution in packet networks. Time wasn’t a topic back then. G.8261 evolved into a set of standards for frequency distribution:

Read full post

Tick Talk

Since this blog is all about time, it might be useful to try and explain what time is. So this is me trying to do what countless philosophers, theologians and physicists have been attempting to do for centuries.

Read full post

LTE-A & VoLTE rollout

GSA confirms 393 LTE networks launched, year-end forecast raised, LTE-Advanced and VoLTE deployments booming. April 9, 2015: 393 operators have commercially launched LTE in 138 countries. This is according to data released today by GSA (Global Mobile Suppliers Association) in the latest update of the Evolution to LTE report.

Read full post

LTE picks up speed

Long-term evolution, better known as LTE, had a blockbuster year in 2014. More than 110 commercial LTE networks went live, and global LTE subscriptions topped 250 million in the first quarter of 2014.

LTE is now commercially available in more than 107 countries, and mobile network operators added 109 LTE networks between June 2013 and June 2014. As you can see below in Figure 1, LTE is everywhere.

Read full post
