A Better Way to Find and Fix Network Issues

When identifying the remedy can be as challenging as finding the cause of a network performance problem, testing before deployment is essential to avoid business disruption. A host of cutting-edge tools is now available to help IT teams monitor their networks and find the sources of any issues. But if the problem is the network itself, fixing it is a daunting task, given the changing conditions, flexible routes and competing applications inherent to modern complex networks. Potential fixes need to be carefully planned before being deployed on the live system, as the potential for disrupting production traffic and software performance, or even risking catastrophic failure, is high.

The only way to ensure predictable performance is to test before deployment by emulating the network. Network emulators are commonly used to verify the robustness of new network products or applications, but they can also let network teams emulate the real-world conditions under which existing applications perform, and integrate solutions to network issues through continuous development and testing. By allowing users to experiment in a safe, controllable and repeatable laboratory setting, network emulators mitigate the risks associated with network troubleshooting while ensuring business continuity, which is why they are fast becoming a must-have technology.

Yet not all emulators are made equal. The key to ensuring minimal disruption and optimal performance from the solutions generated is to make your emulations as close to the real thing as possible. Calnex SNE is a powerful network emulator perfectly suited to troubleshooting network issues.
For instance, while other network emulators offer a range of different impairments, the Calnex SNE provides a comprehensive set of 55+ fully modifiable impairments, including standard options like delay, jitter, packet loss and bandwidth restriction, but also fragmentation, video corruption, BER corruption, reordering and many more. In addition, it provides a selection of traffic generation and routing options that allow multiple users to reproduce the unfavourable conditions in their real-life networks, as well as a host of filters to identify packets for impairment or analysis. But, alongside providing best-in-class port density with 12 x 10GbE ports and up to 16 x 1GbE ports, the key difference is that Calnex's solution goes one step further than competitor offerings: it allows users to simulate an entire network in a box. The only SNE to offer any-port to any-port connections, Calnex's solution is unique in letting 4, 8 or 16 ports all communicate with each other without limitation. Furthermore, Virtual Router simulation lets users simulate multi-hop WAN networks, and supports DHCP, static IP addresses and OSPF to simulate complex ring/star network topologies. Combined, these features make the SNE flexible and capable, but most importantly they make it the only emulator to deliver fully realistic network simulation capable of providing reliable solutions to any network performance issue.

Anand Ram, VP Marketing
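To make the standard impairments above concrete, here is a minimal, illustrative model of how delay, jitter, loss and reordering interact in a packet stream. This is a toy sketch, not how the Calnex SNE is implemented; all names and parameter values are invented for illustration.

```python
import random

def impair(packets, delay_ms=50.0, jitter_ms=5.0, loss_pct=1.0, seed=42):
    """Apply a simple delay/jitter/loss model to a list of
    (send_time_ms, payload) tuples, returning (arrival_time_ms, payload).
    Illustrative only; a real emulator impairs live traffic in hardware."""
    rng = random.Random(seed)
    out = []
    for send_time, payload in packets:
        if rng.uniform(0, 100) < loss_pct:
            continue  # packet dropped by the loss impairment
        # fixed delay plus a uniform jitter term
        arrival = send_time + delay_ms + rng.uniform(-jitter_ms, jitter_ms)
        out.append((arrival, payload))
    # sorting by arrival time: reordering emerges naturally when the
    # jitter range exceeds the inter-packet gap
    return sorted(out)

packets = [(i * 1.0, f"pkt{i}") for i in range(100)]
received = impair(packets)
```

Even this crude model shows why applications must be tested against combined impairments: with 1 ms spacing and ±5 ms jitter, packets routinely arrive out of order.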

Large USA Bank Saves a Million

We all like to put money in our banks when we can, but banks like to save too! I heard of a case recently where a data lab within a large US bank received network modelling/testing requests from customers that needed fast responses. Typically, the lab created a network topology/simulation and then sent customer application traffic across it to assess performance. The bank used network emulation to exercise and stress its Cisco network infrastructure by "working around" the forwarding table, using impairments to look at the impact on network reconvergence performance. For the quickest and broadest network emulation and topology capabilities, the bank used the Calnex SNE test tool. The data lab benefited from reduced test setup time in creating the correct network topologies. Average test duration was 72 hours, and voice tests could run up to 5 weeks! Reduced test setup time and faster customer responses with more realistic testing were seen to save the bank at least $1M per year. Perhaps this bank will re-print its dollar bills with 'In Emulation We Trust'?

Crawford Colville, Marketing Communications

Low Risk and High Confidence

It was interesting to hear the steps a global SD-WAN vendor took to develop a verification strategy for its SD-WAN products, with network emulation at the core of the test strategy. First, traffic on 1G/10G links in the network topology is passed through a network emulator. By adding impairments on selected paths, the vendor verifies whether controller functionality is as designed and per the policy set. For example, degrade a path by adding delay and packet drop, and check whether the controller switches traffic over to a better-performing link. Create latency and loss scenarios, then test and determine the loss, latency and jitter limits at which VoIP and data traffic is degraded to the point of impacting users. Network emulation can also be used to create an accurate topology/simulation of the WAN network to help set appropriate SLAs, with application traffic then sent across it (using an L4-7 tester) to assess performance. Finally, impair or break links, and accurately measure failover/convergence time with a traffic generator. In summary, network emulation helps to effectively and completely test SD-WAN functionality and stability prior to deploying the service. The SD-WAN service can then be rolled out with low risk and high confidence. Click here for more information on Calnex Network Emulation products.

Anand Ram, VP Sales & Marketing
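The path-switching behaviour being verified above can be sketched as a simple policy function: pick the best link meeting SLA thresholds, and fail over when impairments push a link out of policy. The threshold values and field names here are invented, VoIP-style illustrations, not any vendor's actual policy.

```python
def pick_path(paths, max_delay_ms=150.0, max_loss_pct=1.0, max_jitter_ms=30.0):
    """Return the best path meeting the SLA thresholds, or the
    least-bad path if none qualifies. Illustrative sketch only."""
    def within_sla(p):
        return (p["delay_ms"] <= max_delay_ms and
                p["loss_pct"] <= max_loss_pct and
                p["jitter_ms"] <= max_jitter_ms)

    # prefer in-policy paths; fall back to all paths if SLA cannot be met
    candidates = [p for p in paths if within_sla(p)] or paths
    # rank by loss first, then delay
    return min(candidates, key=lambda p: (p["loss_pct"], p["delay_ms"]))

paths = [
    {"name": "mpls", "delay_ms": 40, "loss_pct": 0.1, "jitter_ms": 5},
    {"name": "inet", "delay_ms": 200, "loss_pct": 2.0, "jitter_ms": 40},
]
best = pick_path(paths)
```

An emulator-based test then amounts to impairing one path's metrics and asserting that the controller's choice changes accordingly.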

What’s All This Fronthaul Stuff?

“Fronthaul” is one of the “in” topics at the moment in mobile networks, so I thought it would be good to explore what it all means, and especially how it affects synchronization, because we all like to keep in sync.

So what is fronthaul, anyway? Well, to understand that, we have to understand what “backhaul” is. “Backhaul” was a term originally coined by the trucking industry, referring to a truck carrying a load from a remote location back to a central distribution centre. The term then got applied in all sorts of contexts to refer to links connecting a remote site to a central site. In mobile telecoms, it was applied to the link from the radio basestation back into the core network, “hauling” the data back from the basestation to the core. Of course, these links are bi-directional, so the link also carries data from the core out to the basestation.

Where does fronthaul fit into this? Typically, the basestation sat in a cabinet, connected by a co-ax cable running up the tower to the antenna. Someone then had the bright idea that, since the co-ax had issues with power loss, why not site the actual RF transceiver at the top of the tower by the antenna, and connect the transceiver via optical fibre to the basestation below? This fibre connection between the basestation and the RF transceiver became known as “fronthaul”.

For more details, check out my previous blogs covering: getting into the hotel business; networking the fronthaul; synchronization requirements for fronthaul; synchronization methods in fronthaul; and testing synchronization in fronthaul.

Tim Frost, Strategic Technology Manager, Calnex Solutions

Latency in a Data Center Move

Network-related delays can change dramatically when a data centre is moved. Geographical distance contributes significantly to the network latency between client and server: just moving the server physically further from the user can double the latency. And it won't just slow down operations; it will also reduce the throughput of applications, further impacting end users. To put this into context, the relationship between network delay and application response time is far from one-to-one. The following example provides a relatively simple overview of that relationship. For a local user with one millisecond of network delay between the client and the server, 150 transactions complete in 0.15 seconds. When just 50 milliseconds of network delay is introduced (representing a typical cross-country WAN connection), those transactions do not slow down by only 50 milliseconds (0.05 seconds); instead, the same 150 transactions take 7.5 seconds. This illustrates how small changes in network latency result in major problems with application performance. In a complex project such as a data centre move, the challenge of assessing the network impact on application performance becomes one of large scale, as hundreds of servers hosting thousands of applications need to be tested to understand the full impact of the move.
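The arithmetic behind the quoted figures is worth making explicit: if an application performs its transactions serially, each one waits for a full network round trip, so response time scales with the transaction count times the delay, not with the delay alone. A minimal worked version of the example above:

```python
def total_time_s(transactions, network_delay_ms):
    """Serialized request/response model: each transaction waits one
    network round trip. Matches the figures quoted in the post;
    real applications may pipeline requests and behave differently."""
    return transactions * network_delay_ms / 1000.0

local = total_time_s(150, 1)    # local user, 1 ms delay
wan = total_time_s(150, 50)     # cross-country WAN, 50 ms delay
```

Running this reproduces the post's numbers: 0.15 seconds locally versus 7.5 seconds over the WAN, a 50x slowdown from a 50x delay increase, because every one of the 150 round trips pays the extra latency.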

eCPRI and Networking the Fronthaul

The dark fibre used by CPRI connections is expensive to install, and it is not fully utilised by a CPRI connection. Some proposals were made to carry CPRI over WDM, enabling sharing of fibres between closely sited RRUs. However, the next major step in the evolution of fronthaul was to consider whether the CPRI protocol could be transported across a shared network such as Ethernet. The CPRI consortium created a new specification called eCPRI. They don't actually specify what the "e" stands for; it could be Ethernet (but it also works over IP), Evolved (but this was the term used to denote 4G over 3G) or Enhanced (which is my personal favourite). The eCPRI specification defines how to carry radio signals over a packet network. The underlying networks are not defined, but can include IP/UDP and/or Ethernet. The IEEE is also in the process of creating an open standard for the same thing, designated IEEE 1914. This is split into two parts: the radio-over-packet protocol (IEEE 1914.3) and the requirements on the underlying transport network (IEEE 1914.1).

The main feature of both of these is that the functions of the basestation are split into three parts: the CU (Centralized Unit), the DU (Distributed Unit) and the RRU. One reason for this is that carrying a radio signal across a packet network is very inefficient, especially for 5G, where the data rates are very high; the RRU connection may require a data rate of 25Gb/s or higher. The DUs can be located out in the network, close to the RRUs. The connection from the CU to the DU can be lower bandwidth, because carrying the raw data is more efficient than carrying the encoded radio signal. This connection is sometimes referred to as the "midhaul". A second reason is that some functions are very latency sensitive, limiting the length of the connection to a few km, as with the original CPRI interface. If those functions are located in the DU, the less latency-sensitive portions of the basestation function can be located further back in the network. This last piece enables the transition from Centralized RAN to Cloud RAN: the CUs can be located anywhere in the network, not just in a localized "baseband hotel".

Tim Frost, Strategic Technology Manager, Calnex Solutions

2020 Vision from Study Group 15

The ITU's Study Group 15 met in Geneva again this month. Study Group 15 is the group responsible for optical transport and access networks, including synchronisation. Once again, synchronisation was the biggest single topic, with over 80 contributions from many different companies.

One of the main drivers for the whole network at the moment is 5G. Study Group 15 has been asked to look into the development of the transport network for 5G, including the backhaul and access networks as well as the synchronisation. While 5G isn't planned to roll out until 2020 and beyond, some operators are planning field trials this year, and need equipment in place ready for those trials. 5G is really stretching the requirements on the network to the extreme. The basestations themselves are going to be split into three parts: the "central unit" (CU), the "distributed unit" (DU) and the "radio unit" (RU). Huge quantities of data will travel between each of these units, requiring the transport network to support 100Gb/s, with at least 25Gb/s last drops to the RUs. Synchronization between the CU, DU and RU will be extremely tight; some people are saying as tight as 130ns, depending on the radio techniques being deployed. That's over an order of magnitude tighter than the current 4G LTE requirement of 1.5µs.

The vision is that the 5G network will be used for many applications. First on the list is enhanced mobile broadband connectivity: more data to the handset, but also opening up the possibility of using mobile connectivity to replace existing wireline broadband connections. However, there's also a lot of buzz around "ultra-reliable low-latency communications" (URLLC), to be used for industrial systems and machine-to-machine communications. Then lastly, there's the Internet of Things (IoT). 5G offers a way to connect everything to the internet, and hence to each other. This type of massive connectivity might not require the headline Gb/s to the user equipment, but it does require the ability to connect millions more devices to the network. With so many different applications making different demands on the network, the concept of network slicing has been developed. Basically, this is creating virtual, independent networks out of a single 5G network infrastructure. It's not totally clear how this will be achieved yet, as there are several competing technologies, but it will be interesting to see how this develops over the coming months and years. Watch this space to see how the 2020 vision will be brought into focus.

Tim Frost, Strategic Technology Manager, Calnex Solutions

Emulation to Test 3G/4G Networks

If someone owns a laptop, smartphone or tablet, you can pretty much guarantee that they will also be a user of an application or platform which needs a network connection to transfer data in order to operate. Users rely on applications and platforms for both personal and business-related tasks, such as online shopping or real-time communication, and many require a vast amount of bandwidth to work effectively. Mobile users seek high-speed data services such as 3G and 4G to run their applications, and when connected they expect these networks and applications to perform! According to Gartner, smartphone sales will reach more than "4.5 billion by 2020, putting smartphones in the hands of more than half the world's population." To meet the needs of these billions of users, demand for 3G/4G and eventually 5G networks will only increase, as will the amount of data being transferred across them, and this can and will affect their performance. Due to the varying factors that can impact the performance of 3G/4G services, it is critical that mobile applications and platforms are robustly tested before deployment to ensure the end-user experience (UX) is exactly as intended.

3G/4G
Fourth-generation mobile technology (4G) superseded its predecessor, 3G, by providing mobile ultra-broadband (gigabit-speed) access for all-Internet Protocol (IP) based communications such as IP telephony. It provides faster upload/download speeds, faster browsing and reduced latency, and is the optimal mobile connection for demanding services such as playing games and streaming video.

How does it work?
Mobile network operators deploy thousands of cells, which are served by radio base stations. Devices and base stations communicate by exchanging radio signals: when a device discovers the nearest base station, it connects to it. A base station comprises many different elements, such as a tower, transceivers and antennas. The antennas transmit and receive voice and data signals to and from the device. Both 3G and 4G networks are IP-based, enabling data to be sent and received in the form of packets.

The importance of UX across 3G and 4G networks
UX is always a critical factor for applications used across many sectors, including telecoms, finance, broadcast, retail and manufacturing. However, one use case where UX and 3G/4G connection performance are crucial is healthcare.

3G/4G networks in healthcare
The use of 3G/4G mobile networks has enabled the healthcare industry to move from the traditional approach of doctors and nurses caring for patients in hospital to remote care. Today, patients and clinicians can benefit from medical applications with accurate diagnostic quality, drawing on large amounts of data anytime and anywhere. Devices such as fitness trackers used in conjunction with smartphone applications can provide information such as blood sugar levels and heart rates, remotely alerting medical professionals to potential problems and enabling them to take fast, appropriate and often preventative action. In emergencies, 'life-saving tool' applications help users locate services such as the nearest defibrillator in the case of a cardiac arrest. These can increase a victim's survival rate by enabling a user to take fast action and begin defibrillation and CPR before the emergency services arrive, but only if the connection enables the app to perform as intended. There is an extensive range of smartphone apps on which lives depend and, in these situations, time is precious and performance is critical, highlighting the demand for them to work effectively across 3G/4G networks.

The future and 5G
While remote and mobile healthcare promises to create efficiencies for the industry and transform the patient experience, with the even higher speeds of 5G expected in 2020, the pressure to ensure applications work across wireless networks will only increase, requiring continuous network performance testing in order to be a success.

What can be tested?
Quality of service and UX in mobile networks can be affected by many network characteristics, such as latency, packet corruption and bandwidth throttling. Using the Calnex SNE, you can create a 3G or 4G network, add these real-world impairments, and test how applications will perform under a specific set of conditions. Understanding how an application or platform will perform prior to deployment gives developers and managers the insight to address any issues and ensure they don't impede live performance. Using the flexible UI, you can visually build your simulation, using drag-and-drop functionality to select impairments from the extensive list and create a 3G or 4G network map. Equipment manufacturers and mobile operators can rely on the Calnex SNE product range to cover their entire wireless testing needs.

Anand Ram, VP Marketing

What is 1588 PTP?

PTP stands for "Precision Time Protocol", and is described in IEEE Standard 1588. It is a protocol for distributing time across a packet network. It works by sending a message from a master clock to a slave clock, telling the slave clock what time it is at the master.
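In practice the slave must also account for the time the message spends in flight, so IEEE 1588 uses a two-way exchange of timestamps. The standard offset and delay calculation can be written in a few lines (the example timestamps are invented for illustration):

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 two-way exchange:
    t1 = master sends Sync, t2 = slave receives it,
    t3 = slave sends Delay_Req, t4 = master receives it.
    Assumes the forward and reverse path delays are symmetric."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave time minus master time
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay

# Hypothetical exchange: slave clock running 100 units fast,
# one-way path delay of 10 units.
offset, delay = ptp_offset_and_delay(t1=0, t2=110, t3=200, t4=110)
```

Here the calculation recovers an offset of 100 and a delay of 10, so the slave would subtract 100 from its clock. The symmetry assumption matters: any asymmetry between the two directions appears directly as a time error.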

What is a PTP clock?

Following on from my post "What is Time?": a clock is simply a device that counts regular events from a common starting point. That applies to all clocks and calendars, with the possible exception of a sundial! The regular events might be days, months and years, or they might be pendulum swings, quartz vibrations, or atomic transitions. For example, my watch counts the resonant vibrations of a quartz crystal. I manually set the time to a known starting point, usually by comparing it to a clock that I assume has a more accurate version of the time in my time zone, for example my cell phone. In turn, my cell phone gets its time from the cellular network, which gets its time from a time server, which gets its time from a national time server, which gets its time from UTC. In that way, there is a loose traceability of my watch right back to UTC, even if the time is somewhat inaccurate by the time it reaches my watch. The crystal frequency of my watch doesn't exactly match UTC frequency, so the time gradually deviates from the correct value, and I periodically adjust my watch to keep it reading close to the correct time.

A PTP clock is no different. A PTP slave clock receives messages from a PTP master clock several times a second, and adjusts its time to match the incoming messages. In between these messages, it relies on counting the "ticks" of a local oscillator, typically quartz crystal vibrations, to keep the time advancing, just as my watch relies on its internal crystal. PTP boundary clocks work on the same principle: they receive time messages from a grandmaster clock, adjust their own time, and then pass that time along to subsequent boundary clocks or slave clocks. Clearly, at the end of the chain, the time is not as accurate as at the start of the chain; each clock introduces a certain amount of inaccuracy. For example, these errors might be caused by inaccuracy in the time adjustments, or by instability in the crystal causing the time to deviate between adjustments.

The performance of a clock can be defined in terms of four key parameters: noise generation, noise tolerance, noise transfer, and transient response or holdover. Noise generation is the amount of time error a clock introduces, measured with an "ideal" timing signal at the input. Noise tolerance is the amount of time error a clock can tolerate at its input and still function correctly. Noise transfer is how much noise a clock passes from its input to its output; in other words, how much the clock filters a noisy incoming signal. Holdover (sometimes called "long-term transient response") defines how long a clock will maintain accurate time if the input timing signal is lost. In subsequent posts I will look in more detail at each of these parameters.

Tim Frost, Strategic Technology Manager, Calnex Solutions
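The "adjust on each message, drift in between" behaviour described above can be illustrated with a toy simulation. This is a deliberately simplified sketch with invented numbers, not a model of any real PTP servo, which would also have to filter packet delay variation:

```python
def discipline(master_times, slave_start, drift_ppm, interval_s=1.0, gain=0.5):
    """Toy clock servo: a slave clock with frequency drift receives
    periodic time messages and corrects a fraction of the measured
    offset each time. Returns the residual error after each correction."""
    slave = slave_start
    errors = []
    for master in master_times:
        offset = slave - master
        slave -= gain * offset   # partial correction of the measured offset
        errors.append(abs(slave - master))
        # free-run until the next message: the drifting local oscillator
        # accumulates error between corrections
        slave += interval_s * (1 + drift_ppm * 1e-6)
    return errors

# Slave starts 0.5 s off, with a 10 ppm frequency drift;
# one message per second for 20 seconds.
errors = discipline([float(t) for t in range(20)], slave_start=0.5,
                    drift_ppm=10.0)
```

The residual error shrinks toward a small floor set by the drift accumulated between messages, which is one intuitive way to see the noise generation and holdover parameters defined above: better oscillators drift less between corrections, and hold time longer when messages stop.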
