The world’s second-largest telecom operator, Vodafone, facing taxation issues in India, on Thursday said it was difficult for foreign companies to do business in the country because of slow government clearances. Vodafone India Chief Executive & Managing Director Marten Pieters said the firm had in December last year sought the government’s approval to bring in funds from the parent company for buying airwaves, but the clearance was still awaited. “Yes, it is difficult to do business in India. That is the perception of foreign companies in general, not only telecom ones,” said Pieters. Source

[via India Telecom News]

Follow us @wirelessheat – lists / @sectorheat

Last year I blogged about Asia’s final frontier in telecom. This week I had the privilege of visiting Yangon and meeting some of the movers and shakers of the telecom industry in Myanmar. Since last year, the ambitious and recently reformist ASEAN republic has licensed two international operators, Telenor of Norway and Ooredoo of Qatar, to build out mobile networks, and has entered into agreements with other investors to put money and know-how into developing the industry. Myanmar is the 73rd country I have worked in, and from what I saw this week, it promises to be one of the most illuminating. The country’s senior policymakers, as a group, are quietly shaping the emergence of an economy which was largely forgotten by the rest of the world until a decade ago, evidently with a sharp appreciation for what needs to be done to keep growth on track. What is Myanmar doing right in telecom, and what can we learn in India from this? Source

India’s telecom industry is at the cusp of a new beginning. After hitting many lows in the past, the sector is readying itself for better times. A new government is in place, investments are being poured in by companies like Reliance and, most importantly, international collaborations are being encouraged. Developments such as India and the UK going the collaboration route in next-generation telecommunications; the new telecom minister’s assurance to set things right and continue the focus on quality and domestic manufacturing; and RIL’s plan to invest Rs 30,000 crore will build confidence among the investor fraternity. Source

Mobile advertising in India is growing at a record pace, largely fuelled by the uptake of smartphones. India is the single most powerful driver of the Asia-Pacific market, with mobile ad impression volume growing 260 per cent since July 2013, according to the State of Mobile Advertising report released by Opera Mediaworks. There is a country-wide shift from feature phones to smart devices, dominated almost entirely by the Android platform (a 41.7 per cent share, compared with 0.4 per cent for iOS). Opera Mediaworks operates a brand-focused mobile advertising network, serving 24 of the top 25 global brands. It also delivers mobile advertising server and monetisation tools to 18 of the top 25 media companies worldwide. Source

Broadband subscribers in the country grew by a mere 2.87 per cent in July, to 70.81 million. The Telecom Regulatory Authority of India (TRAI) said the number of subscribers at the end of June was 68.83 million. For the second month in a row, the largest growth was seen in mobile device users (phones + dongles), at 3.56 per cent. Growth in wired subscribers was a mere 0.45 per cent, and growth in fixed wireless (Wi-Fi, WiMAX, point-to-point radio & VSAT) was 1.54 per cent. The top five broadband service providers held an 85.12 per cent share of total broadband subscribers at the end of July: BSNL (18.14 million), Bharti (15.61 million), Vodafone (11.23 million), Idea Cellular (9.06 million) and Reliance Communications Group (6.23 million).
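The figures quoted in the TRAI release are easy to cross-check; a minimal sketch, using only the numbers reported above:

```python
# Sanity-check the TRAI broadband figures quoted above (all counts in millions).
june_total = 68.83
july_total = 70.81

growth = (july_total - june_total) / june_total * 100
print(f"Month-on-month growth: {growth:.2f}%")
# ~2.88%, consistent with the reported 2.87% after rounding of the totals

# Top five broadband providers at the end of July
top_five = {
    "BSNL": 18.14,
    "Bharti": 15.61,
    "Vodafone": 11.23,
    "Idea Cellular": 9.06,
    "Reliance Communications Group": 6.23,
}
share = sum(top_five.values()) / july_total * 100
print(f"Top-five market share: {share:.2f}%")  # 85.12%, as reported
```

The small discrepancy in the growth figure comes from TRAI publishing totals rounded to two decimal places.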

The Cabinet on Wednesday approved a blueprint for the Digital India programme, which envisages that all government services will be delivered electronically by 2018. It also seeks to provide unique identities to all citizens. The programme aims to “bring public accountability through mandated delivery of government services electronically” and provide a “unique ID and e-Pramaan, based on authentic and standards-based interoperable and integrated government applications and data bases”. Digital India would provide “high-speed internet as a core utility” down to the Gram Panchayat level and a “cradle-to-grave digital identity — unique, lifelong, online and authenticable”, said an official statement, adding that the unique IDs would facilitate identification, authentication and delivery of benefits. Source

By Vinay Rathore, Infinera Director, Product Marketing.

I was recently visiting Hong Kong, and as I was going through an industrial district in Kowloon I noticed the construction of a few enormous, nondescript buildings with very few windows. It got me wondering: was this some sort of penitentiary or secretive government building? Then I noticed that there was not enough barbed wire for a jail, nor any visual indication of a secretive government facility. Fortunately, a quick search on my cell phone’s GPS maps revealed that it was just another large data center.

While there has been a strong trend toward creating bigger and more powerful centralized data centers to meet end-user needs, there has been another, less noted, trend toward pushing the data center closer to end users for the same reason. Large data centers create efficiency through the centralization of resources (e.g., uninterruptible power supplies, generators, HVAC and other networks). The goal is to minimize cost while maintaining maximum functionality and reliability of the data center itself. However, many end users find that they prefer certain mission-critical elements of their IT infrastructure to be physically closer to their offices, rather than in a central data center that could be far away. This has given rise to the need for a data center solution physically closer to the customer (aka the virtualized data center), which is essentially a network extension of the centralized data center into space that is physically closer to the end user, while still offering many of the traditional data center services. For some locations, such as a remote business park, building a full-service but smaller-scale data center may be justified; for others, a virtual data center is the next best option.

Why large centralized data centers are not for everyone

Today, enterprises are more dependent upon their IT infrastructure than ever before.
Further, they no longer want the burden of managing IT complexity, a function typically far from their core competency. Instead, they simply prefer to move IT resources into a location where space, power and network access is abundant and can be managed remotely. For some enterprises, this model creates a dilemma. The question that arises is, “How should I treat my mission-critical applications in a data center that is 80 kilometers away, is shared with others, and in some cases is not easily accessible (physically, that is)?” Some business applications, such as certain Human Resources or customer/marketing applications, don’t care about such parameters, but others, such as proprietary algorithmic trading, 3D modelling applications, or high-volume transaction-oriented processes, may suffer due to latency and the need for greater control. The alternative is to keep an IT facility onsite, but ideally there is a preference for someone else to manage it.

Why most large enterprises will need virtualized data centers

As more enterprise users move toward cloud-based applications, performance and speed of innovation (aka speed of change) become important. This has driven the desire to have some portion of infrastructure located in a high-performance, controlled IT facility, while other portions operate over a public infrastructure. This desire has created a new cloud concept known as the hybrid cloud, defined as having some portion of cloud infrastructure operating in a public cloud facility and another portion operating in a private facility, usually to meet specific end-user demands such as security, location, accessibility and reliability.
Many large enterprises, including financial and retail enterprises, can build their own private cloud with infrastructure they build themselves, or outsource it to a data center operator/integrator, who could own and maintain the space and connect it back to the main data center, where a rich set of other cloud services may be available. They could also supplement a public cloud offering with such a private cloud, thus offering a full hybrid cloud solution.

What is a Virtual Data Center?

The concept of the virtual data center is about both revenue and opportunity. A virtual data center is simply an extension of a larger centralized data center that offers similar services, with the added benefit of catering to more specific enterprise needs. By addressing large enterprises, perhaps through a customized virtual data center solution, data center operators can attract larger, more profitable end users while establishing a footprint in a new market segment. In fact, many Fortune 500 companies have already engaged in such strategies by building their own private networks using leased space and contracting companies to manage the network. The virtual data center concept has proven popular in large metropolitan locations, where customers are spread out across larger distances.

Overcoming the network challenge

The key challenge is how to extend the network from the central data center to a virtual location efficiently. The key operational challenges concern space utilization, power consumption and the need for a high-performance network. Two key technological innovations enable such solutions to become reality: photonic integration and optical super-channels.
Photonic integration

The value of photonic integration lies in replacing multiple discrete components with a single, highly integrated optical circuit, also known as the photonic integrated circuit (PIC). A key value of the PIC is that it reduces space and power while ultimately providing higher-performance network capacity. It follows the same philosophy as the integrated circuits in our laptops, which replaced discrete transistors that were much larger and consumed much more power. Ultimately, the real value of photonic integration is that it takes the highest power-consuming components (e.g., network-side lasers) and integrates them into a small, compact device that consumes significantly less power and space.

Optical super-channels

Optical super-channels are defined as a group of smaller, more granular optical channels that are bundled into a single, larger optical group that provides equivalent high performance, but also adds the simplicity of managing fewer circuits. For example, would you rather manage 50 x 10G fiber circuits or 5 x 100G fiber circuits? If we agree that PIC-based optical super-channels are the simplest and most cost-effective way of deploying network capacity, the next question is one of reliability. Fortunately, PIC technology is so reliable that it features an expected Failure in Time (FIT) rate equivalent to more than 1 billion hours between failures (Source: Infinera). The result is increased reliability through photonic integration and simplification through super-channels that deliver performance.

Networks that maximize the Return on Investment (ROI)

Leveraging both photonic integration and optical super-channels will not only help drive greater network efficiency and operational efficiency, but also provide opportunities to increase ROI. However, 500Gb/s is a lot of capacity, while a particular customer’s forecasted need may be only about 100-200Gb/s. To solve this dilemma, the super-channel can be sliced whenever needed through software activation.
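A minimal sketch of this software-activated slicing, assuming a 500Gb/s super-channel activated in 100Gb/s increments; the `SuperChannel` class and its methods are purely illustrative, not a real vendor API:

```python
# Illustrative model of slicing a 500 Gb/s super-channel via software
# activation (hypothetical API, invented for this sketch).
class SuperChannel:
    INCREMENT_GBPS = 100  # capacity is turned on in 100 Gb/s slices

    def __init__(self, total_gbps=500):
        self.total_gbps = total_gbps      # installed optical capacity
        self.activated_gbps = 0           # capacity the customer pays for

    def activate(self, gbps):
        """Turn on additional capacity in 100 Gb/s increments."""
        if gbps % self.INCREMENT_GBPS != 0:
            raise ValueError("capacity is activated in 100 Gb/s increments")
        if self.activated_gbps + gbps > self.total_gbps:
            raise ValueError("exceeds installed super-channel capacity")
        self.activated_gbps += gbps
        return self.activated_gbps

# A customer forecasting 100-200 Gb/s activates only what it needs now...
channel = SuperChannel()
channel.activate(100)
# ...and grows later without any new hardware being installed:
channel.activate(100)
print(channel.activated_gbps)  # 200
```

The point of the sketch is the design choice: capacity is deployed once, optically, and then monetized incrementally in software.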
Namely, since optical super-channels and photonic integration are tightly coupled, customers can simply activate the bandwidth they need, in 100Gb/s increments, using a few simple keystrokes, similar to activating software on your computer.

Conclusion

As the data center market continues to evolve, large centralized data centers and smaller virtual data centers closer to the end users will co-exist. The hybrid cloud concept plays a role in that it addresses the need for large enterprises to keep certain mission-critical resources close to them while locating other assets in larger, more economical and centralized facilities. This solution also creates opportunity for data center operators to offer value-added services, from basic virtual storage and computing services to the fully outsourced IT solutions that make operators more indispensable to the enterprise. The critical element in this solution remains the network, which must be simple and efficient. From the technology angle, photonic integration and optical super-channels may be critical to ensuring the deployment of simple, efficient and high-performance virtual data center solutions.

Vinay Rathore is Senior Director, Solutions Marketing at Infinera. Mr. Rathore brings over 20 years of telecom experience across a broad array of technologies. He has helped some of the world’s largest operators and suppliers, including Sprint, Global One, MCI, Alcatel and Ciena, build and market their newest solutions. His areas of expertise include network engineering, operations, sales and marketing in both wireline and wireless systems, as well as leading-edge network solutions spanning Layer 0 to Layer 3. Mr. Rathore holds a degree in Electrical Engineering from Virginia Polytechnic Institute as well as an MBA from the University of Texas.

The move to intelligent transport networking makes it possible to virtualize optical bandwidth to create the scalable and agile Wide Area Network needed to support cloud services, says Andrew Bond-Webster, Vice President, APAC, Infinera.

The world now looks to the APAC region as a dynamic and fertile business environment, accelerated by rapid uptake of the most advanced IT solutions, including cloud computing. And yet APAC cloud providers face a particular challenge, according to CloudEthernet Forum (CEF) President James Walker, who is also VP, Managed Networks Services for Tata Communications. In the latter role Walker has a lot of experience in meeting global cloud requirements, such as linking datacentres in America, Europe and Asia. Typically these datacentres will be close to each other in America and in Europe, but the APAC ones may be scattered anywhere between Delhi, Hong Kong, Singapore, Tokyo… even Sydney. “In other words, they are widely separated,” says Walker, “and this has a very broad impact on how people design the network and datacentre environments – which then has a knock-on effect across the entire globe.” He goes on to say: “It’s not so vital to manage large capacity between sites that are quite close together. But, if you’re carrying capacity internationally, it becomes absolutely critical. So, I think the Asia Pac region has got a head start looking at these problems.” So, how can the region’s cloud providers support reliable, high-performance networking over such distances – in particular across submarine cables that are relatively inaccessible and in a harsh environment?

Advancing the WAN

An important starting point is to simplify the complex hardware requirements of optical-to-digital conversion, switching and reconversion, which used to demand racks of equipment needing constant manual maintenance. This has been achieved by the industry’s rapid uptake since 2005 of Photonic Integrated Circuits (PICs).
Advances in PIC technology now mean that a single line card, launched in mid-2012, can deliver a ten-carrier, 500Gbps super-channel, incorporating over 600 optical functions in a single chip. The advantage of one super-channel over the equivalent array of parallel channels is that it virtualises them into one massive broadband “pool” that can be dynamically re-divided into any number of virtual channels to provide rapid scaling and re-configuration in the network, mapping any service, from 1 GbE up to 100 GbE, into those bandwidth pools.

This level of agility and network virtualisation requires Software Defined Networking (SDN), in which a centralized SDN controller, with a view of the entire network, can make rapid path-setup decisions based on the needs of applications running on top of the controller. Applications could then simply request a path across the network, and the SDN controller would provide an optimal path (in terms of cost, performance and application needs) at the right layer with full automation.

SDN is already making inroads in the datacentre, with simplification and streamlining of networks at the IP and Ethernet layers, but when it comes to linking datacentres across the WAN there are significant challenges. For a start, existing transport networks represent a massive capital and operational investment built on analogue fiber-optic technology. Thanks to PICs’ integration of OTN switching and WDM interfaces and the creation of super-channels, however, the industry is rapidly evolving towards a next-generation converged optical transport layer more suitable for SDN management. As Walker explains: “SDN within the WAN environment can provide adaptability, it can provide a better utilisation of assets between datacentres a long way apart. That’s very valuable to operators, and something that gets a particular focus in the Asia Pacific region because of the distances we’re having to cover.”
He adds a caution, however: “SDN exists within the datacentre and it exists within the WAN environment. Today those two domains might not talk directly to each other: so the datacentre SDN may well be the enterprise’s SDN and the SDN in the carrier environment is the carrier’s. There isn’t yet a defined set of standards for how those two might talk to each other.” That, of course, is one key aim of the CEF: to help define such standards fast enough to keep pace with cloud demands.

An important role for “Carrier SDN” will be to manage the converged optical transport layer provided by super-channels, to provide optimal support for multi-domain, multi-vendor and multi-layer networks. Because the super-channel presents available bandwidth as a single large resource, the controller has no need to carry detailed data on the complexities of the underlying fibre transport and switching. The controller is not required to micro-manage the details of optical technology; it simply maps services onto massive available bandwidth without significant degradation of its own performance.

The benefits of Carrier SDN

The benefits of this programmable, automated and open approach include:

• Scalability for fast deployment of optical bandwidth to support the bandwidth demands incurred by the router layer or by direct inter-datacentre communications.
• Efficient use of resources by optimizing network paths for the application and orchestrating multi-layer protection to minimize over-provisioning.
• Lower operating expense by automating the network across layers – eliminating device-by-device configuration at the router layer and coordinating with the transport layer.
• Faster time-to-revenue by allowing router-layer configuration to be simulated in software and fine-tuned in advance of being finalized and rolled out.
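The controller behaviour described above — an application asking for a path and the controller returning the optimal one for its needs — can be sketched as a toy program. The topology, latency figures and function names here are all invented for illustration; a real Carrier SDN controller exposes far richer interfaces:

```python
# Toy model of a centralized SDN controller choosing a path for an
# application request (candidate routes and cost model are invented).
PATHS = {
    ("HongKong", "Singapore"): [
        {"route": ["HongKong", "Singapore"], "latency_ms": 35, "cost": 10},
        {"route": ["HongKong", "Tokyo", "Singapore"], "latency_ms": 80, "cost": 6},
    ],
}

def request_path(src, dst, max_latency_ms):
    """Return the cheapest candidate path that meets the latency requirement."""
    candidates = [p for p in PATHS[(src, dst)]
                  if p["latency_ms"] <= max_latency_ms]
    if not candidates:
        raise RuntimeError("no path satisfies the application's needs")
    return min(candidates, key=lambda p: p["cost"])

# A latency-sensitive application gets the direct route despite its cost;
# a bulk-transfer application (tolerating 200 ms) would get the cheap one.
best = request_path("HongKong", "Singapore", max_latency_ms=50)
print(best["route"])  # ['HongKong', 'Singapore']
```

The design point is that the application states its needs and the controller, with its whole-network view, makes the trade-off — no device-by-device configuration.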
According to analyst Tim Dillon of Tech Research Asia, SDN is “absolutely fundamental” to addressing key issues around traffic prioritisation and application performance in the WAN. As well as the demands of cloud computing, he points out that BYOD is a growing challenge across the region – including Australia, New Zealand, China and India – all markets that will welcome SDN in the WAN. Hugh Ujhazy, Principal Analyst at Current Analysis Asia, agrees, pointing out that APAC is a very diverse region with a multiplicity of regulations and developmental levels. “Cloud is a huge opportunity for everybody, beginning with the mature markets of Australia, Hong Kong, Singapore etc with extensive on-premise environments. For them, SDN and similar technologies are going to be a way of managing their costs, of improving their utilisation and making the migration step from on-premise to cloud.”

The way forward

Bearing in mind carriers’ existing infrastructure, their massive investment in legacy optical technology and the problems of upgrading complex systems that are an on-going revenue source – how fast can we expect to see Carrier SDN catching on across the region? In June 2013 Infinera gave a first demonstration of Carrier SDN at the Nissho Labs NETFrontier Center in Tokyo. The demonstration featured a prototype Open Transport Switch running on Infinera PICs and working in conjunction with an external SDN controller and various network applications. The controller provisioned bandwidth services on demand across a network of PIC nodes using the OpenFlow protocol. The demonstration leveraged service-ready super-channel capacity, flexible bandwidth virtualization and a standards-based control plane to demonstrate the functionality and potential of Carrier SDN. The response to this demonstration has been very strong.
Interest was already there because of the technology’s ability to deliver high-speed, high-capacity super-channels over very long distances, suiting both terrestrial and subsea networks. Recent deployments of the underlying PIC super-channel approach have included such big names as Telstra Global, on its network of multiple ultra-long-haul submarine cable routes, to optimise capacity and provide greater scalability and reliability; and Australia-Japan Cable’s submarine network. Pacnet’s new optical network infrastructure based on 500Gbps super-channels took only nine weeks to deploy, and its customers can now survive multiple fibre cuts with less than 50ms recovery – a vital assurance in subsea networks. Across the region, carriers and service providers are turning into cloud operators and are already adopting 100Gbps PICs. Now all eyes are on the potential of this technology and the Carrier SDN future, in which the service provider will be able to automate bandwidth delivery in real time in response to customer requests, without the operating costs of manual processes and workflows. Soon carriers will be able to develop innovative new services, free from the constraints of proprietary network operating systems and traditional management systems. What’s more, Carrier SDN will enable them to bring these services to market in record time.

Interfacing analogue and digital systems is a bit like mixing oil and water. But the latest integrated optical/digital chips pack all that hard work into a tiny, fingertip-sized component – condensing hundreds of optical components into a single ultra-reliable microchip. Digital switch functionality plus massively scalable capacity enables a game-changing Intelligent Transport Network.

Optical networking has no equal for long-distance point-to-point transport in terms of its reliability, security, speed and, above all, potential capacity. The challenge is to find ways to exploit that speed and capacity in a way that serves the complex routing needs of a data network. According to a June 2013 Infonetics Research survey, 86% of respondents are planning to use Optical Transport Network switching in the core, and by 2016, 94% of those want the switching integrated with WDM interfaces. Routing light signals on the fly would require switching decisions made faster than the speed of light – and that is assumed to be impossible. So complex switching and routing functions rely on optical-to-electronic signal conversion so the data can be read and processed before conversion back to optical signals for re-transmission along the intended path. This is where the delays and inefficiencies originate in what might otherwise be a near-perfect networking medium. The key to progress is to increase the efficiency of the switching process, where conventional optical components demand a lot of equipment to manage both the conversions and the routing. In a large carrier network this means racks of equipment burning a lot of electricity, taking up space and requiring costly and time-consuming manual maintenance.

The PIC solution

If it were possible to pack all those functions into one small component, it would result in a step change in efficiency, density and flexibility, and lower OpEx.
This was first achieved in 2005 with the launch of the first Photonic Integrated Circuit (PIC), delivering ten 10Gbps DWDM channels between a pair of PICs. Within eighteen months of the first shipments, the solution had taken the number-one market position in North America’s 10G market – a “real world” confirmation of the superiority of this approach. Development of this technology has its precursor in the evolution of microprocessors and application-specific integrated circuits (ASICs), so rapid progress can be made using already-proven technology. Already there are 500G DTN-X PICs, and Terabit and beyond models are on the cards. While the original version used the relatively simple IM-DD (Intensity Modulation with Direct Detection) in each channel, the subsequent explosion in bandwidth demand means that a lot of work has since been done to develop phase-based modulation with coherent detection, as used widely in radio transmission. This approach is less vulnerable to signal degradation over long distances and, at 100Gbps per channel, provides a ten-fold increase in capacity – but at the cost of a twenty-fold increase in the number of processing components. Such added complexity would be prohibitive were it not for the component density and efficiency made possible in today’s integrated circuits. The other important development is the ability to integrate a number of separate optical channels to perform as one large “super-channel” and allow much greater flexibility of bandwidth delivery. This can be likened to the difference between a number of parallel single-lane highways all travelling at the same speed versus the flexibility of a multi-lane motorway: rather than ten 10Gbps Ethernet circuits each being allocated a distinct 10G DWDM carrier, the operator has a 100G super-channel that can be dynamically shaped into any number of virtual channels and capacities to match actual client requirements.
This can be achieved using ten parallel lasers, each operating at a maximal data rate, and again it is the compactness and efficiency of integrated circuitry that makes such complexity not only possible but also a practical solution. A key strength of DWDM optical transmission has been its almost limitless potential capacity, but the downside has been its lack of flexibility between rigidly determined channels. The use of PICs to create super-channels transforms optical networking to enable an Intelligent Transport Network that combines the scalability of DWDM transmission with the simplicity and functionality of converged digital switching. Just as the move to integrated circuits has made it possible to leverage developments in microprocessor technology, so has the Intelligent Transport Network gained from developments in digital networking – notably Software-Defined Networking and its use of a Control Plane that is distinct from the Data Plane. In place of time-consuming manual adjustments to racks of optical equipment, today’s Intelligent Transport Network has an end-to-end carrier-grade Control Plane that can automatically reconfigure the network to meet the requirements of customers and applications. For the carrier it has changed the rules of the game – carriers are already building networks that scale to terabits without requiring forklift upgrades. Networking complexity, while growing in physical detail, is being effectively reduced for the operator, who can now reshape the architecture with “plug and play” ease and rely on digital multilayer automation to take care of on-going operations. At the same time the carriers can enjoy the greater efficiency of converged functions, higher density, lower maintenance and reduced power consumption.

Intelligent Transport Network in practice

What does this mean in “the real world”?
Carrier networks require a massive investment in infrastructure, and their daily operation is too critical to allow major changes to be rolled out without a lot of careful planning. The 500Gbps super-channel solution has only been around since mid-2012, but there are already 46 DTN-X customers globally, with over 1 Petabit/second of 500G super-channel capacity in service, and major carriers are reporting significant benefits. Pacnet, based in Hong Kong and Singapore, is a leading service provider to the enterprise and carrier markets, serving an APAC market that includes many of the world’s fastest-growing economies, populations and broadband internet demands. Pacnet owns and operates a pan-Asian submarine cable network with 19 cable landing stations, and points of presence from India to the USA. In one of the largest regional deployments of 100Gbps technology to date, Infinera helped Pacnet build out its new optical network infrastructure based on 500Gbps super-channels within nine weeks. Its Intelligent Transport Network incorporates a new standards-based resiliency technique that allows networks to recover from failures without the need to dedicate backup bandwidth for each active circuit. A purpose-built hardware acceleration chip included in the DTN-X means that customers can now survive multiple fibre cuts with less than 50ms recovery – a vital assurance in subsea networks in a hostile environment where it is not easy to access the cable for repair and maintenance. In addition to space and cost savings from the compact solution, which consumes up to 50% less power, cable-landing stations have been simplified, with fewer failure points for easier maintenance and troubleshooting.
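The appeal of recovering without per-circuit backup bandwidth is easy to see in a toy calculation; the circuit counts and failure assumptions below are invented for illustration, and real shared-restoration savings depend entirely on topology:

```python
# Toy comparison: dedicated 1+1 protection vs. a shared restoration pool
# (all numbers invented for illustration).
circuits = 10          # active circuits to protect
capacity_g = 100       # Gb/s per circuit

# Dedicated 1+1 protection: every active circuit gets its own standby copy.
dedicated_backup = circuits * capacity_g

# Shared restoration: a pool sized for the worst expected number of
# simultaneous failures (here, 2 concurrent fibre cuts) serves all circuits.
max_simultaneous_failures = 2
shared_backup = max_simultaneous_failures * capacity_g

print(dedicated_backup, shared_backup)  # 1000 200
```

In this toy case the shared pool reserves a fifth of the backup capacity of dedicated protection, which is why such techniques matter on expensive subsea routes.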
Most important, in a business environment where customers want more speed and capacity at lower cost and on demand, Pacnet can now provision services rapidly between any two terrestrial locations, even across a long-reach subsea network, and circuits can be reconfigured in seconds while maintaining maximum capacity and flexibility. According to Pacnet CTO Andy Lumsden, the solution “greatly advances our Layer 3 to Optical convergence” and gives Pacnet “a clear advantage in offering fully protected services and restoration capability for our customers.” Telstra Global is another company upgrading long-haul submarine cable networks, in this case across its North Asia, Hawaii-to-California and Sydney-to-Hawaii routes. Darrin Webb, Chief Operating Officer for Telstra Global, says: “Demand for network services in the Asia Pacific region is growing exponentially and the addition of Infinera’s DTN-X platform means we will be well placed to meet the speed and capacity needs of our customers.” Now able to deploy highly reliable, differentiated services while reducing costs through scale, multi-layer convergence and automation, Telstra Global is another great example of how long-haul super-channels are benefitting global business and linking distant communities.

Ideal for the APAC market

According to James Walker, President, CloudEthernet Forum, the APAC region faces a special challenge for cloud services, because the datacentres are so far apart. Whereas datacentres in the USA, and especially in Europe, are likely to be fairly close: “typically in Asia they’re in Hong Kong and Singapore, or Tokyo and Hong Kong, or Sydney. In other words, they are widely separated and this has a wide-ranging impact on how people design their network and datacentre environments – which then has a knock-on effect across the entire globe.” Managing network capacity becomes extra critical across these greater distances, and reliability and efficiency are vital for submarine cabling.
Not surprisingly, APAC carriers are showing a lot of interest in the flexibility and easier management offered by today’s high-capacity super-channel networks.

[via India Telecom News]


Increased enterprise agility and a dramatic boost to user productivity are promised by the latest wave of dynamic Carrier Ethernet (CE) services now coming to market – a market set to grow by several percentage points to over $50bn globally in the next five years. Research and analysis firms Frost & Sullivan, Vertical Systems and Infonetics are all predicting an Ethernet services market worth approximately $50 billion by 2015, several percentage points ahead of its present position.

The MEF (Metro Ethernet Forum) is the catalyst behind today’s $45B global Carrier Ethernet services market. At the MEF’s inception in 2001, the “metro Ethernet” market was fragmented into a number of services – e.g. Optical Ethernet, Switched Ethernet and Metro Ethernet – with vastly different capabilities, often without carrier-class features or service level agreements (SLAs) and limited to “best effort” performance. The MEF created a collaborative environment, including service providers and network solution providers, to jointly define and standardize “Carrier Ethernet” into today’s high-quality service. By creating technical specifications and implementation agreements, and by certifying services, equipment and people, the MEF has enabled a holistic ecosystem responsible for Carrier Ethernet’s subsequent market growth.

What now, and what is needed

Packet-centric applications now dominate circuit-based applications, and voice, video and data all share a common network infrastructure, with the attendant risk of conflict and service degradation. Voice communication is decoupled from the underlying infrastructure of telephones and PSTNs and runs as an “app” on devices connected to the Internet. No longer is the service simply up or down, with the presence or absence of a dial tone: VoIP can suffer impairments such as echo or voice distortion through dropped or delayed delivery of voice packets.
A better service can be assured using private networks, but at the cost of reduced flexibility in activation times and purchase models, where service providers require long-term leases to commit to the required service assurances. We are, however, moving rapidly towards an even more dynamically connected future. Machine-to-machine (M2M) communications will push connectivity far beyond the number of connected humans, with connected cars, smartwatches, tablets, intelligent control systems and sensors coming online and communicating to automate our lives. Each of these applications will demand its own service levels, and degradation will be unacceptable in many cases. This will only be possible if the network infrastructure transforms to enable cloud and mobile services that connect people and machines in real time, on demand, with assured quality of service (QoS) and quality of experience (QoE).

As a practical example, consider mobile workers connecting over the Internet via IP VPN to their office network. This is fine for checking e-mail, swapping documents and so on, but critical communications such as a videoconference can suffer degradation from other users sharing the access link or from congestion in the ISP’s network. It should be possible to request (and be billed for) a higher-performance connection to the office just for the duration of the session.

For a second example, an enterprise subscriber wants a network service to interconnect its locations with its virtual machines (VMs) or Virtual Network Functions (VNFs) in a remote data center. This is only possible by using a number of transit service provider networks between the data center and the locations. Each of these network operators needs to orchestrate the setup of an appropriate internal network, and each of these operator-specific orchestrations needs to be reconciled with the others to deliver the full end-to-end service required.
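The multi-operator reconciliation in the second example can be sketched in a few lines. The class and operator names here are invented for illustration; the point is simply that the end-to-end service is only live once every transit segment has confirmed its own orchestration.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One operator's portion of an end-to-end virtual connection."""
    operator: str
    provisioned: bool = False

    def orchestrate(self, bandwidth_mbps: int) -> bool:
        # In reality each operator drives its own SDN controller and
        # internal network setup here; we simply mark the segment as up.
        self.provisioned = True
        return self.provisioned

def provision_end_to_end(segments, bandwidth_mbps):
    """The service is up only when every transit segment confirms."""
    return all(s.orchestrate(bandwidth_mbps) for s in segments)

path = [Segment("access-provider"), Segment("transit-carrier"), Segment("dc-operator")]
assert provision_end_to_end(path, 500)  # all segments reconciled -> service live
```

The hard part the article alludes to is that each `orchestrate` call belongs to a different operator's OSS, which is why a common orchestration framework is needed at all.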
Orchestration between the service provider and the cloud provider is required to automate the ordering, provisioning and management (OAM) of the virtual connections across each respective network and to set up the physical and virtual endpoints. This is a complex job that can take months, but it should be delivered promptly, on demand, to meet real business needs. To support agile business we need connectivity between physical or virtual endpoints with dynamic attributes to suit on-demand cloud services. Real-time applications that monitor performance should evolve to automatically request, or prompt the user to request, different classes of service as needed – e.g. reduced packet loss for the duration of a videoconference. The customer need only input basic information to order the service – e.g. service endpoint locations and service bandwidth, in addition to billing information – in a manner similar to ordering cloud services, where components are ordered, fixed and recurring costs are totalled, and the order is then submitted.

Progress to date

The challenge of deploying networks across third-party access vendors is already being addressed by a combination of existing technologies: Carrier Ethernet’s ubiquity and standardized connectivity; software-defined networking (SDN); Network Functions Virtualization (NFV); and real-time big data analytics that correlate data from the many network elements and OSS and continuously analyse it. Whereas barely 20% of Carrier Ethernet services used to succeed first time, and it could take over a hundred days to turn up a circuit, these principles have improved inventory integrity to over 90% accuracy, significantly reducing fall-out and improving time to market – while ongoing automation of auditing and inventory updates is cutting OpEx. The solution began with a data audit, extracted and mapped to a structured format – including OSS sources, activation notices, SLA agreements with access vendors, Excel spreadsheets and inter-carrier agreements.
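The cloud-style ordering model described above – endpoints plus bandwidth, fixed and recurring costs totalled, then submit – can be illustrated with a minimal sketch. All names and prices are hypothetical, not any provider's actual tariff or API.

```python
from dataclasses import dataclass

@dataclass
class EthernetOrder:
    """A cloud-style order: endpoints, bandwidth, and billing inputs only."""
    endpoint_a: str
    endpoint_z: str
    bandwidth_mbps: int
    setup_fee: float          # fixed, one-off cost
    monthly_per_mbps: float   # recurring cost per Mbps

    def first_invoice(self) -> float:
        """Fixed and recurring costs totalled before the order is submitted."""
        return self.setup_fee + self.bandwidth_mbps * self.monthly_per_mbps

order = EthernetOrder("London-DC", "Frankfurt-DC", 100,
                      setup_fee=250.0, monthly_per_mbps=4.0)
print(order.first_invoice())  # 250 + 100 * 4 = 650.0
```

The point of the model is what it leaves out: the customer never specifies VLANs, transit carriers or equipment – those are the orchestrator's job.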
Automated continuous audit can now identify bad data and even assign a quality indicator to simplify integrity assessment. Continuous correlation plus big data analytics identify risky changes and check consistency and value ranges, and warnings are transmitted to the data owner. The system also provides a graphical overview of the topology, revealing actual circuit inventory details and simplifying ordering, provisioning and service assurance.

In a second example, workflow automation is cutting costs and accelerating service turn-up, leading to rapid growth of the provider’s footprint and capacity. An Additional Services Request (ASR) – e.g. Move, Add, Change or Delete network functions – is transmitted to the access vendor by web form, and changes are automatically broadcast to all network elements, without delay or risk of human error. This includes populating test equipment with updated test configurations so that SOAM tests run automatically, and results are collated and reported.

In a third example, real-time feeds are taken from existing monitors and summarized in a single customizable dashboard – registering alarms and correlating them to circuit-segment states. Thresholds are set for each access vendor and used to benchmark SLA performance, so reports can flag exception events and leverage historical data to determine trends. Without manual work, the provider now benefits from lower MTTR and faster triage and root-cause analysis – thanks to rapid, accurate isolation of degradation and better SLA penalty capture with authoritative proof and reporting.

Conclusion

The Carrier Ethernet market has reached a turning point. A victim of its own success, it has given business a taste of global networking benefits, and is now struggling to deliver those advantages as seamlessly and as fast as agile business requires.
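The per-vendor SLA thresholding in the third example reduces to a simple filter. The vendor names, circuit IDs and delay figures below are assumed for illustration only; a real system would ingest these from live monitor feeds rather than literals.

```python
# Per-access-vendor SLA thresholds: maximum acceptable one-way delay in ms.
sla_thresholds = {
    "vendor-a": 25.0,
    "vendor-b": 40.0,
}

# Measured performance per circuit: (vendor, circuit, delay in ms).
measurements = [
    ("vendor-a", "ckt-101", 22.1),
    ("vendor-a", "ckt-102", 31.7),   # breach -> exception event
    ("vendor-b", "ckt-201", 38.9),
]

def sla_exceptions(measurements, thresholds):
    """Flag every measurement that breaches its vendor's SLA threshold."""
    return [(v, c, d) for v, c, d in measurements if d > thresholds[v]]

for vendor, circuit, delay in sla_exceptions(measurements, sla_thresholds):
    print(f"SLA exception: {vendor} {circuit} delay {delay} ms")
```

Keeping the exception records with timestamps is what gives the provider the "authoritative proof" for SLA penalty capture that the article mentions.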
The MEF is aware of the need and the challenges, and is laying the framework to enable new types of network connectivity, better aligned with cloud services and opening up new revenue opportunities for service providers and the ecosystem of network solution providers. This is good news for the enterprise and, ultimately, for the global economy.

By Kevin Vachon, CEO of the MEF

[via India Telecom News]
