This is a discussion paper written in response to the call for comments on another paper, titled An Analysis of the “New IP” Proposal to the ITU-T, posted on the Internet Society’s website, which intends to represent the ISOC’s opinion and position. My hope is that this paper will also be posted in the same place on the ISOC website, so that the community can easily gain full knowledge of the topic, hear the complete story from both sides, and make its own judgement. Comments are welcome; please email them to firstname.lastname@example.org.
The conceptual research that underpinned the Internet was started in the early 1960s by Paul Baran, Donald Davies and Leonard Kleinrock, on packet switching. In 1974 Vint Cerf and Bob Kahn published their TCP/IP paper; in 1981 the Internet Protocol IPv4 was published as RFC 791; and in 1995 the Internet Protocol IPv6 was published as RFC 1883. The routing protocols have seen evolution and innovation, as have the transport protocols with the introduction of QUIC. The DNS system has been improved, and MPLS has had significant development.
The network layer data plane protocol IPv6 is being used to connect networks and terminals to the Internet, and there have been many success stories. The IPv6 design is solid, but it does not solve every problem, now or in the future, and the Internet data plane will not remain unchanged forever. There will need to be further evolution of the Internet at the network layer to handle new innovations and new use cases. When IPv6 is used to support and implement Operational Technology (OT) and to connect industrial networks to the Internet, it becomes apparent that Operational Technology has different characteristics from Information Technology (IT). For example, OT has stringent requirements on high-precision, end-to-end guarantees of short latency and the elimination of packet loss between a factory automation controller and industrial terminals on factory floors and in the field. Terminals in industrial networks often have non-IPv6 addresses. Many industrial use cases show that technical gaps emerge when using IPv6 for industrial machine-type communications. Because of such gaps, some popular industrial control systems and their communication protocol stacks do not use the TCP/IP stack for their real-time control. In order to connect such industrial networks and their terminals to the Internet, we need to optimize the existing protocol stack where possible and add new functions where necessary.
A review of IP, as it exists today, reveals that IP, at the network layer, provides three basic capabilities and services to the upper layers: (1) Best-effort forwarding: the default and most commonly used service. It provides no guarantee that data is delivered, or that a user is given a guaranteed quality-of-service level or a certain priority. (2) DiffServ: which provides per-hop behaviour differentiation among traffic classes (the 6-bit DSCP field allows up to 64 codepoints), but which does not provide a path-level end-to-end guarantee for any application. (3) MPLS Traffic Engineering: in the history of the Internet protocols, there was a technology called IntServ (RFC 1633), which uses RSVP (RFC 2205) to reserve bandwidth for an application. RSVP was later adapted and enhanced into RSVP-TE for MPLS traffic engineering. As with IntServ, RSVP-TE guarantees bandwidth for a forwarding path, but does not guarantee low end-to-end latency, high throughput, or the elimination of packet loss.
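The per-hop (rather than end-to-end) nature of DiffServ can be seen even at the application level. The sketch below is illustrative only and is not part of any proposal discussed here: it marks outgoing datagrams with the Expedited Forwarding codepoint, yet every router along the path remains free to ignore or remap the mark.

```python
import socket

EF_DSCP = 46              # Expedited Forwarding per-hop behaviour (RFC 3246)
TOS_BYTE = EF_DSCP << 2   # DSCP occupies the upper 6 bits of the former TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)
# Datagrams sent on `sock` now carry DSCP 46 in the IP header. Whether any
# router honours it is purely local per-hop policy -- exactly the limitation
# noted above: a marking, not a path-level guarantee.
```

This is the whole extent of the service contract DiffServ exposes to an application: a single byte of advice, with no feedback about what the path actually delivered.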
There is no denying that IP has been and will continue to be successful and has many advantages. However, some of the principles that have made the Internet thrive in the last few decades do not extend to all domains, in particular to industrial network domains, through the use of a single, general-purpose “one size fits all” IP. Such homogenization, everywhere from clients to networks and to servers, results in an Internet that will increasingly be consolidated and ossified. Internet consolidation, as reported in ISOC’s 2019 Global Internet Report [GIR2019], is reducing opportunities for market entry and competition, and Internet ossification prevents the ability of IP to address new needs and adapt to new requirements in an acceptable time scale. In order to prevent this consolidation and/or ossification, the existing protocols need to be optimized where possible and new functions need to be added through extensions and updates where necessary. New protocols, or their components, may need to be developed, though only when there is a broad agreement that shortcomings of the existing protocols prevent application needs from being addressed. Moreover, when new protocols are developed, they need to be backward compatible.
In order to support emerging industry verticals and to connect more networks and terminals to the Internet, a new network protocol needs to be developed to complement the capabilities and services provided by the existing IP. Its goal would not be to replace the existing Internet protocols. Rather, its goal is to work alongside those protocols to support the needs of, and to connect, the applications that have not yet been connected to the Internet. It is expected to be used in autonomous systems with a geographic limit to support use cases such as Connected Industries and Automation, Driverless Vehicles and Transport, IP Mobile Backhaul Transport for 5G/B5G URLLC, and some of the use cases set out in [UC2030] published by the ITU-T Focus Group on Network 2030. The major intents of this new protocol are: (1) to provide a mechanism enabling routers and switches to implement high-precision and deterministic communications, guaranteeing high throughput and low latency and eliminating packet loss; (2) to provide a free-choice addressing mechanism giving network operators and application developers the freedom to choose the most effective addressing system for their domains and applications; (3) to provide an extensible, innovation-enabling mechanism allowing the introduction of new innovations in the forwarding layer; (4) to provide a mechanism to carry large volumetric media such as AR/VR and holograms for futuristic applications; (5) to embody an intrinsic security mechanism that protects users’ privacy, confidentiality and security and prevents DDoS (Distributed Denial of Service) attacks; (6) to provide interworking methods with the existing IP and capabilities applied to industrial network domains. As this new network protocol is supposed to introduce the above-mentioned improvements and extensions while remaining compatible with IP, it is tentatively referred to as “New IP”.
There have been many publications that address one or more aspects of New IP in ITU, ACM and IEEE.
While IPv4/IPv6 are good at supporting Information Technology (IT), New IP is aimed at adding support for Operational Technology (OT) and at converging the existing Internet with the many other networks that have so far not been connected to it. Historical OT networks are not part of the Internet and often use their own protocols, optimized for the required functions in their domains. Connecting those domains to the Internet usually requires border gateways. New IP makes it viable to connect OT networks and their terminals to the Internet directly, so that specific border gateways are not required. Related application domains include “Connected Industries”, “Cyber-Physical Systems (CPS)”, “Industrial IoT”, “Industry 4.0”, “Industrial Internet”, etc.
New IP has come from being a small project to what is now a multi-party community project with participation from many organizations throughout the world. Some network operators and industrial-manufacturing-related companies have shown their interest in New IP. Even though there have been some Proof-of-Concept (PoC) implementations and publications on New IP or its components, there is no official standard, and no Standards Developing Organizations (SDOs) have yet accepted New IP as a starting point for standardization.
Given the above as background, some proposals have been made [C83] to work on New IP-related topics as “questions for study” within ITU-T’s next study period. Recently, [SHARP] presented an analysis of this proposal. A review of [SHARP] leads me to conclude that some of its statements, assertions, and opinions are misleading, over-stated, speculative, or insufficiently justified. The conclusions of [SHARP] do not do justice to New IP and, if followed, would limit the future evolution of the Internet and the convergence of IT and OT.
The Internet community, including ISOC, should adopt an inclusive and open-minded approach. It should welcome, encourage and support all efforts, including New IP, that make the Internet better serve people, the economy, and industry, regardless of whether such efforts are made within the IETF, the ITU, or elsewhere. ISOC should thus encourage SDOs to initiate the standardization of New IP to make it an open standard.
This paper documents some notes and comments on An Analysis of the “New IP” Proposal to the ITU-T [SHARP], whose authors and editors have publicly asked for discussion and opinions on multiple mailing lists. As I am one of those who started the New IP project a few years ago, I consider it my duty to share with the community my opinions and comments on [SHARP]. Though I am the Chairman of the ITU-T Focus Group Network 2030 and have been deeply involved in New IP, I am responding on my own behalf. I hope that my comments are helpful to the community.
Before I start, I would like to thank its authors, Hascall Sharp and Olaf Kolkman, for their review of the work that is being conducted under the name “New IP”. I believe this presents a useful opportunity to discuss the intent of this work and the issues that are raised, and I’m pleased to have that opportunity.
I want to emphasize that the name “New IP” is not meant to imply a goal of replacing the existing Internet protocols. Rather, the goal is to support the needs of upcoming novel networked applications and innovations by using the existing protocols where possible and enhancing them through extensions and updates where necessary. New protocols, or components, should only be developed when there is a broad agreement that shortcomings of the existing protocols prevent application needs from being addressed. Moreover, when new protocols are being developed, they need to be backward compatible, and development needs to happen in the relevant standards organizations, through discussion among the appropriate communities.
We want to ensure that the Internet, as it is built today and as it is enhanced in the coming years, is prepared for the bold, new applications that we anticipate and that some service providers will need to deploy. We also want to ensure that the Internet is robust enough to support the increased needs of those applications. Meanwhile, we want to uphold the principles that have made the Internet thrive in the last four decades: “autonomy” in the Autonomous Systems, “independence” in its operations, “openness” with respect to everyone everywhere, and “freedom” when building and connecting networks and applications.
Believing in open standards and believing in allowing everyone and anyone to innovate inside the Internet, our intent is always to bring proposals to the SDOs — IETF, ITU, ETSI, 3GPP, W3C, IEEE, and others — that have the responsibility for the protocols and technology in question. It is also our intent to always do any necessary protocol work in the relevant organizations through the open standards process that has resulted in a successful Internet, which has already changed the world over the last few decades.
While recognizing the success of the existing IP for the consumer use where “connectivity” is a central goal, we want to ensure the future success of IP for industrial use where information delivery adhering to stringent business-critical performance targets is essential.
My notes and comments are structured as follows: in the “General Comments” section, I provide more background on New IP and clarify what may be misunderstandings, overstatements, or speculations in [SHARP]; in the “Detailed Comments” section, I analyze [SHARP] and offer my views; in the “Concluding Remarks” section, I summarize this article.
Before an official name is given by the SDO that standardizes it, “New IP” serves as an umbrella term under which multiple independent efforts are being made across different countries and organizations, with the goal of improving the Internet to better serve new applications and to connect more networks to it – networks with stringent performance requirements, such as are commonly found in industrial applications. In particular, New IP is aimed at connecting industrial networks and their machines at the Network Layer (Layer 3, the IP layer) for industrial control and automation, the domain of Operational Technology (OT). OT has different characteristics and requirements from Information Technology (IT): while connectivity is the essential goal in IT, information delivery adhering to stringent business-critical performance targets is essential in OT.
New IP is not intended to splinter the Internet. Rather, it is intended to connect more networks and terminals that have not yet been connected to the Internet. New IP will enhance and future-proof the Internet by providing more capabilities and features to network operators and application developers, particularly in business-critical industrial domains. It is designed for easy extensibility and adaptation to new business needs, breaking through the ossification barrier in order to enable and encourage rapid innovation. An example is Profinet networks, where New IP can connect factory controllers and Class B terminals in more scalable Layer 3 networks while meeting stringent requirements on performance metrics such as very low latency and lossless delivery of control information.
From the beginning of the Internet, it has been assumed that the design of the Internet will need to change, evolve, and adapt to meet new requirements to better serve people, the economy, and industry. It should allow and provide instruments to everyone and anyone to innovate inside the Internet.
I would like to believe that “top-down approach” [C83] might be an unfortunate misnomer, since it easily invites misconception and unnecessary speculation. As a matter of fact, it denotes a “vision-driven” or “goal-oriented” approach, which has proven, in my opinion, very successful in SDOs such as 3GPP. The “vision” is often specified as use cases and requirements: work starts with a vision or a goal, which is then decomposed into a set of sub-goals or detailed requirements, and ends with a solution as the result of collaborative work on those sub-goals.
New IP, as a candidate, can be deployed in autonomous systems where business-critical applications are needed. As many industrial machine-type communications require low latency and lossless information delivery, networks for such communications are often deployed within a limited geographic range.
In what follows, the quotes from [SHARP] are numbered and italicized.
1) The Internet continues to evolve at a rapid pace. New services, applications, and protocols are being developed and deployed in many areas, including recently: a new transport protocol (QUIC), enhancements in how the Domain Name System (DNS) is accessed, and mechanisms to support deterministic applications over Ethernet and IP networks. These changes are only possible because the community involved includes everyone from content providers, to Internet Service Providers, to browser developers, to equipment manufacturers, to researchers, to users, and more.
This is correct, but there are some additional key points.
Firstly, the Internet structure has significantly evolved from the traditional access-core-access model to a model in which servers are placed at the edge – embedded in the access network or a data center or a cloud that terminates traffic, where core traffic is minimized by being diverted through private global backbones. This means that the ability to innovate has changed, and for some use cases it is only possible to survive in commercial terms by being in a protected network.
Secondly, it can be noted that QUIC emerged from a single, but big, company’s proprietary implementation that only worked because of the scale of the organization that invented it, and because that organization owned both the browser (front end) and a lot of service-hosting capacity (back end). If an “ordinary” player had proposed to the IETF TSV area the building of a new transport by tunnelling over UDP, chances are that we would still be discussing it. Consider, for example, how long it took the IETF to accept the need for and utility of NATs.
Thirdly, as Prof. Jennifer Rexford said in her ACM SIGCOMM keynote speech [JRACM], there have been innovations above, under and alongside the network, but not much inside it, and we are desperate to “innovate inside the network”. The examples listed by [SHARP] are not innovations inside the network.
In summary, the unfortunate truth is that it is much harder to innovate than [SHARP] suggests, and any meaningful innovation occurs only over increasingly lengthy time cycles. This is becoming increasingly detrimental to further progress in the networking industry.
2) Given this backdrop it is concerning that a proposal has been made to ITU-T1 to “start a further long- term research now and in the next “study period” to develop a “top-down design for the future network.”
Research is harmless at worst, and might lead to useful insights into how to build a more capable network layer. While some study has been initiated in ITU-T, it isn’t the only place where this is being studied, and the intent throughout is to bring protocol extensions, enhancements, and additions to the appropriate SDOs for open discussion and development. It is only deployment that is a valid point of contention, at which point consideration has to be given to integration, co-existence, joint work, etc. So long as this is a study, it should only worry an organization that is afraid it could not compete to better serve the needs of the user community. As said earlier, “top-down” is better understood as “vision-driven” or “goal-oriented”. “Top-down” by itself is just a methodology for performing work; it is not a symbol of failure or mistake.
3) The need to support Deterministic Forwarding globally.
This is not an unreasonable requirement, and in any case there is no consensus in the design community that it is unneeded, especially where the capability is required across one access AS and another AS that terminates the service.
On the other hand, I note that [C83] only proposed “deterministic forwarding” as an additional capability expected from inside networks for business-critical use, without stating that it is required “globally”. Requiring a global capability, meaning everywhere and on every device, is not the intent; this is a misunderstanding by the authors of [SHARP]. The requirement applies only to those portions of the network where it is specifically needed.
4) The need to enhance security and trust and support “Intrinsic Security”
I would think that ISOC would support and welcome all efforts to understand how to do this better. Enhancing security for the Internet is always a worthwhile effort.
Despite all security efforts, Internet security remains a concern, consisting of a patchwork of multiple mechanisms still faced with multiple challenges on deployment, operational, and technical fronts. The IETF Security Area is itself putting major emphasis on enhancing security and trust.
5) Communicating over multiple, heterogeneous technologies (including satellite systems), and avoiding islands of communication due to the diversity of networking technology, have been core design goals in the evolution of the Internet over the last 40 years.
I agree that interconnecting networks was indeed an original requirement, but it has gotten lost since the Internet Protocol started to dominate as a networking technology, not just as a network-interconnection technology. Again, this is evidenced by the problem of ossification, i.e. the increasingly massive hurdles to introducing new capabilities and features into network protocols ([MN]), which has hampered further progress for networking as a whole. While many networks have been connected to the Internet, many others have not, and some have even given up using the Internet protocol stack. One goal of New IP is to connect those networks to the Internet as well, by removing some of the barriers that have blocked the use of existing protocols in those networks.
6) The IETF’s deterministic networking [DETNET] and reliable and available wireless [RAW] working groups, and the IEEE 802.1 Time Sensitive Networking [TSN] task group, are developing standards related to deterministic networking, liaising with ITU-T SG15 and 3GPP.
The IETF DETNET WG is explicitly excluded from modifying the network layer, and while it has technology to enhance the probability that a packet in an MPLS network survives a congestion event, there is no corresponding IP solution that has gained WG adoption or general acceptance. Note that “reducing packet-loss probability” is not the same as “eliminating packet loss”. Even if MPLS RSVP-TE is used to build a tunnel to transport time-sensitive packets, what is guaranteed is the minimum bandwidth over the tunnel: there are no mechanisms to guarantee end-to-end throughput, high-precision latency, or the elimination of packet loss. So the DETNET WG is fundamentally limited by what the IP layer, designed for global connectivity with constrained extensibility, can provide.
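To make the distinction concrete, DetNet-style packet replication and elimination can be sketched as below. The two-path model and the loss probabilities are hypothetical, chosen only to show that duplicating packets over disjoint paths shrinks the loss probability from roughly p to p², which reduces loss but never eliminates it.

```python
import random

def replicate_and_eliminate(packets, loss_a, loss_b, seed=0):
    """Send each packet over two disjoint paths; the merge point keeps the
    first surviving copy of each sequence number (replication/elimination in
    the DetNet style). A packet is lost only if BOTH copies are dropped, so
    the loss probability falls from about p to p*p -- reduced, never zero."""
    rng = random.Random(seed)
    seen, delivered = set(), []
    for seq in packets:
        copy_a = rng.random() >= loss_a   # does the copy on path A survive?
        copy_b = rng.random() >= loss_b   # does the copy on path B survive?
        if (copy_a or copy_b) and seq not in seen:
            seen.add(seq)
            delivered.append(seq)
    return delivered

# With 10% loss on each path, residual loss is around 1%:
# a big improvement, but still not elimination of packet loss.
out = replicate_and_eliminate(range(10000), 0.10, 0.10)
```

The residual p² loss is exactly the point made above: probabilistic redundancy narrows the gap to a deterministic guarantee without ever closing it.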
I am glad that the authors of [SHARP] mentioned IEEE 802.1 TSN and ITU-T SG15. However, please note that their solutions operate at Layer 2 and Layer 1, respectively. I see an open question here: can we provide deterministic or high-precision communications at Layer 3? I see potential value if the New IP community chooses to study it.
7) The IETF addresses security in specific protocols (e.g., BGP Security (BGPSEC), DNS Security (DNSSEC), Resource Public Key Infrastructure (RPKI), etc.) as well as by requiring a security consideration section in each RFC, taking into account research and new developments. The IEEE addresses Media Access Control (MAC)-level security in its protocols (e.g., IEEE 802.1AE, IEEE 802.11i).
There is still a lot more to do in terms of security, and it is not clear whether fundamental limitations in the IP design are a road-block. There is absolutely no harm in studying whether fundamental changes will result in an improvement. Also, we know a lot about what you might call “static” security and its applicability to a best-effort network. How to secure dynamic behavior is something that we are only just learning about.
When the Internet was designed, security was in many places not built in. When a security problem shows up, security is developed as an add-on feature. Trying to add security as an afterthought is merely a bandage or a painkiller that helps for a while but cannot fix the fundamental underlying problems. Consider DDoS amplification attacks, or phishing and impersonation facilitated by IP spoofing: these are major problems today, causing significant damage, due to the inadequacy of the underlying design.
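As one illustration of that add-on character, anti-spoofing today relies on BCP 38-style ingress filtering deployed voluntarily at network edges, rather than on any property of IP itself. A minimal sketch, with hypothetical customer prefixes:

```python
import ipaddress

# Hypothetical customer prefixes assigned to one edge interface.
ALLOWED_PREFIXES = [ipaddress.ip_network("203.0.113.0/24"),
                    ipaddress.ip_network("2001:db8:1::/48")]

def ingress_permit(src_addr):
    """BCP 38-style source-address validation: accept a packet only if its
    source address falls inside a prefix legitimately assigned to the
    ingress interface it arrived on."""
    addr = ipaddress.ip_address(src_addr)
    return any(addr in net for net in ALLOWED_PREFIXES
               if addr.version == net.version)

# A datagram with a spoofed source (e.g. the address of a DNS amplification
# victim) is dropped at this edge -- but only where operators deploy the
# filter, which is precisely the "afterthought" problem described above.
```

Because the filter protects everyone except the network that deploys it, the incentives work against universal deployment, and spoofing remains viable.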
As said earlier, I would think that ISOC would support and welcome all efforts on how to make the Internet more secure, especially when new classes of IoT devices and industrial machines seek to connect to the Internet.
8) The IETF Transport Area develops transport protocols (e.g., Stream Control Transmission Protocol (SCTP), Real-time Protocol (RTP) and Real-time Communications for the Web (WebRTC), and QUIC) and active queue management protocols (e.g., the Low Latency, Low Loss, Scalable Throughput service architecture (L4S) and Some Congestion Experienced (SCE) ECN Codepoint). These increase throughput, lower latency, and further support the needs of real-time and multimedia traffic, while considering interactions with, and effects on, TCP traffic on the Internet.
This is true, but these efforts do not study how non-trivial changes to the network layer and changes to the transport layer might work in harmony to achieve a better result. It is an undeniable fact that no matter how and what changes are made in the existing transport protocols, applications will always suffer from packet loss when congestion happens. The congestion happens in the network, but the only IETF-permitted approaches are: after-the-fact discovery by the host; simple, fairly crude and often inaccurate notification by the network, through inference from packet loss and delay or through the ECN mechanism; or hop-scoped queueing applied without knowing the details of the application’s pattern and expectations. This is very limiting, as it does not allow for improvements that take the bigger picture into account.
The IETF’s insistence that transport protocol updates are only allowed on hosts, with very limited changes to network devices, has closed the door to a wealth of transport innovations proposed in academia.
Google Scholar returns more than 10,000 publications on TCP congestion control. Among such a wealth of study, almost none of the countless in-network innovations have been adopted, because they do not comply with the so-called “end-to-end principle”. The “end-to-end principle” was a design decision made 46 years ago; it is not a law of physics. I am not questioning that decision; on the contrary, I believe the approach was viable at the onset of packet network technology. However, with so large a body of research results accumulated since then, and with significant advances in hardware and software engineering, it may make sense to revisit some of the original assumptions about how networks should be designed, as the constraints and context have changed. What was impossible or unreasonable 46 years ago may be entirely possible and viable today. A great number of these proposals are limited by what the network layer can offer: the limited ECN framework and the not-globally-deployable DSCP framework in the IP header. The recent L4S proposal, which must repurpose a single bit, showcases how few network-layer options remain and raises the question of whether we can ever be ready for the new requirements on the horizon. It is not unreasonable to at least ask what benefits we could reap if we allowed the network layer and the transport layer to work in better harmony.
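The tightness of the remaining network-layer signalling space is easy to make concrete: the former TOS byte now carries only a 6-bit DSCP and a 2-bit ECN field (RFC 2474, RFC 3168), and L4S must identify itself within one of the four ECN codepoints. A minimal decoder of that byte, for illustration:

```python
# The four possible values of the 2-bit ECN field (RFC 3168); L4S (RFC 9331)
# identifies its traffic with the single ECT(1) codepoint.
ECN_NAMES = {0b00: "Not-ECT", 0b01: "ECT(1)", 0b10: "ECT(0)", 0b11: "CE"}

def decode_tos(tos_byte):
    """Split the 8-bit Traffic Class / TOS byte into its two sub-fields:
    6 bits of DiffServ codepoint (RFC 2474) and 2 bits of ECN (RFC 3168)."""
    dscp = tos_byte >> 2     # upper 6 bits: DiffServ codepoint
    ecn = tos_byte & 0b11    # lower 2 bits: ECN field
    return dscp, ECN_NAMES[ecn]

# DSCP 46 (EF) with the Congestion Experienced mark set:
dscp, ecn = decode_tos(0xBB)   # 0b1011_1011
```

Eight bits, already fully allocated, is all the per-packet signalling room the IP header leaves for new in-network congestion mechanisms; hence the argument above about constrained extensibility.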
To reiterate, New IP proposes to include “transport” as a study item, where the “transport” means to move information from one place to another, but it is not aimed at replacing the existing transport protocols. Rather, it wants to design an approach that will complement the existing protocols and add capabilities the existing protocols cannot support. The intention is that new features/capabilities/services developed within New IP will operate at the network layer and will be offered to upper layers including the transport layer. It seems to me that investigating novel alternatives to overcome existing limitations is prudent rather than harmful. We should welcome whoever wants to conduct such an investigation.
9) Creating overlapping work is duplicative, costly, and in the end does not enhance interoperability.
Not doing work that could result in new capabilities and overcome existing limitations is also costly: it delays realization of the benefits of a solution to real needs that otherwise go unfulfilled, resulting in lost opportunity. At the end of the day, it is all a matter of trade-offs: the size and cost of the new work versus the impact of having to live with the limitations of an existing approach that does not meet the needs and lets opportunity go unfulfilled.
The claim in [SHARP] that New IP is overlapping and duplicative is misleading. The goal of New IP is to offer what IP does not, in a progressive and evolutionary way, while remaining backward compatible with IP, as discussed in [NIP] and other publications. New IP complements IP and is intended to connect to the Internet, for certain types of business-critical industrial use, the networks and terminals that have not yet been connected.
10) The alleged challenges mentioned in the proposals are currently being addressed in organizations such as IETF, IEEE, 3GPP, ITU-T SG15, etc. Proposals for new protocol systems and architectures should definitively show why the existing work is not sufficient.
Yes, some of these challenges are being addressed in those organizations. But some are not, and the New IP project is looking at a specific set of use cases and the associated requirements at Layer 3 rather than at Layer 1 or Layer 2. The fact that some people are working on solutions should not prevent others from also working on solutions until fully functional solutions that meet the new requirements are found.
Moreover, it should be noted that no comprehensive solutions have been found to the problems that New IP is aimed at solving at the Network Layer (IP Layer). It is true that optical technologies can, for example, support high-precision communications, but that is at a lower layer than the Network Layer (Layer 3, IP Layer) and is more constrained in its deployment possibilities.
11) Although the term “New IP” is frequently used and the proposals would replace or interact with much of the Internet infrastructure, the proposals have not been brought into the IETF process.
No specific proposals have been made yet, simply because no SDO has started the standardization process yet. We are still in the early stages of developing requirements and a gap analysis. Specific proposals that are ready for action will come next. This does not mean that we do not have early proposals; we do, and we intend to submit them. However, to be clear and to avoid misunderstandings, lest we be accused of seeking to have our proposals “rubberstamped”, we consider these proposals merely a starting point or catalyst for discussion. It is our intention to let an SDO process run its due course, starting with an articulation of the problem statement and an analysis of existing gaps.
As it is required to provide more functions, capabilities and improvements while remaining compatible with the existing IP, it is named “New IP” until an official name is given by the SDO that standardizes it. New IP has grown from a small project into a multi-party community project with participation from many organizations throughout the world. Some network operators and industrial manufacturing-related companies have shown their interest in “New IP”.
“New IP” is best regarded as an umbrella under which a number of innovations are being made. It is still in progress, with many open questions and room for proposals and discussion of technical alternatives. Contribution [C83] needs to be seen in that context; it is simply a tutorial on some topics, given by its authors from their own point of view, and a single, mature, agreed-upon technical consensus has yet to emerge.
Personally, I had an opportunity to engage with the IAB/IETF/IRTF to explore the possibility of bringing New IP to the IETF/IRTF during the week of IETF 106 in Singapore, but was given the impression by its officers, which I would look forward to being corrected, that New IP would not be welcome in the IAB/IETF/IRTF, which puts us in a chicken-and-egg loop. If the IETF is not interested in it, then the IETF should not seek to prevent other SDOs from addressing this topic. It seems to me that further discussion in the IETF is needed: to discuss the use cases, the requirements and the gap analysis, and to determine the best way and the best venue in which to develop solutions.
12) The billions of dollars of investment in the current protocol system and the effects on interoperability to prevent the development of non-interoperable networks. Any new global protocol system will be costly to implement and may result in unforeseen effects on existing networks.
This is indeed a barrier to entry, and is one that inappropriately favours the status quo. To reiterate, New IP is always intended to be backward compatible and interoperable with the existing protocols to the furthest extent possible so that the existing investments are protected.
13) The need for business and operational agreements (including accounting) between the thousands of independent network operators. Implementing a new protocol system is not simply about the protocols, there are myriad other systems that will need to be addressed outside the technical implementation of the protocols themselves.
Indeed. I think that we can all agree that this all comes down to economics. If the cost and difficulty of deploying the new technology are not outweighed by the benefits and economic advantage that it brings, it will not be deployed.
14) The likelihood that QoS aspects of the proposal would complicate regulatory and legislative matters in several areas. These areas could include licensing, competition policy, data protection, pricing, or universal service obligations.
This is perfectly understood, but it is a bit like saying that we should not have developed cars because some cities were designed and regulated with horses in mind. That is not the way new technology works. New technology is deployed where it demonstrates advantage, and those that deploy it thrive while those that deny it decline. If new QoS has advantages, then it will be deployed where it has those advantages. That may be in private networks that wish to use packet technology but need extra features – for example, in industrial machine-type communications.
Now let’s take a look at the history. In the early days, there was a difference between telecommunications and data communications, and there were clear regulatory restrictions and boundaries on how data communications could be used to implement telecommunications. Since then technology has progressed, and the regulations have evolved as well.
After all, regulation is intended to be the servant of the people, not a fundamental constraint on the advances associated with the progress of technology. Regulations should not be used as a barrier to technological development and fundamental research; they exist to make those technologies available fairly and broadly.
If there is advantage, ultimately the technology that better delivers the required need will prevail. That being said, most New IP applications are envisioned to be scoped in networks that have not been connected to the Internet or that need more capabilities and features beyond what existing IP provides.
15) When an organization (e.g., 3rd Generation Partnership Project (3GPP)) has identified a need to develop an overall architecture to provide services a successful model has been to identify the services and requirements first. Then work with the relevant standards organizations to enhance existing protocols or develop new ones as needed.
This is, indeed, what those advocating New IP are doing, and they would welcome the opportunity to work more broadly on this. As far as the IETF is concerned, so far we have not found a suitable opportunity within the IETF structure and have instead met strong resistance, if not hostility, when we tried.
16) Developing a new protocol system is likely to end up with multiple non-interoperable networks, defeating one of the main purposes of the proposal. A better way forward would be to:
Multiple, non-interoperable networks exist today, and are not, in themselves, an issue. For example, at many levels IPv4 and IPv6 are not interoperable. Using the same logic, the above statement would suggest that inventing IPv6 was a mistake, and that we would have been better off enhancing IPv4 with, for example, NAT or IPv4-in-IPv4. The IPv6 bet was that it was better for the long term. Similarly, MPLS can be considered a non-interoperable network-layer protocol, and was resisted at the time of its first proposal, but it ultimately turned out to be the key to deploying IP to the majority of western households.
With lessons learned from history, New IP is being designed to be compatible with existing and possibly future protocols. For example, its proposed Free-Choice Addressing scheme in [NIP] and [NPDF] would allow users and applications to choose the best way to meet their addressing and network programming requirements.
17) Allow the FG NET-2030 to complete its work and allow the Study Groups to analyze its results in relation to existing industry efforts.
Review the use cases developed as part of the Focus Group’s outcomes
I would like to emphasize that New IP and Network 2030 are two independent streams of research, as I have already explained in a Special Session on Network 2030 during the week of the ITU-T TSAG meeting in February 2020 ([TD757]). Chronologically speaking, New IP started much earlier than Network 2030. In that regard, I would like to share some more information [NIP]:
- New IP is expected to support industrial machine-type communications, IP mobile backhaul transport for URLLC, emerging industry verticals, and some use cases of ITU-T Network 2030. New IP will connect more networks and terminals to the Internet.
- A technical report on Network 2030 Use Cases was approved by the focus group in January 2020 at its Lisbon plenary meeting, and now it is openly available in the homepage of ITU-T Network 2030.
18) Encourage all parties to contribute to further investigate those use cases, as far as they are not already under investigation, in the relevant SDOs.
Here I only partially agree, because this very much depends on whether an SDO is prepared to think sufficiently outside its own comfort zone. I absolutely want to encourage all stakeholders to contribute and participate, and I do think that the IETF would be the natural forum for it, even though I sense its resistance and hesitation.
19) At the September 2019 TSAG meeting, Huawei, China Mobile, China Unicom, and China Ministry of Industry and Information Technology (MIIT) proposed to initiate a strategic transformation of ITU-T. In the next study period the group aims to design a “new information and communications network with new protocol system” to meet the needs of a future network [C83]. This effort is in reference to the ongoing work in the Focus Group on Technologies for Network 2030. At the same meeting, Huawei gave a tutorial [TD598] illustrating their views in more detail and suggested that ITU-T Study Groups set up new Questions “to discuss the future-oriented technologies.”
The contribution and tutorial posit that the “telecommunication system and the TCP/IP protocol system have become DEEPLY COUPLED into a whole.” The ITU-T should therefore develop an even more deeply coupled system using a new protocol system, ultimately replacing the system based on TCP/IP.
There is a distinction between the question proposed for study, proposed solutions to the question, and tutorials on solutions. I have reviewed [C83] posted in the ITU website. The tutorial is a collection of some ideas and examples that have been discussed for a number of years. Once the question proposed for study is accepted, it will be up to the community to discuss proposed solutions. Different organizations may well make different and competing proposals; resolving differences will be a result of the discussion and the consensus process. Some proposals may be accepted and be subject to changes and revisions, some may be declined, new components may be added. It is also important to realize that there is interest in innovation in the network layer outside of China.
The implication in the above statement is that this is an initiative to disrupt the Internet and replace TCP/IP. That is simply not true! New IP is not aimed at replacing any existing protocols. Rather, it provides more features, capabilities and services for the networks that are not connected yet. Users who do not need those features simply use TCP/IP as it exists today. What works in the existing infrastructure will continue to work as it does now. Quite the opposite of disruption, the goal is expansion and enhancement to better support future innovation.
20) C83 claims there are three key challenges facing the current network:
“Firstly, due to historical reasons, the current network is designed for only two kinds of devices: telephones and computers. [. . .][The] development of IoT and the industrial internet will introduce more types of devices into the future network.”
“Secondly, the current network system risks becoming ‘islands’, which should be avoided.”
This is largely correct. It is an indisputable truth that the Internet was designed initially to support computers, and many people could not see wasting that precious resource on a POTS competitor. It is also true that IoT and Industrial Internet have needs that were not considered at that time, for example, Profinet field devices. Many OT networks are not connected to the Internet yet.
“Thirdly, security and trust still need to be enhanced.”
That is an undeniable truth, with which I fully agree.
21) ManyNets and “islands” of communications
A main pillar of the proposed new protocol system is the concept of ManyNets. ManyNets refers to the myriad heterogeneous access networks with which the proposed new system needs to interconnect (e.g., “connecting space-terrestrial network, Internet of Things (IoT) network, industrial network [sic] etc.”[C83]).
One argument is that the “diversity of network requires new ways of thinking.”
That is not an unreasonable position to explore. It should be emphasized that ManyNets, as discussed in [MN], are an existing phenomenon that is already emerging across the industry, which has a wide range of implications from how network technology is deployed to newly emerging requirements. One of its goals is to overcome the growing “ossification” of the Internet. It is not a concept that is newly introduced by New IP.
Another is that new technologies are developing their own protocols to communicate internally and that the “whole network could potentially become thousands of independent islands.”
That is correct. Consider the many proprietary industrial networks. There are a few dozen communication protocols for Industrial Networks and the Industrial IoT, and those networks have not been fully connected to the Internet.
22) Under the discussion of ManyNets, the “New IP” framework proposes a flexible length address space to subsume all the possible future types of addresses (IPv4, IPv6, semantic ID, service ID, content ID, people ID, device ID, etc.).
In terms of addressing, the networking community is already heading in that direction, and indeed further. Look at network programming, LISP, HIP, DOA, ICN/NDN, and of course the way that MPLS labels are used. From time to time, we see new IETF drafts that discuss different addresses and/or their encodings in, for example, IPv6. In some industrial domains, the address of a machine may be an ID, may be just two bytes in length, etc.
Take a look at the existing Internet structure. It consists of autonomous systems (AS), and in most cases the same IP protocol is used both inside AS and between ASs. A border node that is supposed to be an Internet gateway is in reality a border router, since the same protocol is used on both sides of the border. Everyone everywhere has to use the same fixed addressing format. It is now clear that the IETF takes a position that 128-bit IPv6 addresses MUST be used everywhere for the whole Internet. While this is a convenience, it is also a limitation that will incur a cost to some industrial domains. After all, “autonomous systems” are supposed to be autonomous.
To enhance the existing IP, which only allows a fixed address format, New IP proposes a “Free Choice Addressing” scheme that lets network operators and users choose the most suitable addressing system for their domains [NIP][NDPF]. The free-choice addressing scheme permits IPv4, IPv6, LISP, ITU E.164, and many others. A flexible-length address is a possibility that is still under research.
New IP by itself does not dictate the use of any particular addressing system. It is up to network operators and application developers to choose the best effective addressing systems for their own domains and applications. And because of that, IPv4 and IPv6 can still work as they do now.
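To make the free-choice idea concrete, a variable-length address could be carried as a type-length-value (TLV) field, so that an IPv6 address and a two-byte industrial device ID can coexist in the same packet format. The sketch below is purely illustrative; the type codes and field layout are my own assumptions and are not part of any New IP specification:

```python
import ipaddress
import struct

# Illustrative type codes -- assumptions for this sketch, NOT from any spec.
ADDR_IPV4 = 1      # 4-byte IPv4 address
ADDR_IPV6 = 2      # 16-byte IPv6 address
ADDR_SHORT_ID = 3  # 2-byte industrial device ID

def encode_address(addr_type: int, value: bytes) -> bytes:
    """Encode an address as a (type, length, value) field."""
    return struct.pack("!BB", addr_type, len(value)) + value

def decode_address(buf: bytes):
    """Return (type, value, remaining bytes) from a TLV-encoded buffer."""
    addr_type, length = struct.unpack_from("!BB", buf)
    value = buf[2:2 + length]
    return addr_type, value, buf[2 + length:]

# An IPv6 destination and a 2-byte field-device ID, side by side:
dst = encode_address(ADDR_IPV6, ipaddress.IPv6Address("2001:db8::1").packed)
dev = encode_address(ADDR_SHORT_ID, (0x0042).to_bytes(2, "big"))

t, v, rest = decode_address(dst + dev)
assert t == ADDR_IPV6 and len(v) == 16
t, v, _ = decode_address(rest)
assert t == ADDR_SHORT_ID and v == b"\x00\x42"
```

The point of the sketch is only that a length-prefixed encoding lets a border gateway carry heterogeneous address types without forcing every domain onto one fixed 128-bit format.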
23) The Internet architecture has proven to be adaptable as networking technology has evolved over the last 40 years, from 300 baud dial-up modems to multi-gigabit fiber. The decoupling of IP from the underlying network technology provides flexibility to support specific requirements on a particular network while allowing the different networks to be interconnected. Table 1 provides a subset of networking technologies over which IP runs.
The current Internet consists of upwards of 60 thousand independent “islands.”
We agree that the Internet architecture has been very successful in accommodating a wide range of underlying network technologies.
At the same time, it needs to be recognized that the problem of “ossification” is increasingly becoming an obstacle to innovation within the Internet architecture itself. As pointed out in [JRACM], we have many innovations above, below and alongside the network, but limited innovation inside the network. The inside of the network does need to change, and we are desperate to innovate there. User programmability and software-defined networking are steps towards such “inside the network” innovations.
24) These are called autonomous systems, with each making its own technology choices to serve its customers/users and interconnecting using interdomain routing protocols and bilateral agreements.
We are in agreement on that aspect of New IP. Indeed, New IP upholds the “autonomy” in autonomous systems and provides the user freedom in choosing the best effective addressing system for their own domains.
25) Experience has shown that most of the problems (including creation of “islands”) related to interconnecting networks are due to non-technical business, accounting and policy reasons. Defining a new protocol system will not resolve these problems.
The argument is that the existing IP protocol has insufficient capability to express the policy in the packet, especially in business-critical domains such as industrial control systems, and thus we need a new extension to express this policy to serve the industry. There will naturally be a need to solve the business and economic issues, but equally there may be economic incentives and indeed a rebalancing of the Internet economics.
At the same time, it should be pointed out that business and accounting considerations are not orthogonal to the Internet, but impose technical requirements as well. In the area of accounting in particular, the current Internet has significant deficiencies, making it harder to account for the services and service levels that the network delivers. This results in obstacles to the support of novel business models.
26) Deterministic Networking
C83 and its associated tutorials claim that some applications and services have tight timing (e.g., latency, jitter), reliability and loss requirements that are not necessarily met over the Internet today. Examples given of such applications are telemedicine (e.g., remote surgery), industrial, and vehicular applications. While telemedicine, industrial, and vehicular applications have run over the Internet for years, there have been challenges to deploying QoS to meet every demand. Recognizing this, deterministic networking is being studied and standards are being developed in several key organizations:
Efforts on deterministic networking are not, in fact, being developed at a level that satisfies the needs of these applications. We currently have no way of running deterministic networks outside a small, very controlled and possibly single-purpose network, and there is no work on a natively deterministic, high-precision delivery data plane within the IETF. Indeed, restrictions on the ability to change the data plane have prevented the IETF DetNet WG from addressing these missing capabilities. That is why New IP is proposing different solutions in this space.
- IEEE 802.1 Time Sensitive Networking (TSN) Task Group [TSN] is developing extensions to support time sensitive networking using IEEE 802.1 networks.
- IETF Deterministic Networking (detnet) and Reliable and Available Wireless (raw) working groups are developing RFCs to support deterministic networking on routed networks and to interwork with IEEE 802.1 TSN. The IETF’s Transport Area also continues its work in this area, for example its investigation of Low Latency, Low Loss, Scalable Throughput (L4S) Internet Service and active queue management.
- 3GPP is defining standards to support its 5G ultra-reliable low latency communications (URLLC) capability over the Radio Access Network (RAN) as well as interworking with 802.1 TSN networking.
- ITU-T SG15 is working with IEEE 802.1 TSN and 3GPP (5G) related to its transport-related Recommendations.
IEEE addresses it at Layer 2, 3GPP addresses it in the RAN, and ITU-T SG15 addresses it at the optical layer. But there is a need to address it at Layer 3 (the Network layer, the IP layer). There is no agreed method of extending these new radio capabilities back through the IP-based backhaul network and then across the Internet. As chartered in the IETF, and as commented in earlier notes, DetNet does not change the existing data plane and is in fact limited by it. There is a technical gap here when an IP-based data plane is deployed. The life of DetNet would be much easier with New IP as its underlying data plane.
28) The above listed efforts tend to focus on applications that exist within a single administrative domain. Any proposal that claims to guarantee delivery of information over a network within certain parameters must address the physical limitations associated with data traversing distance (e.g., the speed of light).
We do not dispute that: networks are, of course, governed by the laws of physics. Indeed, some networks and services may be geographically constrained. Applications will also require clear indications about which parameters can and cannot be supported for given communication instances. Nevertheless, improvements are possible, and necessary.
29) Intrinsic Security
The third challenge identified in C83 states that “security and trust still needs to be enhanced” and that “a better security and trust model need to be designed and deployed” in addition to promoting “secure and reliable data sharing schemes.” Several areas are called out in the tutorial:
Authenticity (e.g., IP address spoofing)
Accountability vs. Privacy
Confidentiality & Integrity
Availability (Distributed Denial of Service (DDOS) attacks)
While these areas of security would certainly be important for any new ground-up network technology design, solutions to many of these problems already exist in current networking technologies and the last decade has seen a wealth of investment in strengthening them.
However, that work is far from complete, and operates within the constraints of IP, which itself was initially designed to operate in a benign environment. An interesting question that needs to be explored, but that is currently forbidden from consideration within the IETF, concerns the design of a packet that would address these valid, ongoing security issues in an inherent, built-in way, instead of through an add-on approach. As is well known, IPv4 and IPv6 were designed before security features were added.
These add-on features provide some help in alleviating the problems, but they have not solved the fundamental ones. The wealth of investment in strengthening Internet security is, in significant part, a consequence of the design limitations of IP, and despite this investment, DDoS amplification attacks and phishing that exploit the ability to spoof Internet addresses and impersonate another sender remain huge problems, enabled in no small part by those limitations.
Another problem concerns the lack of control that users have over their data (and network traffic). This is not addressed by any effort in the IETF as far as I am aware. The IETF, given its limited success here, should allow for experimentation with and development of new approaches.
30) It is also important to understand the difference between defining a capability in a standard and deploying it in operational networks. For example, methods for authenticating users connecting to the Internet and detecting and preventing IP address spoofing have been defined in RFCs and available on equipment for years, but aren’t necessarily deployed in all networks.
I think that the authors of [SHARP] are overstating the capabilities of those solutions, and in any case they do not provide proof of the assertion. While solutions to some problems may exist, the fact remains that they are add-ons, which require user skill sets, introduce complexity and may give rise to their own second-order problems. New IP prefers a built-in approach over an add-on approach.
31) While it is easy to claim that all these capabilities are intrinsically part of any new network architecture, it is much harder to ensure that they are actually deployed in operational networks.
For example, while IPsec was included in the initial IPv6 specification [RFC1883], it has not been widely utilized especially in consumer markets. While a government can mandate deployment of a new network technology, such a mandate does not enhance global interoperability.
We are not objecting to the continued use of the currently deployed best-effort Internet where it is sufficient. We are investigating the design of alternatives that will be better suited to cases where it is no longer sufficient and limitations are encountered. Whether they will be deployed fundamentally depends on economics, and the economic landscape has changed since IPv6 was first designed. And IPsec is still an add-on feature, not an integral part of IPv6; that fact allows implementations of IPv6 without important security protections.
Furthermore, I am not advocating any government position on New IP. It is my assumption that it will be a voluntary standard, just as IP is.
32) The proposal also doesn’t distinguish between those capabilities that mandate a new architecture vs. those capabilities that could theoretically be run over the current routing infrastructure.
Indeed, we are still in the early stages with this idea. Whether it is viable and successful remains to be validated in the future, but that cannot be taken as the reason for stopping this research.
33) For example, the proposal makes statements regarding the Public Key Infrastructure (PKI) Certificate Authority (CA) system relying on a single point trust anchor or vulnerabilities in key exchange. These are important points of discussion for any architecture, in fact they are being discussed in the relevant communities in the context of the current Internet infrastructure and don’t require a completely new architecture.
That is one view. While what you assert may turn out to be correct, there is no underpinning technical argument that leads directly to this conclusion; more research is required.
34) Finally, networking protocols face inherent trade-offs between openness and security. While lack of ubiquitous deployment of strict mandatory authentication can contribute to spoofing and denial-of-service attacks, it also contributes to the ease of users to connect and reap the benefits of the Internet’s global connectivity.
Also, network operators understand that mandatory authentication adds expense and complexity to network operations.
Indeed, there are trade-offs. At the same time, the assertion that users will have to choose between security and openness is very defeatist. While this may be true today, the goal should be to challenge the need for a choice and instead demand both. This is precisely why investigation of new approaches is needed rather than accepting the status quo. Where hard choices are necessary, users may in fact need to be empowered so that they can control those trade-offs.
Also note that mobile phones are now ubiquitous and have a comprehensive user authentication system. Mobile network operators continue to embrace mandatory authentication, and this will accelerate with billions of new devices connected over mobile networks with every new cellular generation. Similarly, connecting billions of IoT devices is simply not practical without authentication of the devices and establishment of their ownership.
35) Ultra-high throughput, new transport architectures
C83 and its associated tutorials emphasize the need for ultra-high throughput to support future projected applications such as holographic communication. While the bandwidth required for support of such applications will be the subject of research and development over the next decade (e.g., ITU-T SG15 on optical transport, the IEEE P802.3bs Task Force on Terabit Ethernet), the proposal focuses on the need for a new transport architecture, including user-defined customized requests for network service and network-awareness of transport and application.
Indeed, but do we sequence or parallelize these investigations? As correctly noted, SG15 is working on optical transport (Layer 1) and the IEEE on Ethernet (Layer 2). However, there are no efforts or initiatives at Layer 3 (the Network layer, the IP layer). New IP is trying to fill that gap by taking on this issue at Layer 3.
36) The tutorial presented in support of the proposal for work on a new transport contains specifics of the network protocol and network operation clearly oriented toward Huawei’s Big Packet Protocol [BPP] as opposed to laying out requirements indicating a need for a new transport. Huawei has submitted a contribution to SG11 to initiate studies on a new transport protocol [C322].
BPP is an interim solution proposal that offers unique features and addresses use cases and capabilities that were not supported before [BPP]. It is intended to show what can be possible if we challenge the way in which we currently think about networking. That said, it constitutes a starting point. It is a contribution to the discussion and invitation to engage further; it will surely evolve or be replaced as the discussion progresses further. It is fully the intent that when New IP is taken up by an SDO, a clear problem statement and laying out of requirements will have to precede the definition of solutions.
The “transport” should be taken as “moving information from one place to another” rather than a replacement of transport in TCP.
37) While TCP is the most widely used transport protocol on the Internet,
It is, but it was still insufficient for the needs of Google, who proposed QUIC, in hindsight acknowledged to offer genuine improvements that would not have been made had we stuck with TCP. Many examples show that TCP is being replaced by QUIC (running over UDP/IP), which could quietly move TCP into retirement.
38) There has been tremendous focus in recent years on performance improvements, most prominently with the development of the UDP-based QUIC protocol that is expected to become one of the most widely deployed transport protocols on the Internet.
Indeed, but of course this was also a proposal to the IETF that was initiated and driven by an outside force. It was not an initiative that emerged from within the IETF, because the IETF seems to have lost the ability to do major rethinking of its core protocols. New IP can become another success story, much in the same way as QUIC, since the problems, issues and gaps are undeniably real.
39) The IETF continues its work on transport protocols in its Transport Area (tsv) to investigate new requirements and where it can take into account lessons learned from operation of the Internet.
In my view this work is insufficient. In the IETF development model, TSV largely works in isolation, and the only influence it has on the network layer is the reuse of a few bits in the IP protocols, in a way that has to be backwards compatible. Indeed, there seem to be proposals fighting over the reuse of those bits, sometimes even over just a single bit. This, in turn, creates complexity in deployment. A revision to both the network and the transport layer is needed to meet some of the new requirements in harmony. New IP promises to provide more room in its design for the upper layers to use, so that the network layer and transport layer will work in harmony.
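To illustrate how little room the IP header currently offers the transport layer: everything a router can signal to an endpoint about congestion fits in the two ECN bits of the IPv4 TOS / IPv6 Traffic Class octet (RFC 3168), and L4S repurposes one of the four codepoints, ECT(1), which is the single bit being fought over. A small sketch of how that octet is split:

```python
# The IPv4 TOS / IPv6 Traffic Class octet: 6 bits of DSCP + 2 bits of ECN.
ECN_CODEPOINTS = {
    0b00: "Not-ECT",  # not ECN-capable transport
    0b10: "ECT(0)",   # ECN-capable transport
    0b01: "ECT(1)",   # ECN-capable transport (repurposed by L4S)
    0b11: "CE",       # congestion experienced
}

def split_traffic_class(octet: int):
    """Split the traffic-class octet into (DSCP value, ECN codepoint name)."""
    dscp = octet >> 2
    ecn = octet & 0b11
    return dscp, ECN_CODEPOINTS[ecn]

# e.g. DSCP EF (46) with a Congestion Experienced marking:
assert split_traffic_class((46 << 2) | 0b11) == (46, "CE")
```

Two bits is the entire in-band channel from the network to the transport in today's IP data plane, which is the constraint the paragraph above is describing.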
40) The participants in the IETF’s Transport Area have years of experience in developing and operating transport protocols over the Internet. They take into account interaction with currently deployed protocols when investigating new protocols to ensure that new proposals have a viable deployment path and minimize harmful effects on the current Internet. Companies are encouraged to take advantage of this experience when making new proposals to avoid duplicative work streams.
As I noted above, the IETF TSV Area does not operate outside the narrow confines of an end-to-end model layered over a largely opaque IP pipe, and it did not lead the only significant advance beyond TCP. That the system has scaled to the size of the Internet with no explicit intervention from that opaque IP pipe is remarkable, but it is also limiting. Having more explicit in-network help deserves to be explored. New IP intends to take this approach and provide more help to the transport layer. That being said, what works in transport today will still work; New IP will only provide additional, optional mechanisms.
41) Creation and deployment of a new protocol and network architecture in ITU-T as described in the tutorial is likely to create the same interoperability problems the proposal claims to want to avoid.
Thanks to [SHARP] for pointing this out. Interoperability is important for any new protocol development, and New IP is no exception. There have been billions, even trillions, of dollars of investment in the existing infrastructure, and this needs to be preserved until the end of its natural life. Thus, when developing new protocols, we need to protect existing investments.
From Day 1 of the work on New IP, interoperability has been its first requirement. Unlike IPv6, whose goal has been to obsolete IPv4, New IP does not anticipate replacing any existing protocol that is satisfactorily delivering against the needs of its users. New IP does not plan to change the Internet's structure, nor its governance. Rather, the goal of New IP is to support upcoming applications, and those envisioned for the future, by using existing protocols and enhancing them as needed through progressive updates and evolutionary extensions. It will only develop new protocols or protocol components when there is broad agreement that a new protocol is needed.
Emphasis will be given to providing extension mechanisms so that new requirements and needs can be addressed and deployed in a rapid and agile manner. Moreover, when new protocols are developed, they are required to interoperate with existing ones and to be backward compatible. While IP has been very successful for consumer use, where “connectivity” is the key feature, New IP is aimed at business-critical use in industrial machine-type communications with stringent performance targets. New IP upholds the “autonomy” of the “autonomous systems” of the Internet. Economics, and its balancing of costs and rewards, is a key factor in its deployment.
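As a purely illustrative sketch of the kind of extension mechanism described above (my own sketch; New IP has no published wire format, and all type numbers here are invented): optional per-packet capabilities can be carried as type-length-value (TLV) fields, so that a node that does not understand a given type can skip over it. Unknown extensions are then ignored rather than fatal, which is what keeps old implementations working.

```python
import struct

# Hypothetical TLV encoding for optional packet extensions. The type
# numbers and layout are illustrative only; the point is that the
# explicit length lets a receiver skip types it does not understand.

def encode_tlv(ext_type: int, value: bytes) -> bytes:
    """One extension: 1-byte type, 1-byte length, then the value."""
    return struct.pack("!BB", ext_type, len(value)) + value

def decode_tlvs(data: bytes, known_types: set[int]) -> dict[int, bytes]:
    """Parse TLVs, silently skipping types this implementation doesn't know."""
    out, i = {}, 0
    while i + 2 <= len(data):
        ext_type, length = struct.unpack_from("!BB", data, i)
        i += 2
        value, i = data[i:i + length], i + length
        if ext_type in known_types:      # unknown extensions are ignored,
            out[ext_type] = value        # not fatal -- old nodes still work
    return out

LATENCY_BOUND = 0x01   # invented type: per-packet latency budget in ms
FUTURE_EXT = 0x7F      # a type defined after this implementation shipped

packet_exts = (encode_tlv(LATENCY_BOUND, struct.pack("!H", 5))
               + encode_tlv(FUTURE_EXT, b"opaque"))
parsed = decode_tlvs(packet_exts, known_types={LATENCY_BOUND})
assert parsed == {LATENCY_BOUND: struct.pack("!H", 5)}
```

The same skip-what-you-don't-know pattern underlies IPv6 extension headers and many routing-protocol attributes; it is one well-understood way to keep a protocol extensible without breaking deployed nodes.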
42) In addition, networks will continue to migrate to IPv6 over the next decade, with the need to support pockets of IPv4 during that migration. Introducing a new protocol system that is not backward compatible or interoperable with IP (v4 or v6) would require the need for yet another decades-long migration, requiring tens of billions of IP-enabled nodes to interwork and interconnect with the new system.
I am deeply concerned about IPv6 when reading the above. Note that the first IPv6 RFC was published in 1995 [RFC1883]. After 25 years of development, it still needs another 10 years to reach full deployment. That makes 35 years in total, spanning practically the entire career of an engineer. Indeed, this is a big lesson to learn from IPv6.
43) Merely providing a variable-length address does not solve the problem. Creating a new protocol system to “solve” a perceived interoperability problem adds another interoperability problem and because of increased complexity likely adds security and resiliency issues as well.
Firstly, I want to say that the “variable-length address” is still under research and would be used in a deployment area only once it is validated as viable against many requirements. Secondly, variable-length addresses are not New IP’s defining feature, and New IP’s success will not depend on this capability alone. Thirdly, one very nice feature that New IP will offer is Free-Choice Addressing: an operator can choose IPv4, IPv6, or any other addressing system that best serves its applications. In order to seamlessly connect OT networks to the Internet, New IP does not dictate the choice of addressing system.
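To illustrate what Free-Choice Addressing could look like on the wire (again, this is my own hypothetical sketch, not a published New IP format): a header can carry an address-family code and an explicit length before the address bytes, so IPv4, IPv6, and non-IP industrial addresses can all be carried and parsed uniformly. The family codes below are loosely modeled on IANA address-family numbers (1 = IPv4, 2 = IPv6); the "other" code is invented.

```python
import ipaddress
import struct

# Hypothetical free-choice address encoding: 2-byte address-family code,
# 1-byte length, then the address bytes. Family codes 1 and 2 follow the
# IANA address-family numbering; AFI_OTHER is an invented placeholder for
# an OT-specific address space.

AFI_IPV4, AFI_IPV6, AFI_OTHER = 1, 2, 16384

def encode_address(afi: int, addr: bytes) -> bytes:
    return struct.pack("!HB", afi, len(addr)) + addr

def decode_address(data: bytes):
    """Return (address, remaining bytes); unknown families stay opaque."""
    afi, length = struct.unpack_from("!HB", data)
    addr, rest = data[3:3 + length], data[3 + length:]
    if afi == AFI_IPV4:
        return ipaddress.IPv4Address(addr), rest
    if afi == AFI_IPV6:
        return ipaddress.IPv6Address(addr), rest
    return addr, rest   # e.g. a fieldbus node id, carried as-is

wire = (encode_address(AFI_IPV4, ipaddress.IPv4Address("192.0.2.1").packed)
        + encode_address(AFI_OTHER, b"\x0a\x2f"))  # 2-byte industrial node id
addr1, rest = decode_address(wire)
addr2, _ = decode_address(rest)
assert str(addr1) == "192.0.2.1" and addr2 == b"\x0a\x2f"
```

The design point is that the length is carried explicitly, so routers that merely forward can treat addresses of any family as opaque byte strings of known size.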
It was pointed out to me by a very senior IETF expert and activist that if the IETF had chosen ISO 8473 when designing IPv6, it would have ended up much closer to where it needs to be than it actually is. That is, of course, water under the bridge, but one question about making mistakes is how long you perpetuate them.
44) Although these capabilities were implemented, trialled, and deployed in a limited manner on specific networks (e.g., enterprise), they were never rolled out in the Internet as a generally available service. The complexity and cost of deploying and operating such a service, especially across domains operated by different business entities, were significant reasons for lack of deployment on a global scale [PANRNT]. Any service that requires allocation of per-router per-flow resources is likely to run into similar obstacles [HUSTON].
[SHARP] did not point to Geoff Huston’s other work on the death of transit [DoT]. This describes a significant de facto change to the Internet architecture that has largely been ignored by the IETF. If we use that model as a base, we see that the difficulties of deploying these types of technology are reduced, and the possibility of rebalancing the network revenues between the OTT providers and the access network providers exists. Once we head in that direction we have to ask ourselves whether the older designs that you point to are the best that we can do, and whether IPv6 is the best conduit. Those are questions that we think need serious study. However, New IP by itself is not involved in such disputes.
45) Such prospective deployments tie into business agreements, the need to account and bill for usage of enhanced service, and the allocation of resources for the enhanced service that could be used for basic service. Those non-technical costs generally outweighed the benefits of enhanced services and are not addressed by C83 or its associated tutorials. Based on experience in operational networks, less fine-grained capabilities were developed (e.g., Differentiated Services (diffserv)) for traffic engineering.
There is a lot of assertion in those statements without considering the change in the Internet business model. If you have not done so, I urge you to look at Geoff Huston’s “Death of Transit” paper and take an open-minded look at both the implications and the opportunities. There is old wisdom that what was not possible in the past (due to technical, engineering, or economic reasons) may be possible today.
46) The IETF and others (e.g., IEEE, ITU-T SG15) have evolved their protocols to provide building blocks of mostly independent utility to address identified needs. This flexibility allows network operators to utilize those building blocks needed to provide the desired services. This allows the Internet to evolve to meet new challenges. RFC 5218 [RFC5218] provides general principles and case studies for success factors in developing new protocols.
Yes indeed. At the same time, the authors of [SHARP] fail to note the fundamental limitations of the 25-year-old Internet Protocol itself [RFC1883], discussion of which seems to be off-limits even though it imposes many hard-to-overcome constraints. We feel that it is time to ask what we might achieve if that restriction were lifted. Following this line of thinking, there is absolutely no reason why New IP cannot be taken as a new building block where it is useful in some autonomous systems.
47) While it is tempting to develop an integrated “top-down” design of a global network architecture defining a completely new protocol system meeting all possible requirements, the end result of such efforts has usually been for network operators to pick out pieces of the architectures of most utility (e.g., ATM PVCs) and leaving the rest.
While that is true, you do need a number of components to deliver the enhanced capability, and they have to work together to achieve the required end result. The term “top-down” simply describes a way of performing the work; its synonyms include “vision-driven approach” and “goal-oriented approach”, which have proven, in my opinion, very successful in SDOs such as 3GPP. The “vision” is often specified as “use cases and requirements”. The work starts with a vision or a goal, which is then decomposed into a set of sub-goals or detailed requirements, and ends with a solution produced by collaborative work on those sub-goals. “Top-down” by itself is just a methodology for performing work; it is not a symbol of failure or mistake, and, importantly, it has nothing to do with any physical deployment. Whether the approach is top-down, bottom-up, or both is a matter of preference and practicality, and mostly of economics.
It is also true that no one can anticipate all the exact requirements of the future. For this reason, we need to have extension hooks in place that allow us to accommodate different (and additional) features and so avoid ossification. We need to embrace innovations that allow network engineers to tune network behaviour in ways that can be adapted to a broad array of requirements of new services. These, in turn, can accommodate novel business accounting schemes that support a healthier and more sustainable economic ecosystem, as is being studied and proposed in New IP.
48) Decades of experience with the development of Internet protocols demonstrated the importance of the critical feedback loop between implementation, deployment, and protocol design. As draft protocols get implemented and tested, bugs and optimizations are discovered. Data is gathered that is then fed back into the design before it gets finalized.
The IETF embedded this feedback loop into the standardization process. At times dozens of independent implementations are being developed and deployed at scale prior to the standardization of a new protocol.
The benefits of feedback loops in the standardization process are well understood in the New IP community. They undoubtedly help greatly to improve the final product over the initial proposal. This is precisely why we are looking to engage with SDOs. We are not looking for a “rubber stamp”. We are looking for feedback from implementation in an open and wider community, and we want to engage such a community to jointly develop and improve the design. This is why we are putting such effort into SDO engagements.
49) A successful model for developing an overall architecture from some organizations (e.g. 3GPP) has been to identify the services and requirements and then work with the appropriate standards organizations to enhance existing protocols, or develop new ones if shown to be needed.
The 3GPP model is largely a top down approach, and New IP designers recognize its success and advocate this approach. It starts with a vision or a goal for the future, and ends with a holistic solution.
While it is important to take a long-term view and develop potential use cases for future networking, it is also important to recognize that research topics are not generally appropriate for standards development. Technology should reach a sufficiently mature level of understanding before international standardization. For example, as stated in SG16’s response to the liaison regarding “New IP”, related to the proposed work on hologram communications [TD697]:
Given that the hologram is still in very early stage of research, SG16 does not have a technology base on the hologram. It is premature for SG16 to start the hologram-specific content delivery work.
I am happy to hear that [SHARP] agrees on the importance of taking a long-term view and developing potential use cases for future networking. At the same time, [SHARP] seems to suggest that we should not initiate New IP work because SG16 has stated that it is premature for it to start work on the delivery of holographic content, one of New IP’s potential applications in the year 2030 and beyond.
It is a misunderstanding by [SHARP] to equate New IP with hologram applications. Holograms are often cited as one example (but not the only one) of the need to deliver very high throughput, coupled with highly dynamic adaptation of data streams, for applications in the year 2030 and beyond. When new protocols are being designed, long-term use cases such as holographic-type communications should be envisioned and taken into proper consideration so that “Internet ossification” does not hamper support for such longer-term use cases once they are ready to become reality. This mindset will help ensure that new protocols survive for the foreseeable future. New IP is aimed at supporting upcoming applications as well as applications envisioned for the longer term.
51) The studies underway in the FG NET-2030, once completed and analyzed by the Study Groups, might provide direction for research and development of technologies and identify areas to monitor for future standardization in the appropriate venue. While some of its work might be used to provide direction for research, they won’t necessarily provide a basis for standardization of protocols. As mentioned previously, the IRTF has research groups already engaged in some of the areas identified by FG NET-2030.
Sadly, despite our best efforts, the IRTF is not interested in researching new network layer protocols, or even in examining the merits of the existing ones against new application requirements. I am concerned that [SHARP] takes a similar stance: the IETF does not want to take a fundamental look at its key technologies, and, through [SHARP] and the IETF’s liaison response, it is trying to ensure that no other SDO does so either.
52) From its inception, the Internet was designed to interconnect heterogeneous networks. The alleged challenges mentioned in C83 have been addressed, or are currently being addressed, in organizations such as IETF, IEEE, 3GPP, ITU-T SG15. Creating overlapping work is duplicative and costly. In the end, it does not enhance interoperability.
That is an assertion that does not sustain close scrutiny, as discussed above. While it is true that the Internet was designed to interconnect heterogeneous networks, autonomy and heterogeneity in autonomous systems are being lost through the IETF’s effort at homogenization via a “one size fits all” approach. This is leading to so-called “consolidation” and “ossification” [GIR2019] [MN]. While Internet consolidation reduces opportunities for market entry and competition, Internet ossification interferes with the ability to address new needs and adapt to new requirements on an acceptable time scale. This, in turn, does no good to the Internet’s purpose of serving people, the economy, and industry. On the other hand, while the existing IP, as a general-purpose network layer protocol, has been successful at providing “connectivity” for consumer use, there is no evidence that it is the best candidate for industrial uses, which often require stringent business-critical performance metrics. As discussed earlier, with 3GPP working on radio, IEEE on Layer 2, and ITU-T SG15 on optical, there is a need to solve the identified problems at the network layer, which is largely what New IP aims to do.
53) Proposals for new protocol systems and architectures should definitively show why the existing work is not sufficient. Creating a new protocol system will require yet another expensive migration effort on top of the current migration to 5G, NGN and IPv6. Member States should consider sunk cost, investment protection, and compatibility with the embedded base.
Indeed, we should only do this if there is a long-term and significant economic benefit. Can I take it that, if such benefit were demonstrated, ISOC would wholeheartedly provide its support even if there were no corresponding support in the IETF? New IP is not a trick-or-treat project; it is motivated by good industrial and economic reasons.
54) The studies underway in the FG NET-2030 could also provide direction for research and development of technologies for monitoring to determine the need for standardization. It would be premature to start work on new protocol systems before the FG NET-2030 completes its work and the Study Groups have had a chance to analyze it. That analysis should consider current efforts and architectures.
This is largely a distraction. Firstly, New IP started much earlier than Network 2030. Secondly, New IP does not depend solely on Network 2030 [TD757]. Thirdly, the use cases and network requirements that may relate to New IP have already been published [UC2030]. Fourthly, New IP is proposed for study from the year 2021, by which time FG NET-2030 will have completed its current life cycle. Fifthly, research can hardly be linearized, and from a project-management point of view parallel execution leads to faster results than sequential execution. Lastly, some network operators and industrial manufacturing-related companies have expressed interest in a New IP solution at Layer 3, rather than an optical solution at Layer 1 or a TSN solution at Layer 2.
55) Consideration of a new protocol system must take into account the embedded base of equipment and operational systems supporting the multi-billion dollar global online economy.
Indeed, New IP intends to provide an opportunity for the network operators to rebalance revenues by providing advanced network services and creating new business models from it. A healthy win-win-win business eco-system, from the front end (client/application) to the network to the back end (service), is always a good direction to go.
56) Developing a new protocol system is likely to create multiple non-interoperable networks, defeating one of the main purposes of developing the new protocol architecture.
We need to properly understand the existing deployment model and the trajectory it is on before making that assertion. That is why I consider that understanding Huston’s seminal work on the death of transit [DoT] is fundamental to understanding the network requirements in the 2030 timeframe.
57) A better way forward would be to allow the FG NET-2030 to complete its work, review the use cases developed as part of the Focus Group’s outcomes and encourage all parties to further those, as far as they are not already under investigation, in the relevant SDOs.
I mostly agree, but I think something more fundamental is needed. While we do need FG NET-2030 to complete its work, we also need to fundamentally review whether IPv6 as it stands can get us to where we need to be, or whether we need to make fundamental changes. I agree that technology religion should be left to one side and that an open, objective, and fundamental review of how best to deliver those requirements should be undertaken. Such a fundamental review should be added, if it is not already included, as a new sub-task of the proposed question for the next study period.
The existing IP has been successful in consumer domains, where “connectivity” is the central goal. It has fundamental limitations when used to support, for example, business-critical applications with the stringent performance metrics commonly found in industrial domains. New IP therefore deserves to be an option for network operators and application developers. With New IP, more networks and terminals in the OT domains can be connected to the Internet. While remaining compatible with the existing IP, New IP complements the capabilities and services that the existing IP provides, through optimizations, extensions and improvements.
New IP has grown from a small project into a multi-party community project with participation from many organizations around the world. Some network operators and industrial manufacturing-related companies have shown interest in New IP. Although there have been some Proof-of-Concept (PoC) implementations and publications on New IP and its components, there is no official standard yet, and no Standards Developing Organization (SDO) has accepted New IP as a starting point for standardization.
As is well recognized, the Internet is a network of networks that works around the world as if it were one, and every autonomous system is, by definition, autonomous and independent. ISOC should uphold the basic principles that have made the Internet thrive over the last 40 years: “autonomy” in autonomous systems, “independence” in operations, “openness” to everyone everywhere, and “freedom” in building and connecting networks and applications. New IP is such an additional choice, one that complements the capabilities of existing protocols in order to address their limitations, while remaining interoperable with them.
As noted above, some of the statements, assertions, and opinions in [SHARP] are misleading, over-stated, speculative, or insufficiently justified. The conclusions of [SHARP] do not do justice to New IP, and in turn they will do harm to the future evolution of the Internet and to the convergence of IT and OT. Therefore, [SHARP] should not be used as ISOC’s official position. Instead, ISOC should take the above notes and comments into consideration, and should welcome, encourage and support all efforts, including New IP, that make the Internet better serve people, the economy and industry. ISOC should take this position regardless of whether such efforts are made within the IETF, ITU-T, or elsewhere. ISOC should thus encourage SDOs to initiate the standardization of New IP.
As Wayne Dyer said, “if you change the way you look at things, the things you look at change.” I ask the authors of [SHARP] and the whole community to change the way they look at IP, where they will find there is a need for innovation. I then ask the authors and the whole community to change the way they look at New IP and I hope that they can see its values and merits.
[SHARP] H. Sharp, O. Kolkman, An Analysis of the “New IP” Proposal to the ITU-T, 2020
[C83] “New IP, Shaping Future Network”: Propose to initiate the discussion of strategy transformation for ITU-T, TSAG-C83R1, Geneva, 23-27 September 2019
[TD757] TSAG Information Session on Network 2030, ITU-T TSAG-TD757, Geneva, February 12, 2020
[JRACM] J. Rexford, Networks Capable of Change, Keynote Speech, ACM Sigcomm 2018, Budapest, 2018
[BPP] R. Li, K. Makhijani, H. Yousefi, C. Westphal, L. Dong, T. Wauters, and F. D. Turck. A framework for qualitative communications using big packet protocol. ACM SIGCOMM Workshop on Networking for Emerging Applications and Technologies (NEAT’19), 2019.
[NIP] R. Li, New IP and Market Opportunities, Keynote Speech, IEEE International Conference on High Performance Switching and Routing (HPSR 2020), 2020
[NDPF] R. Li, K. Makhijani, L. Dong, New IP: A Data Packet Framework to Evolve the Internet, Invited Paper, IEEE International Conference on High Performance Switching and Routing (HPSR 2020), 2020
[C322] T17-SG11-C-0322. Source: Huawei Technologies. Propose new research for next study period: the New Transport Layer (Layer-4) Protocols. Geneva, 16-25 October 2019.
[HUSTON] Huston, G., “The QoS Emperor’s Wardrobe”. The ISP Column, 2012-06. <https://labs.ripe.net/Members/gih/the-qos-emperors-wardrobe>
[DoT] G. Huston, The Death of Transit and the Future Internet, Keynote Speech at 2nd ITU-T Workshop on Network 2030, Hong Kong, Dec. 2018
[MN] M. Ammar, Service-Infrastructure Cycle, Ossification, and the Fragmentation of the Internet, Keynote Speech at 3rd ITU-T Workshop on Network 2030, London, UK, Feb. 2019
[GIR2019] C. Bommelaer de Leusse, Carl Gahnberg, The Global Internet Report: Consolidation in the Internet Economy, ISOC Report, February 26, 2019
[UC2030] Representative use cases and key network requirements for Network 2030, ITU-T Focus Group on Network 2030, https://www.itu.int/pub/T-FG-NET2030-2020-SUB.G1, 2020
[PANRNT] Dawkins, Spencer, “Path Aware Networking: Obstacles to Deployment (A Bestiary of Roads Not Taken)”, draft-irtf-panrg-what-not-to-do-07 (Work in Progress), January 2020, <https://datatracker.ietf.org/doc/html/draft-irtf-panrg-what-not-to-do-07>.
[RFC1633] Braden, R., Clark, D., and S. Shenker, “Integrated Services in the Internet Architecture: an Overview”, RFC 1633, DOI 10.17487/RFC1633, June 1994, <https://www.rfc-editor.org/info/rfc1633>.
[RFC1883] Deering, S. and R. Hinden, “Internet Protocol, Version 6 (IPv6) Specification”, RFC 1883, DOI 10.17487/RFC1883, December 1995, <https://www.rfc-editor.org/info/rfc1883>.
[RFC5218] Thaler, D. and B. Aboba, “What Makes for a Successful Protocol?”, RFC 5218, DOI 10.17487/RFC5218, July 2008, <https://www.rfc-editor.org/info/rfc5218>.
[TD598] TSAG-TD598, Source: Director, TSB, “Tutorial on C83 – New IP: Shaping the Future Network”. Geneva, 23-27 September 2019.
[TD697] TSAG-TD697, Source: Study Group 16, “LS/r on new IP, shaping future network (TSAG-LS23) [from ITU-T SG16]”, Geneva, 10-14 February 2020.
[TSN] IEEE Time-Sensitive Networking Task Group: https://1.ieee802.org/tsn/
Standards Groups Mentioned
Broadband Forum (BBF): https://www.broadband-forum.org/
3rd Generation Partnership Project (3GPP): https://www.3gpp.org
Institute of Electrical and Electronics Engineers – Standards Association (IEEE-SA): https://standards.ieee.org
International Telecommunication Union – Telecommunication Standardization Sector (ITU-T): https://www.itu.int/en/ITU-T/studygroups/2017-2020/Pages/default.aspx
ITU-T Study Groups (Study Period 2017-2020)