Debugging ICE in WebRTC

 Highlights

 Impact on my application

 Standardization status

 Details

 

 Highlights

In our last live WebRTC Q&A session we talked about the virtual face-to-face standards meeting held on February 25th (minutes are here). This virtual standards meeting was focused on ICE, Interactive Connectivity Establishment, the algorithms and protocols used to figure out functional network paths for real-time traffic in the face of routers and firewalls.

Two topics were at the center of the discussion:

  • More info for debugging ICE failures
  • A new end-of-candidates indicator and understanding when candidate pair checking is complete

This post is about the first topic of ICE debugging.

A non-technical but important point about this particular virtual standards meeting was that discussions actually finished early. That is uncommon: standards discussions usually feature differing opinions and lively debate, and reaching consensus is rarely trivial. When agreement comes easily, it is usually a sign that the standard is maturing and that progress will accelerate.

 

 Impact on my application

Once finalized and implemented, debugging ICE failures will be easier.

 

 Standardization status

Pull requests exist and are being applied to the specification as they get reviewed.

 

 Details

At the recent WebRTC ICE-focused videoconference, the standards group discussed the need for more debugging information in the specifications, specifically around ICE. The items discussed were really a loose set of areas where more information was desired.

The list of statistics in the statistics draft does not match all of the properties in the WebRTC specification, which has grown organically. One area with obvious differences is the RTCIceCandidateAttributes dictionary in the stats specification, which roughly corresponds to the RTCIceCandidate interface in WebRTC. The plan now is to align the two documents. There is some disagreement about how to do this, but the goal of alignment is at least agreed upon.
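Until the two documents are aligned, applications typically read candidate information out of the stats report by filtering entries on their type field. A minimal sketch, written as a pure function over an array of stats entries so it can be tried outside a browser (the "local-candidate" and "remote-candidate" type strings match the stats draft's conventions, but check the current spec):

```javascript
// Pull ICE candidate entries out of a stats report.
// In a browser, pc.getStats() resolves to a report whose values
// are stats objects, each carrying a "type" field; here we take
// them as a plain array for illustration.
function iceCandidateStats(statsEntries) {
  return statsEntries.filter(
    (s) => s.type === 'local-candidate' || s.type === 'remote-candidate');
}

// Browser usage would look roughly like:
// const report = await pc.getStats();
// const candidates = iceCandidateStats([...report.values()]);
```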

Several new statistics were agreed to be needed.  Two in particular were discussed:

  • Providing a running total of the round-trip times for STUN checks on each candidate pair.  JS code can then compute averages over any time period, as well as histograms.
  • We want total counts, per candidate pair, of the consent checks sent and the responses received.  We *may* also want counts of the checks sent from the other side, since that can help clarify what the other side is trying.  With these counters it is possible to determine ongoing loss rates for STUN consent checks, which could be very useful in debugging.  All names for these counters are, of course, TBD.
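The computations these counters enable can be sketched in a few lines of JS. The stat names below (totalRoundTripTime, requestsSent, responsesReceived) are illustrative only, since the article notes the actual names were still TBD at the time of the meeting:

```javascript
// Average STUN round-trip time for a candidate pair, given a
// running sum of RTTs and a count of responses received.
// Field names are hypothetical placeholders for the TBD spec names.
function averageRoundTripTime(pairStats) {
  if (pairStats.responsesReceived === 0) return 0;
  return pairStats.totalRoundTripTime / pairStats.responsesReceived;
}

// Ongoing consent-check loss rate: the fraction of checks sent
// that never received a response.
function consentLossRate(pairStats) {
  if (pairStats.requestsSent === 0) return 0;
  return 1 - pairStats.responsesReceived / pairStats.requestsSent;
}
```

Because the counters are running totals, an application can sample them periodically and diff successive samples to get averages and loss rates over any window it likes, not just over the lifetime of the connection.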

Another area singled out for improvement in debugging and error handling was the error information received during the gathering and connectivity-check phases.  Interestingly, there wasn’t much support for error codes (error reasons) when a connectivity check fails; attendees just wanted to know that a failure occurred, which is already possible via the iceConnectionState property.  However, the specification already allows error codes for gathering errors (errorCode in icecandidateerror) but doesn’t define the allowed list.  That will be added.
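For gathering errors, the hook already exists today: the icecandidateerror event carries errorCode, errorText, and url fields, and only the list of allowed errorCode values remains to be pinned down. A minimal sketch, written as a pure function over an event-like object so it is easy to exercise outside a browser:

```javascript
// Format a gathering error from an icecandidateerror-style event.
// The fields used (errorCode, errorText, url) follow the shape of
// the event in the WebRTC specification.
function describeIceCandidateError(event) {
  return `ICE gathering error ${event.errorCode} ` +
         `from ${event.url}: ${event.errorText}`;
}

// In a browser this would be wired up roughly as:
// pc.addEventListener('icecandidateerror',
//   (e) => console.warn(describeIceCandidateError(e)));
```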