
Support ICE Consent Freshness (RFC7675) #1127

Closed
penguinol opened this issue Jul 24, 2023 · 16 comments · Fixed by #1332

penguinol (Contributor) commented Jul 24, 2023

Feature Request

https://datatracker.ietf.org/doc/html/rfc7675

In both WHEP and WHIP, RFC7675 is used to detect clients that shut down abnormally, so we can remove them on the server.
In short, RFC7675 adds a timer that checks whether the ICE server has received a Binding request in the last 30 seconds; if not, the session is terminated.
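
For illustration, the mechanism described above can be sketched as a simple server-side watchdog (hypothetical names, not mediasoup's actual implementation): reset a timer whenever a STUN Binding request arrives and terminate the session if none arrives within 30 seconds.

```ts
// Minimal sketch of the RFC7675 idea described above (hypothetical names,
// not mediasoup's actual implementation): reset a timer on every STUN
// Binding request and terminate the session if none arrives within 30 s.
class ConsentWatchdog {
  private timer?: ReturnType<typeof setTimeout>;

  constructor(
    private readonly onExpired: () => void,
    private readonly timeoutMs = 30_000
  ) {}

  // Call this each time a STUN Binding request (consent check) is received.
  onBindingRequest(): void {
    this.restart();
  }

  start(): void {
    this.restart();
  }

  stop(): void {
    if (this.timer) clearTimeout(this.timer);
  }

  private restart(): void {
    this.stop();
    this.timer = setTimeout(this.onExpired, this.timeoutMs);
  }
}

// Usage: close the transport when consent expires.
const watchdog = new ConsentWatchdog(() => {
  console.log('no Binding request received in 30 s, terminating session');
});
watchdog.start();
```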

penguinol (Contributor, Author) commented:

It seems RFC7675 is not suitable for ICE-Lite, but we still need a way to detect unexpectedly closed clients.

Here are some similar issues in Pion:
pion/ice#288
pion/webrtc#2045

ibc (Member) commented Aug 24, 2023

We don't have a way to detect unexpectedly closed plain RTP clients. Such a thing doesn't even exist in the RTP protocol. Why should we care about WebRTC (ICE) unexpected closures? This can easily be done at the application level by implementing a keep-alive mechanism in which the server sends "ping" messages over a DataChannel to the client and the client must reply with "pong".

We should not invent things that do not exist in the RTC specs.
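
For illustration, the client side of such a DataChannel keep-alive could look roughly like this (a sketch using the standard browser API; the channel label and messages are illustrative, and the server-side mirror would send "ping" periodically and close the transport if no "pong" arrives in time):

```ts
// Sketch of the application-level keep-alive described above, client side.
// The "keepalive" label and the "ping"/"pong" messages are illustrative.
const pc = new RTCPeerConnection();
const channel = pc.createDataChannel('keepalive');

// Reply to every server "ping" with a "pong" so the server knows we are alive.
channel.onmessage = (event: MessageEvent) => {
  if (event.data === 'ping') {
    channel.send('pong');
  }
};
```

On the server, the DataChannel would typically be backed by a mediasoup DataProducer/DataConsumer pair, and a timer like the one sketched earlier could close the transport when no "pong" is seen.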

ibc (Member) commented Aug 24, 2023

The only idea coming to my mind would be this:

  • Once ICE+DTLS is established in a WebRtcTransport, mediasoup sends periodic DTLS messages to the client and expects a "response" in return.
  • But I don't think those DTLS messages exist in the DTLS spec, and I don't even think we can trigger them using OpenSSL.

penguinol (Contributor, Author) commented Aug 24, 2023

Because WHEP and WHIP use HTTP as their protocol and do not define a keep-alive message, they both use RFC7675 for this.

https://datatracker.ietf.org/doc/draft-murillo-whep/

Once a session is set up, ICE consent freshness [RFC7675] SHALL be used to detect abrupt disconnection and DTLS teardown for session termination by either side.

https://datatracker.ietf.org/doc/draft-ietf-wish-whip/

Once a session is setup, ICE consent freshness [RFC7675] SHALL be used to detect non graceful disconnection and DTLS teardown for session termination by either side.

ibc (Member) commented Aug 24, 2023

Because WHEP and WHIP use HTTP as their protocol and do not define a keep-alive message, they both use RFC7675 for this.

Yes, but mediasoup is ICE Lite so it cannot send ICE Binding requests to clients, and even if we sent them we couldn't reliably expect ICE answers from those clients since, as per spec, they are not mandated to reply to ICE requests received from ICE Lite endpoints (mediasoup).

ibc added a commit that referenced this issue Feb 15, 2024
- Fixes #1127

### Details

- This PR provides the app with the ability to detect downed networks or silent/abrupt disconnections on the client side.
- Make mediasoup send ICE consent requests every ~5 seconds once ICE is established.
- Move to 'disconnected' state (and notify the app as usual) if no ICE response is received in 30 seconds as per spec (this may change with a setting, see TODO below).
- This feature requires providing the mediasoup `WebRtcTransport` with the remote ICE username fragment and ICE password, which will be done by the latest mediasoup-client once released.

### TODO

- [ ] Add an `iceConsentTimeout` parameter to `WebRtcTransportOptions` to override the default value of 30 seconds (0 means "disable the mechanism"); a usage sketch follows after this list.
- [ ] Rust.
- [ ] React to 403 ICE error responses by also moving to 'disconnected' state.
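
For illustration, once the `iceConsentTimeout` option from the TODO above lands, server-side usage might look like this. This is a sketch under that assumption: `createWebRtcTransport()`, `listenIps` and the 'icestatechange' event are existing mediasoup APIs, while the option name, units (seconds) and the "0 disables it" semantics follow this PR's description and may differ in the released version; the IPs are placeholders.

```ts
import * as mediasoup from 'mediasoup';

async function createTransport(router: mediasoup.types.Router) {
  const transport = await router.createWebRtcTransport({
    // Placeholder addresses for illustration only.
    listenIps: [{ ip: '0.0.0.0', announcedIp: '203.0.113.1' }],
    // Assumed option from the TODO above: move to 'disconnected' if no ICE
    // consent traffic is seen for 30 seconds (0 would disable the mechanism).
    iceConsentTimeout: 30
  });

  // 'disconnected' is the state this PR reports when consent expires.
  transport.on('icestatechange', (iceState) => {
    if (iceState === 'disconnected') {
      console.log('client appears to be gone, closing transport');
      transport.close();
    }
  });

  return transport;
}
```
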
ibc (Member) commented Feb 15, 2024

@penguinol you want to see this :)

#1332

ggarber (Contributor) commented Feb 19, 2024

@ibc What is the benefit of sending ICE Binding requests MS->clients instead of relying on missing ICE Binding requests clients->MS to detect when the connection is disconnected?
Relying on the clients' requests looks like a more common and more efficient approach, but maybe I'm missing something. (I have discussed this with other people and they shared the same comment/concern.)

ibc (Member) commented Feb 19, 2024

What is the benefit of sending ICE Binding requests MS->clients instead of relying on missing ICE Binding requests clients->MS to detect when the connection is disconnected?

The client->MS periodic ICE consent mechanism is not mandatory, and there could be ICE clients that do not implement it. We cannot assume that all clients implement it.

ggarber (Contributor) commented Feb 19, 2024

All endpoints MUST send keepalives for each data session. These keepalives serve the purpose of keeping NAT bindings alive for the data session. The keepalives SHOULD be sent using a format that is supported by its peer. ICE endpoints allow for STUN-based keepalives for UDP streams, and as such, STUN keepalives MUST be used when an ICE agent is a full ICE implementation and is communicating with a peer that supports ICE (lite or full).

https://datatracker.ietf.org/doc/html/rfc8445#section-11

ibc (Member) commented Feb 19, 2024

What exactly is wrong with being explicit and sending requests ourselves? Or is it just that, since there is an alternative legitimate approach, we have to undo the ongoing work?

ibc (Member) commented Feb 19, 2024

The funny thing is that you are right: we could just run a timeout and report disconnection if no consent request was received in the last 30 seconds. HOWEVER, the ongoing PR for this task has made an underlying bug show up (maybe the server-side timeout approach you suggest would also expose it), so I want to fix that first. It's explained in the PR.

ibc (Member) commented Feb 19, 2024

@ggarber I hate you. I'll refactor the PR to follow your proposal.

fippo (Contributor) commented Feb 19, 2024

The argument I've seen for doing a STUN ping from server to client anyway is 💩 enterprise firewalls which let outgoing traffic through but block all incoming traffic.

ibc (Member) commented Feb 19, 2024

The argument I've seen for doing a STUN ping from server to client anyway is 💩 enterprise firewalls which let outgoing traffic through but block all incoming traffic.

So that's a reason to rely on consent from client to server rather than from server to client, right? And I don't understand it anyway: the NAT is open, otherwise there couldn't be incoming and outgoing ICE (requests or responses), DTLS, RTP, etc. So I don't get this argument at all.

And honestly, I'm redoing this right now and it's being a pain, so I'd prefer not to discuss this ad aeternum. I just assume that relying on consent requests sent by the client is valid for ALL network scenarios.

fippo (Contributor) commented Feb 19, 2024

No, that would be a reason to do what you proposed in addition to what @ggarber proposed. Such "enterprise" "firewalls" (quotes intentional!) sometimes break enough stuff to get to a state where the DTLS handshake completes (over a different path that works in both directions, such as TURN/UDP), but then the higher-priority "direct" pair takes over sending from the client while the reverse channel is broken.

ibc (Member) commented Feb 19, 2024

I promise I'm not gonna implement consent checks in both directions (AKA sending server-side requests plus checking received requests from clients).
