Large amounts of data in EAP-TTLS

Alan DeKok aland at deployingradius.com
Mon Nov 19 21:57:52 EST 2007


  I've been investigating using EAP-TTLS as a transport protocol for
other data (don't ask...), and ran into some issues.  When the server
(FreeRADIUS) sends a few hundred bytes inside of the TTLS tunnel,
eapol_test thinks that the data is truncated.

  After some investigation, I found that eap_ttls_decrypt() in eap_ttls.c
allocates the decryption buffer based on the size of the encrypted data:

	...
	if (data->ssl.tls_in_total > buf_len)
		buf_len = data->ssl.tls_in_total;
	...

  It then decrypts the input data into that buffer.  This works in many
cases, but it presumes that the decrypted data is roughly the same size
as the encrypted data, or smaller.

  If compression is enabled in the TLS session, then the decrypted data
could be much larger than the encrypted portion.  Of course, my tests
ran into this, because I was sending large amounts of identical data as
padding, just to test the system.

  If I force "buf_len = 8192", just to be wasteful, there is enough room
for the decrypted data, and my tests proceed as expected.

  Is there a correct way to fix this?  Setting the phase2 buffer size
to something large works, but isn't optimal.  An alternative would be to
keep reading from the SSL context until there's no more data, but that
would require additional code to handle the subsequent memory allocations.

  Comments?

  Alan DeKok.



More information about the HostAP mailing list