Sunday, March 16, 2014

Why is mobile web browsing slow?


    We all know that browsing the web over a cellular connection is slow, and we think we know why: latency introduced by the wireless network.  But that latency is at most about 150ms, so on the surface it shouldn't affect the overall time that much.  Why, then, do we feel such an obvious slowdown?

    I ran an experiment and dug into the network packets for an HTML page download.  It turned out the culprit was something I was not aware of: TCP congestion control.

   The experiment was done by browsing a test web page on my server from my Android phone (AT&T).  First, Wi-Fi was turned off so that the phone used only the cellular connection.  Then I started Wireshark, the famous packet sniffer, on the server side.  The server is a libevent-based server running on Ubuntu 12.04 with kernel 3.2.0-59-generic.  The test was done while the (dedicated) server had very little load (<1% CPU).
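
   For readers who would rather script this than click through the Wireshark UI, here is a minimal sketch of how the same per-packet summary could be produced.  It assumes the session was saved to a file called session.pcap and that the Python scapy package is installed; the filename and the formatting are mine, not part of the capture.

  # Read a saved capture and print one line per TCP packet:
  # relative time, source, destination, and whether it is a SYN,
  # a bare ACK (omitted from the table below) or a data packet.
  from scapy.all import rdpcap, IP, TCP

  packets = rdpcap("session.pcap")          # assumed capture file name
  start = packets[0].time                   # time of the first packet (the SYN)

  for pkt in packets:
      if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
          continue
      tcp = pkt[TCP]
      if int(tcp.flags) & 0x02:             # SYN bit set (SYN or SYN-ACK)
          kind = "SYN"
      elif len(tcp.payload) == 0:
          kind = "ACK only"
      else:
          kind = "data, %d bytes" % len(tcp.payload)
      print("%9.6f  %-15s  %-15s  %s"
            % (float(pkt.time - start), pkt[IP].src, pkt[IP].dst, kind))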

   I grabbed the TCP session for downloading an HTML page and summarized the packets below.  The IP address of the server was changed to 11.11.11.11 to protect my server's identity (I don't want others testing my server :-)).  To make it easier to read, I removed the TCP ACK packets, so other than the first two packets, all the others are TCP data packets.


 Time (sec) Source IP     Destination IP   Info
 0.000000 107.77.66.120 11.11.11.11    TCP SYN
 0.000052 11.11.11.11 107.77.66.120    TCP SYN_ACK
 0.046860 107.77.66.120 11.11.11.11    HTTP GET /TestWebPage
 0.047456 11.11.11.11 107.77.66.120    data
 0.047484 11.11.11.11 107.77.66.120    ...
 0.093891 11.11.11.11 107.77.66.120  
 0.093901 11.11.11.11 107.77.66.120  
 0.094011 11.11.11.11 107.77.66.120  
 0.140421 11.11.11.11 107.77.66.120  
 0.140509 11.11.11.11 107.77.66.120  
 0.186898 11.11.11.11 107.77.66.120  
 0.233355 11.11.11.11 107.77.66.120  
 0.233463 11.11.11.11 107.77.66.120  
 0.279821 11.11.11.11 107.77.66.120  
 0.326314 11.11.11.11 107.77.66.120  
 0.326419 11.11.11.11 107.77.66.120  
 0.372772 11.11.11.11 107.77.66.120  
 0.372883 11.11.11.11 107.77.66.120  
 0.419273 11.11.11.11 107.77.66.120  
 0.419378 11.11.11.11 107.77.66.120  
 0.458092 11.11.11.11 107.77.66.120  
 0.466553 11.11.11.11 107.77.66.120  
 0.466571 11.11.11.11 107.77.66.120  
 0.513158 11.11.11.11 107.77.66.120  
 0.513180 11.11.11.11 107.77.66.120  
 0.514254 11.11.11.11 107.77.66.120  
 0.559861 11.11.11.11 107.77.66.120  
 0.559893 11.11.11.11 107.77.66.120  
 0.560944 11.11.11.11 107.77.66.120  
 0.606500 11.11.11.11 107.77.66.120  
 0.606517 11.11.11.11 107.77.66.120  
 0.606523 11.11.11.11 107.77.66.120  

    Looking at the list of packets and their timing, I am reminded of stop-and-go driving on a congested road:
  • at around 0.047s, the server sent 2 packets
  • at around 0.093s, it sent 3 packets
  • at around 0.140s, it sent 2 packets
  • ...
    Why can't the server keep sending data to the client?  First, there is the TCP receive window, which determines how much data a host may send without receiving an acknowledgement.  In our case the advertised window started at 14848 bytes and grew over time.  If this were the only limit, the server should have been able to send about 10 data packets in one burst.  So why did it send only 2 or 3 packets at a time?  It turned out the server was also limited by the TCP congestion window.  A typical TCP congestion window only lets a host (the server in our case) send about 2 full-size TCP data segments at the start of a connection; it then grows as acknowledgements come back (the ramp-up known as slow start) and shrinks if packets are dropped.
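
    To make the two limits concrete, here is a small back-of-the-envelope sketch.  The 14848-byte receive window is the value from this capture; the ~1448-byte segment size and the 2-segment initial congestion window are typical values I am assuming for illustration, not numbers read from this particular kernel.

  # The sender may only have min(receive window, congestion window) bytes
  # unacknowledged on the wire at any time.
  mss = 1448                 # assumed full-size segment (typical Linux/Ethernet value)
  rwnd = 14848               # receive window advertised in this capture
  cwnd_segments = 2          # assumed initial congestion window, in segments

  by_rwnd = rwnd // mss                      # ~10 segments
  by_cwnd = cwnd_segments                    # 2 segments
  in_flight = min(by_rwnd, by_cwnd)
  print("receive window allows %d segments, congestion window allows %d, "
        "so only %d segments go out before the server must wait for an ACK"
        % (by_rwnd, by_cwnd, in_flight))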

   So the congestion window on the server side is clearly the more severe limiting factor.  I am sure there are people who don't like it, but it is there for a reason: congestion control.  When the network is congested, it keeps all the TCP sessions moving, albeit a little slowly.  That is much better than every host continuing to blast packets, making the congestion worse until no session moves at all.


    The experiment was carried out at a time when there was not much wireless usage (early Sunday morning), so the latency was pretty low: the round-trip time between the server and my mobile phone was only 46ms.  Even so, downloading the entire HTML page took about 600ms, which works out to roughly 13 round trips.

  During normal operating hours, the round-trip time between a mobile device and the server can be 200ms or more, so the same HTML page download would take more than 2.4 seconds, and it could be slowed down even further by:

  • dropped packets
  • SSL handshake
   On top of that, think about the other work a mobile browser has to do: downloading the JavaScript files, the CSS files and, yes, the image files.  Now I am starting to understand why mobile browsing can be so much slower.
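
   Putting rough numbers on this: the capture above shows the HTML page going out in roughly 13 small bursts, one burst per round trip.  The sketch below simply multiplies that by the round-trip time; the 200ms busy-hour RTT is the assumed figure from the paragraph above, and the closing comment about extra connections is an assumption for illustration, not something measured here.

  # Data packets per round trip, read off the capture above (13 bursts, 29 packets).
  bursts = [2, 3, 2, 1, 2, 1, 2, 2, 2, 3, 3, 3, 3]

  for rtt in (0.046, 0.200):        # measured quiet-hour RTT and assumed busy-hour RTT
      page_time = len(bursts) * rtt # one burst of data per round trip
      print("RTT %3.0f ms: %d packets in %d round trips, about %.1f s for the HTML alone"
            % (rtt * 1000, sum(bursts), len(bursts), page_time))

  # Every additional resource fetched over a fresh connection pays the TCP
  # handshake (one more round trip) plus its own congestion-window ramp-up,
  # and an SSL/TLS handshake would add a couple more round trips on top.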

   After this experiment, I not only have a better understanding of why mobile web browsing is slow, I also appreciate the network sniffer, Wireshark, even more.  I hope Wireshark becomes a good friend of every performance professional.
