A recent post on Ajaxian from Bikin Chiu of the Gmail Mobile team discussed reducing latency and code loading. I didn't have time to compose a thorough, well-edited response at the time, so I'm expanding here on what I wrote earlier.
I think our download size and code-execution speed are decent. The problem we should look at now is not only reducing latency through fewer HTTP requests, but also building larger pipes and clustering more nodes around high-traffic hubs. The internet was supposed to be location-agnostic, but it clearly isn't for prime-time web applications. We're still using methods we could have come up with in the '90s. We're not living in the '90s anymore: browsers are new and fast, clean and powerful, but our network is seriously underpowered and ready for an overhaul. And not just the network in the United States but across the world; we need a new backbone, or perhaps better yet, a completely new type of foundation.
Nevertheless, it's great that the Gmail Mobile team made loading performance ten times better by reducing their HTTP requests. My blog requests a horrid number of resources because of my theme and various plugins. My blog is simple; Gmail, on the other hand, is a monster of a web application, so I can only imagine the difference it really makes there. I recently read that MooTools supports lazy loading via client- and server-side methods. This is something many people have been asking for. Once that kind of modularity was clever; now it's not much of a big deal.
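The core of the lazy-loading idea can be sketched in a few lines: defer an expensive load (say, fetching a script or module) until first use, and cache the result so it only ever happens once. This is a generic sketch of the pattern, not MooTools' actual API; `loadFn` is a hypothetical loader you would supply yourself.

```javascript
// lazy(): wrap a loader function so the expensive work runs only
// on first call; subsequent calls reuse the same cached promise.
function lazy(loadFn) {
  let promise = null; // cached result; null until first use
  return function () {
    if (!promise) {
      // Promise.resolve().then(...) defers the load and normalizes
      // sync/async loaders into a single promise.
      promise = Promise.resolve().then(loadFn);
    }
    return promise;
  };
}

// Hypothetical usage: nothing is fetched until getEditor() is called.
const getEditor = lazy(() => import("./editor.js"));
```

In a browser, `loadFn` might instead inject a `<script>` tag and resolve on its `load` event; the memoization shell stays the same either way.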
Latency will continue to grow unless we make some changes outside of software and focus on hardware for a moment. I'm scared of hardware, and many of us are, but that is where I believe our bottleneck lies.