Aquw VettelS 776 wrote:
I find it hard to believe that simply modifying the hardware in a router can have such drastic effects. Surely this must have been known before- electronics and technology companies spend millions on R&D, and now some guy at a university has just stumbled across something that's been under our noses for years? There must be a reason why this doesn't already exist, at least not as far as the consumer is concerned- whether it's financial or technological problems. This simply must have been known about before.
That's true; I read another article on it a few days ago. It says some companies already use this technique of reserving wavelengths of light. I'm not sure how PCWorld managed to write a longer article containing fewer facts.
The new technology the PCWorld article fails to mention is a protocol that lets those reserved wavelengths be reassigned more quickly to adapt to how much traffic is coming through. That would mean data no longer has to be stored in the router's RAM while it waits to be forwarded, since the router can't both send and receive on the same wavelengths that are reserved for the particular client it is serving.
Now, a new protocol could normally just be rolled out with a firmware update, but the real innovation here is that the data is never stored in RAM: it just flows through, with its wavelength switched to one not in use on the other end. That means current routers would have to be replaced with new hardware that can do this. Though you're still right that it sounds a little fishy that no one thought of modifying the hardware like this before.
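To make the buffering point concrete, here's a toy Python sketch (every name here is made up for illustration; real optical switching happens in hardware at the physical layer) contrasting classic store-and-forward buffering with a cut-through wavelength remap:

```python
# Toy contrast: store-and-forward vs. cut-through wavelength switching.
# All function names and the wavelength map are hypothetical illustrations.

def store_and_forward(packet, ram_buffer, out_wavelength):
    """Classic router: copy the payload into RAM, then retransmit it."""
    ram_buffer.append(packet["payload"])  # payload sits in RAM while waiting
    return {"wavelength": out_wavelength, "payload": ram_buffer.pop()}

def cut_through(packet, wavelength_map):
    """Sketch of the new idea: never buffer, just remap the wavelength."""
    # Look up a free outgoing wavelength for this incoming one and pass
    # the payload straight through without storing it.
    return {"wavelength": wavelength_map[packet["wavelength"]],
            "payload": packet["payload"]}

packet = {"wavelength": 1550, "payload": b"data"}
print(cut_through(packet, {1550: 1552}))
# {'wavelength': 1552, 'payload': b'data'}
```

The point of the second function is that nothing is ever appended to a buffer: the payload goes out the moment it comes in, just on a different wavelength.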
"We have these reserved wavelengths that aren't being used to capacity and are wasting us money, let's just leave them how they are." o.O

If all ISPs updated their hardware to this, we would see a small improvement in response times for websites far from where you live, because of the large number of hops through various routers. Say it could save 10 ms per router on average (that's probably a really high estimate, but I don't really know); after 20 hops, your savings would be 0.2 seconds, which would be noticeable. "1000x faster" might sound really fast, but it's just a relative number with no meaning when we don't know the actual current values. Going from 10 ms to 10 µs is 1000x faster, but over one hop it's not at all noticeable to a human.
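The back-of-the-envelope numbers above, as a quick Python check (the 10 ms figure is the guess from the paragraph, not a measured value):

```python
# Guessed per-hop saving from the paragraph above, not a measurement.
savings_per_hop_s = 0.010   # 10 ms saved per router (assumed)
hops = 20
total_saved_s = savings_per_hop_s * hops
print(f"{total_saved_s:.1f} s saved over {hops} hops")  # 0.2 s

# The "1000x faster" claim over a single hop: 10 ms -> 10 µs.
one_hop_before, one_hop_after = 0.010, 0.000010
print(f"{one_hop_before / one_hop_after:.0f}x")  # 1000x, yet imperceptible
```

The ratio is huge, but the absolute one-hop difference (just under 10 ms) is well below what a human can perceive; the savings only add up across many hops.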
The main bottleneck of the internet right now is at the end of the hops: the server you are accessing. The time a server spends generating a page and sending it is often longer than the time it takes your connection to reach the server. Processors need to get faster, hard drives need to get faster and bigger, and servers need more RAM to cache commonly used data. It's possible right now to build a really fast server with tons of RAM and lots of solid-state drives, but the cost is too high for the average website owner. The huge companies can do it, but because of the massive amount of data they hold, it's split across many servers, and pulling all that data together can still be slow.
Online gaming has the most potential to benefit from this, but since games have to be optimized for the average internet user, there will be no improvements anytime soon. There are two types of hosting used for gaming right now: dedicated servers and peer hosting. Dedicated servers are typically better because everyone has to connect to them and all actions are handled equally. With peer hosting, one person from the group hosts and everyone else connects to them, so the host has a noticeable advantage: there is no delay waiting for their actions to reach the hardware that processes the responses. Either way, both types of hosting have to cater to the player with the worst connection and compensate for them. There are too many people out there whose connections would have been considered awful in 2001, never mind how awful they look now.
I think the best type of hosting for gaming would be distributed peer hosting, where no single person is the host: everyone would be a "host". All information would be sent from everyone to everyone and checked by everyone. I imagine this would be really difficult to program, but I think it would eliminate most types of lag and cheating, because every client evaluates the data sent to it. The bandwidth required for this kind of setup would be far too much for the majority of the internet population at the moment, though.
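A rough sketch of how that mutual checking could work, assuming a deterministic game state (player names and the cheat are made up for the example): each peer applies every action to its own copy of the state, then peers compare state hashes, and anyone whose hash disagrees with the majority gets flagged:

```python
import hashlib
from collections import Counter

# Hypothetical sketch of fully distributed hosting: every peer simulates
# every action locally, then the peers vote by comparing state hashes.

def apply_actions(state, actions):
    # Deterministic update: actions applied in a fixed (sorted) order so
    # every honest peer ends up with an identical state.
    for player, move in sorted(actions.items()):
        state[player] = state.get(player, 0) + move
    return state

def state_hash(state):
    return hashlib.sha256(repr(sorted(state.items())).encode()).hexdigest()

# Three honest peers and one cheater who slipped in an extra action.
actions = {"alice": 1, "bob": 2, "carol": 3}
honest = [apply_actions({}, actions) for _ in range(3)]
cheater = apply_actions({}, {**actions, "dave_cheat": 99})

hashes = [state_hash(s) for s in honest] + [state_hash(cheater)]
majority_hash, _ = Counter(hashes).most_common(1)[0]
flagged = [i for i, h in enumerate(hashes) if h != majority_hash]
print("flagged peers:", flagged)  # [3] -> the cheater stands out
```

This is also why the bandwidth cost is so high: every peer has to receive every other peer's actions (and hashes) every tick, instead of a single server doing the arbitration.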
ISPs will not upgrade because they don't have to: they're still making a huge profit from the massive expansion in the number of internet users. Until people start demanding better service by switching to better providers, we will see no change in the state of the internet. The average user is completely happy with the current series of tubes for checking their email and Facebook and watching YouTube videos.
As a side note, none of what I said above about gaming applies to RuneScape. Any lag you see in RuneScape is not a result of your connection; it's the result of the low processing rate of actions. RuneScape has a server tickrate of just under 2 per second, so any action you take can take up to 0.6 seconds to be reflected to others (and to yourself, since the game waits for a response from the server before showing the action to you). Compare that with online shooters, which run at a tickrate of over 50 per second, and on good servers up to 100. But comparing 2000 players to 60 players, it's understandable.
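The tickrate arithmetic, as a quick sketch (this is just the worst-case wait for the next tick, ignoring the network round-trip, which is why the 0.6 s figure above comes out a bit higher than the pure tick interval):

```python
# Worst-case wait before an action is processed, from tickrate alone.
# Network round-trip time is ignored here, so real delays are higher.

def worst_case_tick_wait(ticks_per_second):
    # An action arriving just after a tick waits one full interval.
    return 1.0 / ticks_per_second

for name, rate in [("RuneScape (~2/s)", 2),
                   ("typical shooter (50/s)", 50),
                   ("good server (100/s)", 100)]:
    print(f"{name}: up to {worst_case_tick_wait(rate) * 1000:.0f} ms")
# RuneScape (~2/s): up to 500 ms
# typical shooter (50/s): up to 20 ms
# good server (100/s): up to 10 ms
```

At 2 ticks per second the tick quantization alone dwarfs any router-level improvement, which is the point: faster routing wouldn't help a game whose server only acts twice a second.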
