I'm pretty good with computer software. But hardware and services and such are out of my domain, so excuse my ignorance.

Wireless-G routers offer a maximum bit rate of 54Mbps. Wireless-N routers commonly offer a maximum of 150Mbps or 300Mbps, though the specification itself supports a maximum of 600Mbps.

That's all very good. It really is. But my question is this: why? Where does one even find an ISP that gives you those kinds of speeds? I know the ISP I'm with currently---ZoomTown1---certainly doesn't. I can find two packages from ZoomTown: one that offers 5Mbps down/768Kbps up, and one that offers 768Kbps down/384Kbps up.

So given that my family has always been poor, it didn't surprise me that we were paying for slow high-speed Internet. I wanted to see what providers offered speeds of 54/150/300/600Mbps, and how much those cost. The thing is, my search turned up zero results. As a data point, http://performance.toast.net/fastestisps.asp turned up Rogers as the fastest average ISP, averaging about 4.7Mbps down. That's only an average, so even assuming 100% variance, the cap would be at 9.4Mbps.

I then went to look over at the big guys at Verizon, after hearing so much about their fiber-optic FiOS offerings. However, even with their "Tier-3: Fastest" plan, the cap is at 50Mbps. This is a significant improvement, and I may consider going for a tier-1 or tier-2 line when I have my own place, but even the tier-3 at 50Mbps doesn't touch the 54Mbps cap of Wireless-G.

I'm trying to figure out exactly where these routers that offer upwards of 54Mbps connectivity come into play for residential customers. What possible benefit could there be to splurging on a Wireless-N router for 150/300/600Mbps if your ISP is only offering you a small fraction of that amount to begin with?

If I'm being completely ignorant of some obvious information, I apologize. I'm not the brightest techie in the world, but this has been bothering me for a couple of days now.

1 I did NOT encourage the switch to ZoomTown from RoadRunner.
Kuraudo wrote:
I'm trying to figure out exactly where these routers that offer upwards of 54Mbps connectivity come into play for residential customers. What possible benefit could there be to splurging on a Wireless-N router for 150/300/600Mbps if your ISP is only offering you a small fraction of that amount to begin with?

You'd have more bandwidth to any of your computers (and other equipment) within your LAN. You might also live in Europe, Korea, or Japan, where it's possible to get much higher speeds residentially than in North America.
It's a scheme to get more money. I just got a new computer with a wireless-N adapter that is not compatible with a/b/g so I was forced to upgrade my router.
As Skyspark said, these speeds are for running over your local network. This is nice for people who share files across computers, especially with the increasing popularity of HTPCs and home file servers.

You're going to have a hell of a time finding any ISP who comes close to what your router can push. Keep in mind these standards aren't strictly for home users, either; for better or worse, mid-size and large businesses with busier local networks do run wireless.

Recently, Google announced they'll be running an "experiment" in which they will provide gigabit fiber-to-the-home connections in one or a few select cities across America. It'd be nice to see some real speed competition for ISPs who haven't done much in terms of speed in a decade.
Kuraudo wrote:
I'm pretty good with computer software. But hardware and services and such are out of my domain, so excuse my ignorance.

Nothing wrong with asking questions.

Wireless-G routers offer a maximum bit rate of 54Mbps. Wireless-N routers commonly offer a maximum of 150Mbps or 300Mbps, though the specification itself supports a maximum of 600Mbps.

There are more differences between G and N; I will discuss them in a moment.


OK: I decided to scrap the individual responses and go with one large explanation.

For one, as has been mentioned, higher LAN speeds are very helpful for those sharing files.

Basically, file transfers are the only situation where you will need the higher-end LAN speeds in a 'typical' residential situation. Some server applications could generate lots of LAN traffic, but the likelihood of a normal person needing anything like that is slim.
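To put rough numbers on that, here is a back-of-the-envelope sketch of how long a large file copy takes at each link's headline rate. The 50% efficiency factor is an assumption for illustration; real wireless throughput sits well below the advertised bit rate, and the exact fraction varies.

```python
# Rough sketch: time to copy a 700 MB file across the LAN at various
# link rates. The 0.5 efficiency factor is an assumed fudge for
# protocol overhead; real numbers vary widely.

def transfer_seconds(file_mb, link_mbps, efficiency=0.5):
    """Seconds to move file_mb megabytes over a link_mbps link."""
    effective_mbps = link_mbps * efficiency  # usable bit rate
    return (file_mb * 8) / effective_mbps    # megabytes -> megabits

for name, rate in [("802.11g (54)", 54), ("802.11n (150)", 150),
                   ("802.11n (300)", 300), ("gigabit wired", 1000)]:
    print(f"{name:>15}: {transfer_seconds(700, rate):6.1f} s")
```

Even with generous assumptions, the gap between G and N is minutes versus tens of seconds for a movie-sized file, which is exactly where home users feel the difference.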

SO:
Wireless N is not only faster than Wireless G and its predecessors, it is also more powerful.
Meaning: an N router will out-range a G router by a good amount, and a B router by a dramatic amount.
If you want to broadcast over a large radius, you're going to want a nice large N router with many antennae, for the range, not the speed.

Also, Airjoe mentioned google doing a fiber-optic experiment to some cities.

I don't really know anything about what he's talking about, but I do know that residential fiber connections are available already.

A friend of mine got on the city council of a town near my own (Lincoln, California) when he was 19, a few years ago. They were doing some renovations in the area, and he proposed that while they had the ground dug up they should throw in a whole load of data lines. They went for it, and as a consequence you can get fiber connections through AT&T "UVerse" in Lincoln. For the time being these connections only run at maybe 3x the speed of the best cable, but that is nothing to sniff at.

As the backbone lines continue to get bigger, fiber connections will be allowed to use more of that capacity. The issue is that the Internet is a big web, and all of that capacity is shared. The web needs to get bigger before the speeds can go up.

That being said, it is blatantly obvious that you will not hit LAN speed limits very easily as long as you aren't using old-school 10Mbps LAN cards. This will continue to be the case until a new networking method is developed, IMO.

Another thing to consider in regards to wireless N vs wireless G:
I've heard a lot of talk from techies about 'flooding the airwaves.' Some claim that if you live in a densely populated area, too many overlapping wireless networks will interfere with one another and make your Wireless-G network almost unusable. They say that Wireless N, due to its increased power, will be able to handle more crowded airwaves.

Now, I personally don't think the above is a legitimate concern. The reason is this:
During a recent trip to San Francisco I hopped on a friend's Internet connection. There were over three dozen wireless networks visible, and I was able to connect to theirs and run benchmarks without any problems at all.

If crowded airwaves really will cause performance issues for Wireless-G networks, the threshold is far beyond the density of networks anyone is likely to encounter, in my personal opinion, based on that experience.

TL;DR: Wireless N gives more range, if nothing else.


EDIT: One other thing: if you Google the world's fastest private Internet connection, you'll see a story about some guy with a multi-gigabit connection somewhere. I don't remember the specifics, but it's probably an interesting read.
Well, as previously noted, you are discussing LAN connectivity standards. This is much the same as the IEEE 802.3ab gigabit Ethernet standard, which provides a degree of connectivity that is just not commercially viable in the US/UK over POTS. Its intended province was corporate backbones and sometimes local direct site connections; however, advancing practice means copper gigabit is both cheap enough and easy enough to configure for home LANs.

WAN/POTS connectivity standards fast enough to match are not actually that uncommon, just expensive to implement. A good example of a standard that permits ISPs to leave the "last mile" of copper untouched (a very prominent issue in the UK, where the large majority of service is provided over copper) would be ITU-T G.993.2, or VDSL2. This permits a best-case throughput of 200Mbps. The reason they don't use it is primarily that past a line length of 300m, the tolerance to noise becomes extremely poor, meaning throughput trails off horribly. As a common "last mile" distance in the UK is actually a mile, this is obviously unacceptable. Most other standards require laying new cable, which is pretty much what Verizon etc. do in the US and Virgin Media does in the UK. In the UK in particular this is very costly, due to planning-permission costs. And then, finally, BT's favourite issue: you need a backbone that supports everyone having 100Mbps+, which usually means doubling, tripling, or 20x-ing the exact same technologies you are delivering to them.
In response to Ulterior Motives
That must be a shitty 802.11n chip then. 802.11n is designed to be backwards compatible with 802.11g and 802.11b. By going down in the standards you of course lose the advantages of N, but you shouldn't have any trouble connecting.
Kuraudo wrote:
I'm pretty good with computer software. But hardware and services and such are out of my domain, so excuse my ignorance.

Others have done a good job of explaining it, but let me throw my weight in as well.

From the business side: at my work we use applications that are network-bandwidth intensive. In one application, going from 802.11g wifi to gigabit wired networking takes opening a file from 30 seconds down to 10 seconds. That's a huge change. If 802.11n can get it down to 15 seconds, I'd consider that a huge win. Of course, this is only opening the file; using the file has similar speed issues.

From the personal side: I have 3 PCs in my house, two desktops and one netbook. I use one PC as a media server and backup spot for the other two. My two desktops are wired with gigabit Ethernet. As a result, backups are very quick and I can stream any video without any issues. The netbook, on the other hand, is just running 802.11g, so backups take considerably longer (per byte, of course) and I can't stream video through Samba (standard Windows file sharing) if it is any bigger than standard resolution. I would love to upgrade to 802.11n, but my netbook has 802.11g built in and it isn't easy to upgrade. Also, I love my gigabit-LAN + 802.11g router. When I bought it, it was the only one on the market that had both (and I paid as much for it as I would have buying a normal router and a gigabit switch).
In response to Danial.Beta
I have the same router setup as you do: wireless-G (which only serves my Wii at the moment) and 100Mbit wired LAN. I use the wired portion to stream movies to my Xbox 360 through various media servers, and it can stream HD without issue.
In response to Jotdaniel
Well, 100Mbit can do pretty well for streaming video, but gigabit is a must if you are crazy like me and move several gigs across the network on a daily basis. If 10Gb were a viable option today, I'd be all over it. Nothing like moving files faster than my hard drive can write them.
In response to Danial.Beta
Yeah, I don't know what I was thinking there; missed a 0 or something.
In response to Airjoe
Airjoe wrote:
Recently, Google has announced they'll be running an "experiment" in which they will provide gigabit fiber-to-the-home connections in one or a small few select cities across America.

We are only a month past the beginning of April. Are you sure they didn't announce that on April 1st? Google does really like playing April Fools' pranks. It doesn't seem reasonable for them to install fiber optic to every home unless they start in a rich neighborhood. The only reason we can keep stringing up new electric lines is that the infrastructure for running them everywhere is already there, which it obviously isn't for fiber optic.

If this is true, that'd be awesome.
In response to Danial.Beta
Danial.Beta wrote:
Nothing like moving files faster than my harddrive can write them.

Don't forget that distance is a factor too. The farther you go from the CPU, the longer everything takes, even if the bandwidth is higher.

Also, don't forget that you generally don't get the speed that's actually listed on the device. I certainly don't get 100% efficiency out of my router; just because it has 100Mbps written on it doesn't mean that everything actually happens at anywhere near 100Mbps. Mine is old, though, so I'm willing to allow that they might work better now.

For speed, the network is too slow for storage for the same reason memory is too slow for processing. I don't care if you have 16-channel DDR5 (yes, I'm exaggerating for dramatic effect): memory is too far away from the CPU. Local disks are to persistence as cache is to processing.

Of course, I'm not turning my nose up at 10Gbps. Pick one up for me too.

http://www.newegg.com/Product/Product.aspx?Item=N82E16833129168
In response to Loduwijk
They announced this months ago, and have been considering where to hold their experiment for some time. It is far from a joke. Announcement post, project page.

Google has a vested interest in a fast Internet: the faster you can browse, the more ads they can serve, the more money they make. This is why they've been trying to improve Internet speeds for some time, first seen months back with their own public DNS servers. Fiber to the home has been discussed before, and if anyone can do it, Google can.
In response to Loduwijk
Distance doesn't hurt bandwidth; it hurts latency. In the case of moving large files around, 10ms of latency doesn't really get noticed. I don't know about you, but I wouldn't notice an extra 10ms at the beginning of a file transfer.
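The point can be made concrete with a simplified model: total transfer time is roughly one latency delay plus size divided by bandwidth (ignoring handshakes and TCP windowing, which this sketch deliberately leaves out).

```python
# Simplified model: total time ~= one-way latency + size / bandwidth.
# Deliberately ignores protocol handshakes and TCP windowing.

def total_transfer_time(size_mb, bandwidth_mbps, latency_ms):
    """Seconds for a bulk transfer under the simplified model."""
    return latency_ms / 1000.0 + (size_mb * 8) / bandwidth_mbps

# A 1 GB file over gigabit: the latency term is dwarfed by the
# bandwidth term, so adding 10 ms barely changes the total.
fast = total_transfer_time(1000, 1000, 0.05)   # LAN-like latency
slow = total_transfer_time(1000, 1000, 10.0)   # WAN-like latency
print(f"{(slow - fast) / fast:.4%} longer with 10 ms of latency")
```

For a gigabyte-sized transfer, the extra 10ms adds roughly a tenth of a percent to the total, which is why bulk copies care about bandwidth, not latency.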

Sure, networks never hit their listed speeds, but neither do SATA controllers. SATA just recently moved to 6Gbps, but mine is only 3Gbps.
In response to Loduwijk
Actually, by throughput the network is plenty close. This was best evidenced by my creating a tmpfs on my machine (out of 2GB of DDR2-800) and another on a second machine on my LAN (again 2GB of DDR2-800), then loading a 1780MB file into the tmpfs of the latter. The route consisted of 4 hops, through a mix of off-the-shelf Netgear 1Gbps switches and home-built Linux PC routers (2 and 2 respectively). I transferred the file from that remote tmpfs to the local one (via my own application) and let Linux kindly report the throughput for me (to avoid inaccuracies doing it myself).

The transfer completed in 14.27 seconds, giving a throughput of 124.8 MB/s. End-to-end latency between the machines sat at 0.05ms; non-sequential read/write access on DDR2-800 is about ... 20ns, I think, so it would not impact that latency. The disk in my machine reads/writes (locally) at 90/55 MB/s with a mean non-sequential seek of 8ms. 2GB of DDR2-800 at the end of 100m of CAT6 represented a storage solution superior in pure performance to my local SATA2 disk. Much like Danial's scenario, we always (except in the case of the RAID 5 array) wrote files at the best speed the target hard drive could manage, as this was our bottleneck.

The throughput scenario would have been much the same had we extended each hop to the reasonable maximum cable length for copper IEEE 802.3, 300m. We could have covered 1200m of distance with an effective throughput close to the 125 MB/s maximum permitted by the standard, cable quality notwithstanding. The difference is latency, which would have been of the order of ~0.25ms, each 300m section not being permitted a latency worse than 0.05ms, as this affects the protocol's ability to signal errors on the wire. Windows blows at least 0.5ms in its network stack simply processing the ICMP packets, which makes this scenario on Windows another story.
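The headline figure in the post above checks out arithmetically; a few lines verify that 1780 MB in 14.27 seconds lands essentially at the gigabit-Ethernet payload ceiling of 125 MB/s:

```python
# Sanity-checking the benchmark above: 1780 MB transferred in 14.27 s,
# compared against the theoretical payload ceiling of gigabit Ethernet.
size_mb = 1780.0
seconds = 14.27
throughput = size_mb / seconds   # achieved MB/s
gigabit_ceiling = 1000 / 8       # 1 Gbps expressed as MB/s
print(f"{throughput:.1f} MB/s, {throughput / gigabit_ceiling:.1%} of line rate")
```

Hitting over 99% of line rate is only possible because both ends were RAM-backed tmpfs; with ordinary disks, the drives themselves would have been the bottleneck, as the post notes.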
In response to Stephen001
That sounds like an awesome tool for testing a network. You should package it up with a fancy GUI and release it. I would love to be able to test the real-world speeds of my network without worrying about the limitations of the protocol used to send the file (SMB, for example, is a bad way to test).
In response to Danial.Beta
netio already does the testing bit more competently than I do, although it can be undone by MTU: http://freshmeat.net/projects/netio/
In response to Danial.Beta
Danial.Beta wrote:
Distance doesn't hurt bandwidth, it hurts latency.

Which is what I said. That was the point of my post, though I never used the word latency.

In the case of moving large files around...

Since Stephen actually ran some tests and posted some numbers, I will give in. I am surprised at his results, though I am certainly not bummed about it; on the contrary, I am glad to have been shown inaccurate, and with the trends in networking my statement will only become more inaccurate over time.

Let's all get small solid-state drives to install our operating systems on, upgrade to at least 10Gb networks, and set up proper file servers. Now I am salivating; I really don't like waiting for my computers to start up or load anything.
Kuraudo wrote:
Wireless-G routers offer a maximum bit rate of 54Mbps. Wireless-N routers commonly offer a maximum of 150Mbps or 300Mbps, though the specification itself supports a maximum of 600Mbps.

That's all very good. It really is. But my question is this: why? Where does one even find an ISP that gives you those kinds of speeds?
<small>1 I did NOT encourage the switch to ZoomTown from RoadRunner.</small>

Our area is going to be offering service close to that sometime soon.

Road Runner (through Bright House) in Central Florida is adding a new plan this summer called "Road Runner Lightning". It's 40Mbps down (link) / 5Mbps up (I believe; I have not seen this listed anywhere for CFL yet, so I took it from Tampa's RR service, which has Lightning at 40/5).