User:Quizzical/Hardware

So the computer you've got can handle Guild Wars just fine. Great. But Guild Wars 2 is coming, as are various other games, and everyone needs to replace a computer sooner or later. This post is my advice on what to buy. It's mostly copied and pasted from this thread; since it's my own writing, it's not plagiarism. I'll probably keep that thread more up to date than this page, though.

-

One big factor is whether you can assemble a computer from components yourself, or need to hire an OEM to do it for you. Assembling a computer from components isn't as hard as you might think if you haven't done it, as a motherboard will come with very detailed instructions. If you don't know what components to get, that's what this thread is for.

If you want to go the OEM route, Cyber Power PC and iBuyPower sell the cheapest gaming computers in the United States. I don't know where to look if you buy a computer in a foreign country, but the parts will mostly be the same. They'll give you the option of getting cheap junk components at a low price tag, or paying more for something better. The "better" doesn't only mean "faster performance", but can also mean "greater reliability". Big name OEMs like Dell and Hewlett-Packard are better thought of as tech support companies. If you buy from a company like that, you pay a big price premium for the hardware you get, and in exchange, they'll help you set it up and give you good tech support if something goes wrong. They cater more to people who are clueless about computers.

I'd strongly recommend getting a desktop and not a laptop. A laptop costs about twice as much as a desktop with comparable hardware. A gaming laptop will have a very short battery life, so you'll have to keep it plugged in most of the time anyway. Laptops are hard to cool, making them more likely to have problems than desktops, and it's also much harder to fix any problems that do arise. Laptops are also much harder to upgrade: if you discover a couple of years down the road that your computer would be perfectly good if only one component were better, you can upgrade a desktop, while with a laptop, you'd probably have to buy an entirely new computer. Desktop computers will also have far better video card drivers, regardless of whether you go with Nvidia or AMD.

Processor
This is really a question of what you're willing to pay. If your budget includes at least $200 for a processor, go with Intel. A Core i5-750 is a very nice quad core processor for about $200.

A Core i7-930 or Core i7-860 is the next step up, at a little under $300 for the processor. A Core i7-860 is a little better processor than a Core i7-930, but they take different chipsets and processor sockets, so the choice between these two is really a choice of chipset. Both processors have four cores plus hyperthreading, which can improve performance a bit in programs that would be able to take advantage of eight cores if you had them. A Core i7-860 will perform significantly better than a Core i7-930 in single-threaded programs, and it also uses far less power.

For a money-is-no-object processor, the Core i7-980X is the top-of-the-line six-core processor, and costs $1000. I dare say that in Champions Online, you'll never notice a difference between any of these Intel processors, and it will be at least several years before the cheaper Core i7s will struggle with games, other than those of the badly-coded, single-threaded variety.

For under $200, AMD offers better value for the money at any given price point than Intel. AMD's modern lineup consists of Athlon II and Phenom II processors. An Athlon II is just a Phenom II with the L3 cache removed. On average, a Phenom II will perform about 10% faster than an Athlon II with the same clock speed and number of cores.

AMD's naming convention makes it easy to tell what you're getting. If the processor ends with X2, it has two cores; X3 is three cores; and X4 is four cores. AMD's new six-core "Thuban" processors are a Phenom II X6. They are also readily distinguishable by having a T at the end, as in Phenom II X6 1090T. Incidentally, the T apparently stands for "turbo", not "Thuban".

An Athlon II X4 costs about $100 and will give you plenty of performance for most games for quite some time. Various Phenom II X4 processors cost around $150 or so, depending on the clock speed. An Athlon II X3 can be had for around $75, and an Athlon II X2 will set you back about $60. Going with a processor slower than that is a bad idea.

For comparison, an Intel processor of the Nehalem/Westmere architecture tends to perform about 20% faster than a Phenom II with the same clock speed and number of cores. AMD tends to clock their Phenom II processors higher than most Intel processors to compensate, and AMD's best quad-core processor, the Phenom II X4 965, is roughly competitive with Intel's Core i5-750.
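As a rough sanity check on that "roughly competitive" claim, here's the arithmetic; the base clocks are from memory, so treat them as approximate:

```python
IPC_RATIO = 1.2        # Nehalem/Westmere vs Phenom II at the same clock, per the estimate above

i5_750_ghz = 2.66      # Core i5-750 base clock (approximate, from memory)
x4_965_ghz = 3.40      # Phenom II X4 965 clock (approximate, from memory)

# The i5-750's "Phenom II-equivalent" clock speed:
equivalent = i5_750_ghz * IPC_RATIO
print(round(equivalent, 2))   # 3.19 -- close to the 965's 3.4 GHz
```

Which is why AMD has to clock higher to keep up, and why the two chips land near each other in practice.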

It's because of this better performance per clock cycle that Intel's Core i5/i7 processors tend to be better than AMD's processors in the $200-$300 range. Six Thuban cores do beat four i7 cores in programs that can use all six cores, but games often can't, and even for those that can, four cores will be plenty to get good frame rates for years to come. As such, I can't recommend Thuban over a Core i7 for gamers building a new computer, though I'd recommend Thuban over a Core i7 in the same price range for someone who uses programs that do scale to six cores, such as some video editing programs.

If you want an Intel processor for well under $200, have a look at the Pentium G6950 or the various Core i3 processors. They're all dual core processors, and not as good of a deal on performance per dollar as an Athlon II X4 quad core. Better performance per core just isn't able to make up for the difference between four cores and two, except in programs that can't use more than two cores. If future games are processor-bound, they'll probably be able to take advantage of at least four cores, as quad-core processors will be pretty mainstream in a few years. A Core i3 does use considerably less power than an Athlon II, but I don't think it's a big enough difference to matter much.

Intel's processor naming scheme is an incomprehensible mess. I'd advise trying to make sense of it by ignoring the i3/i5/i7 part of the name and looking at the number after it. 920-975 is a "Bloomfield" quad core with hyperthreading. 980X is a "Gulftown" six core processor with hyperthreading. 8** is a "Lynnfield" quad core with hyperthreading. 7** is a "Lynnfield" quad core with no hyperthreading. 6** is a "Clarkdale" dual core with hyperthreading that is massively overpriced. 5** is a "Clarkdale" dual core with hyperthreading but no turbo boost, so they're really not that fast. A Pentium G6950 is a "Clarkdale" dual core with no hyperthreading and no turbo boost.
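The decoding rules above can be condensed into a small helper. This is my own illustrative sketch, not anything official from Intel:

```python
def decode_intel(model):
    """Return (die, cores, hyperthreading, turbo boost) for a Nehalem/Westmere-era
    Intel model number, following the rules above; the i3/i5/i7 prefix is ignored."""
    if model.endswith("X"):                     # 980X
        return ("Gulftown", 6, True, True)
    n = int(model)
    if 920 <= n <= 975:
        return ("Bloomfield", 4, True, True)
    if 800 <= n < 900:
        return ("Lynnfield", 4, True, True)
    if 700 <= n < 800:
        return ("Lynnfield", 4, False, True)
    if 600 <= n < 700:
        return ("Clarkdale", 2, True, True)
    if 500 <= n < 600:
        return ("Clarkdale", 2, True, False)    # no turbo boost, so not that fast
    raise ValueError("model not covered by these rules: " + model)

print(decode_intel("860"))   # ('Lynnfield', 4, True, True)
print(decode_intel("750"))   # ('Lynnfield', 4, False, True)
```

That a lookup table like this is even necessary says a lot about the naming scheme.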

If an Intel processor ends in "S", that means it's a lower clocked, energy efficient version. Or as one wag put it, the "S" stands for "Slow".

Motherboard
The first thing to check is to make sure you have the right processor socket. A Core i7-9** processor needs an LGA 1366 socket. Any other modern Intel processors need LGA 1156. All current AMD processors use Socket AM3.

The LGA 1366 socket comes exclusively with Intel's X58 chipset. This has 36 PCI Express 2.0 lanes, which is plenty for two video cards. If you're going to have three or four GPUs in a system with an Intel processor, you really should get an LGA 1366/X58 motherboard to have enough PCI Express 2.0 lanes. For two single-GPU cards in SLI or CrossFireX, there's a decent case for going with the X58 chipset as well.

If using multiple Nvidia video cards in SLI, you need a motherboard that is SLI certified. All that SLI compatibility in a motherboard means is that the manufacturer has paid an "SLI tax" to Nvidia to get Nvidia to have their drivers enable SLI on that particular motherboard. If using multiple AMD video cards in CrossFireX, all you need is to make sure there are two PCI Express x16 slots, and pretty much all LGA 1366/X58 motherboards do.

The LGA 1156 socket can be paired with Intel's P55, H55, or H57 chipsets. The H55 and H57 chipsets are rather low end, and designed to make use of the integrated graphics built into "Clarkdale" processors. Integrated graphics are bad for gaming, of course, and Intel's integrated graphics are much worse than Nvidia's or AMD's. These chipsets are really targeted at businesses that don't need anything more than Intel's low end integrated graphics.

If you get a Core i5-7** or a Core i7-8**, you need a P55 chipset. If you get a Core i3 against my advice, you can still pair it with a P55 chipset motherboard, though that won't let you use Intel's horrible integrated graphics.

If getting an AMD processor, make sure you get a socket AM3 motherboard, of course. Some motherboards take DDR2 memory and some take DDR3; you should get one that uses DDR3 memory, as that's the modern standard. AMD's 785G chipset is their low end integrated graphics, and is in the process of being replaced by their newer 880G chipset. 890GX is their (relatively) high end integrated graphics. Neither of those matters much if you're not going to be using integrated graphics. AMD's 770 chipset is appropriate for a cheap AMD motherboard. The 770 chipset is in the process of being replaced by a newer 870 chipset, but at the low end, I'd expect 770-based motherboards to remain cheaper for a while to get rid of inventory.

AMD's high end 790FX chipset has more PCI Express lanes, making it the right chipset to get if you're going to have three or four GPUs, and it will give you somewhat better performance than a 770 chipset if you have two AMD video cards in CrossFireX. 790FX is being replaced by 890FX, and the latter is probably a better buy if you're going CrossFireX and it's out by the time you buy a motherboard. You can still get decent performance on a 770 chipset motherboard from two cards in CrossFireX if the motherboard has two PCI Express x16 slots that can run at x8/x8.

In order to run SLI on an AMD motherboard, you'd need an Nvidia nForce chipset. This is a bad idea unless you're actually going to use multiple Nvidia video cards in SLI, as passing the PCI Express lanes through an extra chip adds latency, and will give you worse performance for a single card than if the extra chip weren't there at all.

As for a motherboard brand, Asus and Gigabyte are the biggest name brands. EVGA makes some good motherboards, too, but they're expensive. MSI is another big name brand. After that, it's mostly cheap junk, though there are some decent motherboards from ASRock, Biostar, and maybe a few others.

Asus and Gigabyte have added USB 3.0 and SATA 3 (officially SATA 6 Gbps, and they hate it if you call it SATA 3, which everyone else does) to some of their motherboards, which can be nice for future compatibility. USB 3.0 will make a huge difference if you plan on adding an external hard drive. SATA 3 probably only matters if you're going to add a good solid state drive in the future that can go faster than SATA 2 allows.

With any given socket and chipset, higher end, more expensive motherboards tend to add more features than lower end ones. That only matters if you'll use the extra features, of course. One common thing is more phases in the processor power supply (the small black boxes in a line or two next to the processor socket), which allows the motherboard to deliver more power to the processor. Anything beyond about eight phases doesn't make a bit of difference unless you're going for an extreme overclock (as opposed to a moderate overclock). Some will add more expansion slots, more memory slots, more USB slots on the back panel, or whatever. At minimum, make sure there is a PCI Express x16 slot and four memory slots.

Video card
The short version of this is that if you're going to spend over $150 on a video card, I'd recommend getting a Radeon HD 5770, 5850, or 5870, depending on your budget. And yes, I'm aware of the GeForce GTX 465, 470, and 480, and recommending that you not buy them. They're a heat-related card failure waiting to happen, at least with the reference cooler. They're also loud to the point of being obnoxious. You can pay more for a better cooler, but they're still not a good value for the money, unless you really need some of Nvidia's special features.

For under $150, it depends on a lot of factors, including what happens to be available at a discount that day. Here are the modern mid-range and low-end video cards, as well as a price on each that I'd regard as being decent value for a 512 MB version and the thermal design power (TDP) for each card:

GeForce GT 220 (128-bit DDR3): $50, 58 W
Radeon HD 4670 (128-bit DDR3): $60, 70 W
Radeon HD 5550 (128-bit DDR3): $60, 39 W
Radeon HD 5570 (128-bit DDR3): $70, 39 W
GeForce GT 240 (128-bit GDDR5): $80, 69 W
GeForce 9800 GT (256-bit GDDR3): $80, 105 W
Radeon HD 5670 (128-bit GDDR5): $90, 64 W
Radeon HD 4770 (128-bit GDDR5): $90, 80 W
Radeon HD 4850 (256-bit GDDR3): $100, 114 W
GeForce GTS 250 (256-bit GDDR3): $100, 150 W
Radeon HD 5750 (128-bit GDDR5): $120, 86 W
Radeon HD 5770 (128-bit GDDR5): $150, 108 W

The prices above are not necessarily the prices at which I'd expect to find the video cards for sale. Rather, they're prices at which I'd regard the card as being a reasonably good value, and they give a rough sense of how good the various cards are relative to each other. For example, if they're the same price, I'd recommend getting a Radeon HD 5670 over a GeForce GT 240, but it's not a huge difference, so if the latter is $20 cheaper, it's a better deal. I did compensate a bit for power consumption and features in listing the prices. For example, a GeForce 9800 GT is a little faster than a GeForce GT 240, but the latter will use less power, so it will cost you less over the lifetime of the card even if you pay the same price up front.

I listed the memory type and memory bus width for all of the cards, mainly because there are sometimes cards that use an inferior type of memory or a narrower memory bus, and it cripples performance. The difference between DDR3 and GDDR3 memory doesn't particularly matter, but both are a lot faster than DDR2 or GDDR2 memory, and DDR2 should be avoided. You may occasionally see cards with GDDR4 memory, though those seem to mostly be off the market; that's basically equivalent to GDDR3. GDDR5 memory is much faster than DDR3 or GDDR3, so if a video card is supposed to have GDDR5 memory and has DDR3 instead, that will cripple the card.

The most common cases of cards with inferior memory are a GeForce GT 220 with DDR2 rather than DDR3, and a GeForce GT 240 with DDR3 instead of GDDR5. Avoid the versions of cards with the inferior memory type (or subtract $20 from the price listed above, if you think you may have found an awesome deal), as they won't perform nearly as well.

Occasionally there are cards that disable a memory channel, leaving a narrower memory bus width than the card ought to have. Avoid such cards, as that will kill the memory bandwidth and cripple performance.

Of course, one might care more about performance in Champions Online in particular than in some theoretical "average" game. If an Nvidia card works properly, it tends to perform somewhat better in Champions Online relative to AMD cards than its position on that list would indicate. But that's a huge "if", as Nvidia drivers don't always play nicely with this game. If they don't work properly, the game might give you terrible performance, or be unplayable entirely. Sometimes performance varies wildly from one Nvidia driver to another, with different players getting contradictory results as to which drivers work better.

The long version is that in most games, performance is mainly determined by how good a video card you have, so this matters greatly. There are four basic components to video card performance: amount of video memory, memory bandwidth, computational GFLOPS, and features/API compatibility.

For the amount of video memory, your card either has enough video memory for the game you're playing, or else it doesn't. If you have more video memory than you need, it doesn't matter how much more you have than you need; having too much won't affect performance. If it doesn't have enough video memory, it has to borrow some system memory, which is really slow, and can kill your game performance. How much is enough varies considerably from one game to the next, and also varies by the graphical settings and monitor resolution you use within a single game. Basically, if you use a monitor resolution of 1280x1024 or lower, 512 MB is enough. If you play games at a higher resolution, you should probably get a 1 GB card to be safe, even though 512 MB will often be enough at 1680x1050 or so.

Computational GFLOPS and memory bandwidth basically go together. The latter is how fast the card can move data from the video memory to the GPU chip to process it, while the former is how fast it can actually do the processing. What you really want here is balance, though more of both is better, of course. A card with huge amounts of memory bandwidth but a very weak GPU chip can only go as fast as the GPU chip can crunch data, so most of the memory bandwidth will be unused. An incredibly powerful GPU chip on a card without much memory bandwidth will spend most of its time idle, waiting for the needed data from memory to arrive.

On the memory bandwidth side, different cards will use DDR2 memory, DDR3, GDDR3, or GDDR5. Some will use a 64-bit memory bus, or 128-bit, or 256-bit, or 384-bit, or whatever. While those can have a considerable impact on how expensive it is to build a card, they don't affect performance directly. Rather, they're just inputs that feed into the theoretical memory bandwidth of the card, which is the number that matters. It doesn't matter for performance reasons whether a theoretical card has 400 MHz DDR2 on a 384-bit bus or 1.2 GHz GDDR5 on a 64-bit bus, as they'll give the same 38.4 GB/s of memory bandwidth. (The latter would be a lot cheaper to build than the former, though.)
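The arithmetic behind that equivalence is simple: bandwidth is transfers per second times bus width in bytes. The transfers-per-clock multipliers below are standard for these memory types:

```python
# DDR2, DDR3, and GDDR3 move data twice per clock; GDDR5 moves it four times.
TRANSFERS_PER_CLOCK = {"DDR2": 2, "DDR3": 2, "GDDR3": 2, "GDDR5": 4}

def bandwidth_gbs(mem_type, clock_mhz, bus_bits):
    """Theoretical memory bandwidth in GB/s."""
    transfers_per_sec = clock_mhz * 1e6 * TRANSFERS_PER_CLOCK[mem_type]
    return transfers_per_sec * (bus_bits / 8) / 1e9

# The two hypothetical cards from the paragraph above:
print(bandwidth_gbs("DDR2", 400, 384))   # 38.4
print(bandwidth_gbs("GDDR5", 1200, 64))  # 38.4
```

This is also why the crippled variants discussed earlier (DDR3 where GDDR5 belongs, or a disabled memory channel) hurt so much: either one cuts that number roughly in half.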

GFLOPS (billions of floating point operations per second) are comparable between two cards of the same architecture, but not so much between cards of different architectures, because it's much easier to use most of the theoretical computational power in some cards than others. In particular, comparing the raw GFLOPS number from an AMD card to an Nvidia card is a bad idea. A Radeon HD 5850 can do over 2 TFLOPS (1 TFLOPS = 1000 GFLOPS), while a GeForce GTX 285 can barely do over 1 TFLOPS, but the former card tends to perform only slightly better than the latter.

Finally, we get to features and compatibility. Games need video cards to be able to run whatever code the game uses. If a card is compatible with the necessary APIs, the game runs. If not, the game may flatly refuse to run at all, or may run but with some graphical features disabled because the video card doesn't know how to do them. Most games use some version of DirectX, though some use OpenGL instead. The Radeon HD 5000 series and GeForce GTX 400 series are compatible with DirectX 11; any other recent video cards are not, but can do DirectX 10 (and lower). Some cards will list DirectX 10.1 as their DirectX version; in practice, the difference between 10.1 and 10 doesn't matter.

There are very few DirectX 11 games out, but most games that come out a few years from now will probably use DirectX 11. Most games today only use DirectX 9 (or rather, 9.0c); DirectX 10 never really caught on because the features brought too big of a performance drop for not enough of an image quality improvement, and because many gamers still used Windows XP, which can't do DirectX 10 at all, so making a DirectX 10 version of a game requires also making a DirectX 9 version separately, which is expensive. DirectX 11 avoids both of these problems; tessellation is the big killer feature of DirectX 11, and it can allow lower polygon models plus tessellation to actually improve performance, while also improving image quality. DirectX 11 also allows programmers to code a game just once, and if someone has a DirectX 9 or 10 video card, it will run DirectX 11 code, and merely disable whatever features the card can't handle.

Both AMD and Nvidia are pushing GPGPU, that is, ways of taking highly parallel code that would traditionally have run on a processor, and running it on a video card instead. Nvidia is pushing their proprietary CUDA approach, which is unlikely to catch on, because anything coded for CUDA won't run on most computers, as most computers don't have a high end Nvidia video card. AMD is mostly pushing OpenCL, which is more likely to catch on, as something written in OpenCL can run on processors or video cards made by any company. AMD's Radeon HD 5000 series supports OpenCL, as do some recent Nvidia video cards, though I'm not sure how far back support goes. It's unclear whether any form of GPGPU will ever catch on in the general consumer space.

Another thing that may move to video cards is physics computations. Nvidia has been pushing this for a while with PhysX, but it hasn't really caught on. AMD is now starting to push this with Bullet and Pixelux. The idea is that video cards are far better at handling highly parallel computations than CPUs, and the computations to track thousands of debris objects are highly parallel. As such, physics computations that are far too much for a processor to handle in real time may run just fine on a video card.

Personally, I think this is a completely inane idea. When you're playing a game, your video card is already fully in use rendering the game. Meanwhile, your processor may be sitting there half idle. So now you want to move more computations off of the processor and onto the video card? That's just nuts.

Yes, GPU physics can do more complex physics than a processor can handle, but that can't be used for anything but eye candy in online games. It's hard enough to keep all players synchronized when you have ten characters moving around. Trying to synchronize ten thousand pieces of debris is completely out of the question. And even for eye candy, it's not that great. Higher monitor resolutions, anti-aliasing, tessellation, longer view distances, and higher frame rates are all far more important to image quality than GPU physics--and even the highest end video cards can't do all of those simultaneously as well as one might like. (If you think your card can handle everything you've thrown at it, that's because you haven't thrown a highly tessellated game at it, in part because they don't exist yet--which is, in turn, because most hardware can't handle it.)

The real solution to GPU physics is to have two video cards: one for the 3D rendering and one for physics. Nvidia has been pushing a separate PhysX card for quite some time. But that's never going to catch on, either, as how many people are going to want to spend an extra $100 for a bit of eye candy that doesn't affect gameplay? Most won't, and then how many game developers are going to spend a bunch of money on fancy physics effects that the overwhelming majority of their players won't be able to see? The handful that Nvidia pays to implement GPU PhysX will (which is where the few PhysX games on the market came from), but that's it.

Another feature that is coming along is 3D--as in, stereoscopic with glasses. My view is that 3D glasses have been a dumb gimmick since the 1950s, and will remain so for the foreseeable future. It's intrinsically too blurry, requires too high a frame rate to look decent, and is too expensive for something you'll likely be sick of in under an hour. There are good reasons why the Virtual Boy bombed. Nvidia has a proprietary solution out, while AMD is working with some third party vendors and will try to support whatever they come up with. Either way, you'll need a 120 Hz monitor--and to turn down graphical settings far enough to actually get 120 frames per second.

Perhaps the final significant feature difference is multi-monitor support. Most cards on the market support two monitors at a time, but only one can be used at a time for 3D rendering. AMD's Radeon HD 5000 series cards support "Eyefinity", which allows you to connect three monitors to a single card, and to spread 3D rendering across all three monitors for a much larger image. This implicitly creates very high monitor resolutions, which can bog down a video card unless you turn graphical settings down somewhat. Some people say it's great for first person shooters, to get some semblance of peripheral vision. In this game, you can get all the peripheral vision you want by zooming out.
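A quick pixel count shows why Eyefinity bogs a card down; 1680x1050 monitors are my example choice here:

```python
# Pixels rendered per frame: one 1680x1050 monitor vs three of them in Eyefinity.
single = 1680 * 1050             # one monitor
triple = (3 * 1680) * 1050       # 5040x1050 spanning three monitors

print(single)            # 1764000
print(triple)            # 5292000
print(triple // single)  # 3 -- triple the pixels per frame, hence the performance hit
```

Three times the pixels means roughly three times the rendering work, which is why graphical settings usually have to come down.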

The last thing to consider is whether to get just one single-GPU card, or whether to go with multiple GPUs in an SLI or CrossFireX setup. My answer to that is that you should get one single-GPU card, unless you're willing to spend a fortune for multiple high-end cards. The alternate frame rendering approach means that two cards in parallel may give you double the average frame rate of a single card, but it's not nearly as good for gameplay as actually getting double the performance from a single card. Perhaps an example will demonstrate.

If you get a Radeon HD 5870 (Cypress), it has twice as much of most things on the chip as a Radeon HD 5770 (Juniper). (There are some exceptions for portions where every Radeon HD 5000 chip has exactly the same stuff from the very low end to the very high end, because it doesn't limit performance at the high end.) Conveniently, they're also clocked the same, at 850 MHz shaders and 1200 MHz memory. So there's a question of whether you'd rather have one Radeon HD 5870 or two Radeon HD 5770s in CrossFireX.

My answer to that is that I'd rather have the 5870, and it's not close. A 5870 will use less power than two 5770s. If you go by the TDPs, it's 188 W versus 216 W. It will also use less power at idle, officially rated at 27 W versus 36 W. The single 5870 will make less noise. It will be much easier to get good airflow, as you won't have one card physically blocking the fan of the other, as is likely to happen if you get two 5770s. This will make the 5870 run cooler. The 5870 will work just fine in a cheaper motherboard, as it needs just a single PCI Express x16 slot, while two 5770s need two PCI Express x16 slots--and both should be wired for at least PCI Express x8 bandwidth, which relatively cheap motherboards typically won't do.

And then there is performance. The 5870 can render a frame in about half of the time of a 5770. That means that a 5870 could get you theoretically double the frame rate of a 5770, but the two 5770s will get about the same average frame rate as the 5870. But the key difference is that a 5870 will render a frame, display it, and then render another frame. Meanwhile, the 5770s can't split a frame and each render only half. Instead, they have to each render a frame completely separately, and send it to the monitor when it's ready.

Let's suppose that the 5870 renders a frame every 20 ms, giving you a smooth 50 frames per second. Let's also suppose that you have vertical sync on, for simplicity. When your monitor goes to grab the most recently completed frame, on average, it finished 10 ms ago. An average portion of that frame was rendered an additional 10 ms before the frame was finished. If your monitor updates at a frequency of 60 Hz, it grabs and displays the most recent frame about every 17 ms, so on average, what it's showing is what it grabbed 8 ms ago. (You could perhaps increase this very slightly if you want to account for the monitor response time). Add those together and at an average moment, what you see on the screen is not what the game world looks like right that instant, but what it looked like 28 ms ago.

Now suppose that the two 5770s each render a frame every 40 ms. They each deliver 25 frames per second. If there's a benchmark (say, /fpsgraph), it would say you're getting 50 frames per second, the same as with the 5870. But it's not at all the same 50 frames per second.

Let's suppose that you get perfect scaling from CrossFire. Every 20 ms, one of the 5770s completes a frame and sends it to be displayed. Thus, the average time since the last frame was completed is 10 ms, the same as with the 5870. The average time since the monitor last grabbed a frame to display is 8 ms, again the same as with the 5870. But because a frame took 40 ms to render, an average portion of the frame was rendered 20 ms ago, not 10. Add these together and what you see is what the game world looked like 38 ms ago, rather than 28 ms. The 5870 isn't just better. It's a lot better.

And that's in the best possible case. Suppose that instead of a 20 ms gap between when the 5770s finish a frame, one finishes a frame, then 10 ms later, the other finishes one, and then 30 ms later, the first finishes another, and so forth. This time, we're alternating 10-30-10-30 rather than 20-20-20-20. Now the average time that the most recent frame has been sitting there before it is grabbed is 12.5 ms, not 10 ms. Now the average time gap between what you see and what the game world actually was is 41 ms for the pair of 5770s. Meanwhile, the 5870 is still sitting there at 28 ms.

If you don't see why that averages to 12.5 ms rather than 10 ms, let's consider the worst possible case. Every 40 ms, the 5770s complete a frame at exactly the same time. One of the frames is discarded as though it weren't even there. While a benchmark will still report that you're getting 50 frames per second, the game will look exactly the same as if you were getting only 25 frames per second. Now the average time delay since the last frame was completed is 20 ms. On average, the screen would show what happened 48 ms ago.
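The bookkeeping in the last few paragraphs condenses into a small calculation. The 8 ms term is the rounded 60 Hz monitor figure used above, and the first term generalizes to uneven frame pacing, which is where the 12.5 ms comes from:

```python
def avg_wait_ms(gaps):
    """Average age of the newest completed frame at a random moment, for a
    repeating pattern of gaps (in ms) between frame completions.
    For even pacing this is just half the gap; uneven pacing pushes it higher."""
    return sum(g * g for g in gaps) / (2 * sum(gaps))

def display_latency_ms(frame_time_ms, gaps, monitor_hold_ms=8):
    """Average gap between the game world and what's on screen:
    time the newest frame waited to be grabbed, plus the average age of the
    frame's contents when rendering finished, plus the monitor hold time."""
    return avg_wait_ms(gaps) + frame_time_ms / 2 + monitor_hold_ms

print(display_latency_ms(20, [20]))      # single 5870: 28.0 ms
print(display_latency_ms(40, [20, 20]))  # two 5770s, perfect pacing: 38.0 ms
print(display_latency_ms(40, [10, 30]))  # uneven 10-30 pacing: 40.5 ms (~41)
print(display_latency_ms(40, [40]))      # worst case, frames collide: 48.0 ms
```

Note that every dual-5770 scenario reports the same 50 "frames per second" to a benchmark; only the latency changes, which is exactly the point.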

And that's assuming that the game even scales from CrossFire at all. Some games aren't designed for it, so the game may just use one card and ignore the other. In that case, it's pretty obvious why a 5870 would be better than two 5770s if the game only uses one of the 5770s. Sometimes you can get it to work even if it doesn't want to work the first time you try it, but it's a major headache. With a single card, you don't have to worry about that.

So why would anyone get multiple cards for gaming? If you're getting the highest end card you can, you do get some performance improvements. With two 5870s instead of one, on average, the time since the last frame was completed could be as little as 5 ms, and at worst would be the 10 ms that you got with only one card. That trims a few milliseconds off of the time delay before what you see. The extra in-between frames also make the animation look a bit smoother.

Another important factor to consider is power consumption. Some people will say they don't care about power consumption at all, but only performance. But you should care about power consumption at least a little. Getting the same performance with less power reduces your electricity bill. It also means your card puts out less heat, which keeps various components inside your case cooler, and this increases their reliability. Lower power consumption means it takes a less capable power supply to power your computer, and that it won't take as nice of a case to keep components reasonably cooled. Given a choice between a card that uses 200 W at load, and another card that gives identical performance but only uses 100 W at load, you should prefer the latter. The only question is by how much you should prefer the latter; if it costs an extra $100 for the reduced power consumption, it's a waste of money to pay that.
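To put rough numbers on the electricity argument (the usage pattern and rate here are my assumptions, not the author's):

```python
def lifetime_power_cost(extra_watts, hours_per_day=4, years=3, usd_per_kwh=0.12):
    """Rough electricity cost of extra GPU power draw over the card's lifetime.
    Assumes 4 hours of gaming per day, a 3-year lifetime, and $0.12/kWh,
    all of which are my illustrative guesses."""
    kwh = extra_watts / 1000 * hours_per_day * 365 * years
    return kwh * usd_per_kwh

# 200 W card vs an identical-performance 100 W card:
print(round(lifetime_power_cost(100), 2))       # 52.56
# GeForce 9800 GT (105 W) vs GT 240 (69 W) from the table above:
print(round(lifetime_power_cost(105 - 69), 2))  # 18.92
```

Under these assumptions, a 100 W saving is worth roughly $50 over the card's life, which is why paying a $100 premium purely for lower power consumption is a waste, while paying the same price for the more efficient card is a clear win.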

The entire Radeon HD 5000 series is far and away the best on the market in performance per watt, at all levels of performance. Other cards on the market tend to be pretty comparable to each other in performance per watt. At a given level of performance, other cards, whether Nvidia or AMD, will use about the same amount of power, while a Radeon HD 5000 series card will use around 30% less. The real reason for this is that most other cards are made on TSMC's 55 nm process, while Radeon HD 5000 series cards are made on a 40 nm process that can do the same things with about 30% less power. Some newer Nvidia cards are made on the same TSMC 40 nm bulk silicon process as the Radeon HD 5000 series, but Nvidia is currently struggling to get that process to work at all, and so far, hasn't demonstrated the sort of power savings that they ought to get from it.

Finally, even if you've figured out what card you want to get, there are a bunch of different models of it from different vendors. AMD and Nvidia don't assemble their own cards and sell them directly to the public; rather, they make the key GPU chip and sell that to board partners, who actually assemble and sell the completed cards. Picking a different brand means getting a different warranty and having to deal with a different company if you need to make good on that warranty. It can sometimes also mean different quality of board construction.

When a card is first launched, AMD or Nvidia usually has a reference design that they send to their board partners, and the partners basically put their own stickers on the cards, but otherwise sell mostly identical cards. As time passes, board partners make various changes to the cards to come up with new models of, for example, a Radeon HD 5770. This goes in two different directions.

One is that board partners try to cut costs to make it cheaper to produce the cards, so that they can still get the proper performance, but sell the cards for cheaper. The most prominent thing is that they'll put a different heatsink and fan on the card that is cheaper to build but can still keep the card adequately cooled. Some cards will get two or three fans; others will be passively cooled with no fans at all, which is a bad idea for a gaming card but fine on a low end card that will never emit more than 20 W. The real goal of board manufacturers on this side is to make something good enough for cheaper.

At the other end, and mostly only on higher end cards, board partners will sometimes use better components and a better cooling system and then charge a premium for the card. Sometimes they'll also clock components higher than the default, and they may bin chips themselves to pick out the chips that can run faster and put them in the more expensive cards. For example, Sapphire has their "Vapor-X" cards, MSI has their "Lightning Edition" cards, Gigabyte has their "Super Overclock" cards, and so forth. All of those should run significantly cooler than a reference card at the same clock speed, and have considerably more overclocking headroom.

Some cards will take one expansion slot, while others will take two. For a gaming card, having the extra room of a two slot cooler really is better. The only real exception is if you're liquid cooling the card, which doesn't need a lot of room for a big heatsink attached to the card, but that's super high end stuff.

Another important way that cards vary is that they have different monitor ports. DVI is the most common monitor port, and has largely replaced D-Sub (also known as VGA). HDMI is also around, but is being phased out, after never catching on in the first place. DisplayPort and Mini DisplayPort are the solution of the future that AMD is pushing. If your card doesn't have the right ports to connect your monitor(s), you can buy adapters, and they're pretty cheap--but given the choice, you'd rather get a card with the right ports by default so that you don't have to buy an adapter.

Note that however many monitor ports a card has, most cards will only let you use two of the ports at a time. Radeon HD 5000 series cards will let you use more than two, but only two DVI, D-Sub, or HDMI ports (in total); anything beyond two has to be DisplayPort or Mini DisplayPort. Furthermore, if you want to drive a third monitor of a different type from a DisplayPort or Mini DisplayPort connection through an adapter, it has to be a powered adapter, and those are expensive. There are cards with five or six DisplayPort or Mini DisplayPort connections, and those really can support five or six monitors at a time--but still only two DVI, D-Sub, or HDMI monitors without powered adapters.

Finally, let's talk about what's coming in the future. There's nothing earth-shattering coming in the near future (for that, you'll have to wait for a 28 nm die shrink perhaps in mid 2011), so if you're looking to buy a new computer now, there's no need to wait. Nvidia will launch some mid-range and low-end cards reasonably soon based on the same "Fermi" architecture as the GF100 chip that powers their current GeForce GTX 465, 470, and 480 cards. Because it's the same Fermi architecture as GF100, it will probably lose badly to AMD cards on performance per watt and performance per mm^2 of die space for the same reasons that GF100 lost so badly to Cypress. That said, the difference between 100 W and 150 W isn't nearly so important as the difference between 200 W and 300 W, so they're likely to be decent cards. Still, I'd expect Nvidia to be reluctant to start a price war when it costs them a lot more than it costs AMD to make a card with equivalent performance.

AMD has promised to release a refresh of their entire lineup ("Southern Islands") in the second half of 2010. Right now, they dominate the high end of the market, so they're in no hurry to kill sales by encouraging people to wait for their next generation of cards, so there isn't much information on those yet. Rumors are that it will keep the same shaders as their previous "Evergreen" generation (Radeon HD 5000 series), while redesigning the rest of the chip to try to more efficiently feed data to the shaders, which is something that Evergreen struggled with. The GPU chips will probably be made at the same 40 nm node as their current generation of cards, as no foundries will have a 28 nm bulk silicon process available until around the end of the year in an optimistic scenario--and delays could easily push availability of a 28 nm process far into next year.

Memory
This one is easy. You should get 4 GB of DDR3 memory, in a kit with two modules of 2 GB each. The only exception is if you're using an LGA 1366/X58 motherboard, in which case, you should get 6 GB of DDR3 memory, in a kit with three modules of 2 GB each. If you get a Core i7-860 or -870 processor, you should get 1600 MHz memory, as those have a faster memory controller that can handle 1600 MHz memory. Otherwise, either 1333 MHz or 1600 MHz is just fine, as you'll probably run it at 1333 MHz anyway. Core i7-9** processors officially only support 1066 MHz memory. Also be sure to pick memory with a stock voltage of 1.65 V or less; lower is better here.

Storage
There are two factors to consider for storage. First is whether you have enough space for everything you want. Second is how fast your storage is. You can get a good idea of how much space you need by checking your hard drive right now and seeing how much space you've used. Get something with twice as much capacity to allow for future bloat and you'll probably have plenty. If you run out, you can add another hard drive in the future.

The other thing to consider is speed. Here, we're in the middle of the solid state drive revolution. The basic problem with traditional rotating platter hard drives is that they're really slow. Sure, manufacturers will quote things like 100 MB/s read or write speeds, and if they could actually deliver that speed under typical circumstances, that would be plenty. But that's sequential read and write speeds, and most reads and writes aren't sequential.

The problem with hard drives is that in order to read data off the drive, a hard drive platter has to physically spin to the right spot and move the drive head to the right spot before it can even start. On a typical hard drive, this takes about 15 ms. That may sound fast until you consider what happens if you have to read hundreds of files at a time. Or, for that matter, one big file physically fragmented in hundreds of places across a hard drive. In that case, you get to sit there and wait. In 4 KB random read tests, 500 KB/s is pretty good for a hard drive, and only very high end drives such as a VelociRaptor can break 1 MB/s. So much for the 100 MB/s speeds quoted by hard drive manufacturers.

Hard drive manufacturers have tried to improve this somewhat. One approach is to make the hard drive platters spin faster. A VelociRaptor spins at 10000 RPM, as compared to 7200 RPM for most hard drives. A VelociRaptor or Caviar Black has two drive heads per platter rather than one. These do help some, but not enough to really fix the problem--and VelociRaptors are very expensive, too.

The real solution is a good solid state drive (SSD). SSDs use NAND flash memory rather than rotating platters, so there are no moving parts. This means that there are no situations where you have to wait for a part to move to the right position to continue. If you need to get a small amount of information off of an SSD, it takes around 0.1 ms, rather than the 15 ms or so typical for a hard drive. In 4 KB random read tests, good solid state drives typically can do about 30 MB/s. Some very new ones can do a lot more than that, even. That means that you don't constantly have to sit there and wait on your computer.
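To see why access latency matters so much more than quoted sequential speeds, here's a back-of-the-envelope comparison using the figures above (roughly 15 ms per random access for a hard drive, roughly 0.1 ms for a good SSD), ignoring transfer time entirely:

```python
# For small files, access latency dominates; transfer time is ignored here.
# 15 ms and 0.1 ms are the approximate per-access latencies quoted above.

def time_to_read_files(num_files, access_latency_ms):
    """Seconds spent just getting to each file, before reading any data."""
    return num_files * access_latency_ms / 1000.0

hdd = time_to_read_files(500, 15.0)  # 500 small files on a hard drive
ssd = time_to_read_files(500, 0.1)   # same files on a good SSD

print(f"HDD: {hdd:.2f} s, SSD: {ssd:.2f} s")  # HDD: 7.50 s, SSD: 0.05 s
```

Seven and a half seconds of pure waiting versus a twentieth of a second: that's the difference you feel when a game loads a zone or a program starts up.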

Solid state drives have other benefits, too. Not having moving parts means that there aren't moving parts that can break. Traditional hard drives have a drive head mere nanometers away from a platter rotating furiously at 120 revolutions per second. If the head actually runs into the platter, it kills the drive. If that sounds precarious, that's because it is. Take a hard drive and bang it into the corner of a desk while it is operating and you likely kill it. Solid state drives, on the other hand, aren't quite indestructible, but they're pretty close. One company posted a video of playing baseball with one of their SSDs (using the drive as the "ball") without breaking it.

Solid state drives also use far less power than hard drives. Most hard drives will use several watts at idle, and up to about 9 W at load. Most SSDs use 1-2 W at load, and far less than that at idle. Furthermore, SSDs are nearly always idle, as if you want to read or write data with them, they perform the operations very fast, and then go back to being idle. For a desktop computer, this doesn't matter much, but the power savings can be helpful in a laptop.

Unfortunately, while there are some very good solid state drives out there, there are also some very bad ones. All solid state drives are far superior to all hard drives at random reads. Random writes are more of a mixed bag. Hard drives are slow at random writes for about the same reasons that they're slow at random reads, but not as slow, because they can pick a different physical spot in which to write the file. A decent hard drive can go over 1 MB/s in 4 KB random writes, and some can deliver 2 MB/s or more.

Solid state drives vary wildly in random write performance. Some based on a JMicron controller can only do about 0.01 MB/s, which is abysmal. You're better off getting a hard drive than one of those. SSDs based on Intel's second generation controller can do over 50 MB/s in 4 KB random writes. That's firmly into territory where benchmarks can tell the difference between 50 MB/s and 40 MB/s, but humans can't.

What you really need for random write performance is something markedly better than a hard drive. Realistically, 10 MB/s is enough, and faster than that really only shows up in benchmarks.

What solid state drives (and traditional hard drives) commonly do is to have some DRAM cache on board (similar to system memory), which is very fast. When the computer tries to write a file to the SSD, it stores the file in cache, and then actually writes it to the SSD later when it has time. But the write speeds from the controller to the NAND flash still matter: what happens if the drive can't clear out the cache fast enough and the cache fills up? If you've ever been using a browser with a bunch of tabs open and the browser completely locked up for a while, then eventually became responsive again, a full cache is the likely culprit.

Furthermore, what happens if your computer loses power before it can write the files from the cache onto the NAND flash? In that case, the files that you thought you had saved are gone. NAND flash will keep its data just fine if it loses power, but the data in DRAM is gone in a second or so. With an SSD that can actually write to the NAND flash quickly, data gets written almost as soon as it comes in, so this isn't a problem.

If an SSD controller has performance issues, having some DRAM to partially cover those up helps. But it's better to not have those performance issues in the first place, and not need the DRAM.

An SSD based on a good controller is a good SSD, and conversely, one based on a bad controller is a bad SSD. There are enough "bad" controllers out there that I'll just ignore them and list the good ones. Indilinx's "Barefoot" controller is the slowest of the "good" ones. It is good at everything and great at nothing, but most importantly, also mediocre or bad at nothing. Intel's second generation controller is roughly competitive with hard drives on sequential writes, but excellent at everything else. SandForce's and Marvell's new controllers are very good at everything, but new enough that there's still a risk of some major firmware issues. The Intel and Indilinx controllers had such firmware issues in the past, but they've long since been fixed.

Companies don't sell SSD controllers directly to the general public, of course. Rather, they sell the controllers to other companies that use them to assemble a completed solid state drive, and then they sell that to the public. That means a given SSD can be marketed by many different companies under a wide variety of names. It also means that most solid state drives won't tell you what controller the drive uses, which can make it hard to tell what you're getting.

To simplify things, I'll give a little table here to show what companies market SSDs with which controllers. For simplicity, I'll leave out the companies that sell both good SSDs and random junk that has no real hope of ever being good (Patriot and Super-Talent are the worst offenders here), regardless of what firmware changes the drive gets. The left column is the company name, the top row is the SSD controller, and the rest of the grid is what the SSD is named.

Company    Indilinx                    SandForce                        Intel          Marvell
Intel      --                          --                               X25-M, X25-V   --
OCZ        Vertex, Agility, Solid 2    Vertex 2, Agility 2, Vertex LE   --             --
Crucial    M225                        --                               --             C300
Mushkin    Io                          Callisto                         --             --
G.Skill    Falcon, Falcon II           Phoenix                          --             --

Note that a solid state drive should not be filled to more than about 80% full or else performance goes way down. A solid state drive won't let you fill it to more than about 93% full (it tells Windows that the extra capacity simply isn't there), so you can't completely shoot yourself in the foot, but don't buy an 80 GB SSD and plan on putting 80 GB of data on it. SandForce-based SSDs are the exception: they forcibly set aside enough spare area that you can't overfill them. They manage this both by setting aside more spare area and by compressing your data without telling you, so even if you think you've filled up a 60 GB drive, there might only be 30 GB of NAND flash actually in use.

If you want to know how much the speed boost helps in practice, I can tell you that it makes a huge difference. With most programs, I click the icon and it opens, fully loaded, ready to go, in under one second. On my previous computer, the same programs would typically take several seconds to load. On my new computer with an SSD, after I type in my login password to log into Windows after booting the computer, I can start clicking on programs and using the computer within a few seconds, though it's a little slow while still loading programs at startup. If I sit and wait for Windows to load everything that it wants to, it's completely done and idle in about 10 seconds. On my previous computer with a traditional hard drive, the latter figure was about two minutes. In Guild Wars, I can load any zone in a few seconds, so I can map travel without having to sit there and wait for the zone to load. But the programs that benefit the most from the speed of an SSD are browsers, which are constantly reading and writing small files in the background as you do things.

The big drawback to a solid state drive is the price tag. They're currently around $3/GB, while hard drives tend to be closer to $0.10/GB. If you just read my whole write-up and thought, "that's awesome, but I can't afford one," then at least you have a preview of what's coming in the future. Give it a couple of years and prices will come way down.

Also note that even if you need 1 TB of storage space, you don't need a 1 TB SSD. Some people get a small SSD for the operating system and applications, and then a large hard drive for data where the speed doesn't matter much. Personally, I have a 120 GB SSD (OCZ Agility) and no hard drive.

If a solid state drive is too expensive, then at minimum, you should get a 7200 RPM hard drive. Western Digital's Caviar Black hard drives will reduce read and write latencies by about 20% compared to most 7200 RPM hard drives, with a price premium of maybe $10-$20 over other drives of similar capacity. That is a significant performance improvement if you want decent storage speed without SSD prices.

Power supply
Power supplies are more important than you might think. An inadequate power supply can cause all sorts of problems that are an awful pain to diagnose. Even if you're trying to put together a fairly cheap computer, don't go with a cheap junk power supply. It will cause more headaches than it's worth, and in the worst case, could fry everything else in your computer.

There are two major factors in getting a power supply. First, you need a power supply that can deliver enough power. Second, you want one that delivers the power well: the proper voltages, little electrical noise, high energy efficiency even in harsher than real-world conditions, and high quality parts unlikely to break with the passage of time.

Determining how much power you need is the easy part. Look up the Thermal Design Power (TDP) of your processor and video card. The processor's TDP is easy to find. For video cards, it can be somewhat harder, but you can look it up on the web sites of AMD or Nvidia. Wikipedia also lists the TDP of most recent video cards. Add the TDP of your processor and video card, and then add 100 W to compensate for everything else in your system added together, plus some headroom. And then get a power supply that is rated at more power than the number you come up with on the +12 V rail. If you think you'll upgrade to a more powerful processor and/or video card in the future, then you kind of have to guess at how much power you think your future parts will need.

Please note that it is the +12 V rail number that matters here, not the total power. Disreputable power supply manufacturers will sometimes list a total power rating, but then say that the amount available on the +12 V rail is 100 W or 200 W shy of the total. Nearly all power that a computer uses is on the +12 V rail, so that's really all that helps you.

For example, my processor has a TDP of 95 W, and my video card has a TDP of 151 W. I can compute 95+151+100 = 346. Thus, if I get a power supply rated at more than 346 W on the +12 V rail, I'm fine. A good quality 400 W power supply would be adequate for my computer. There's a decent case for having more headroom than that (the power supply I actually use is rated at 525 W total, with 480 W of that on the +12 V rail), but you really shouldn't get a power supply that offers more than double what you need, as energy efficiency drops way off below about 20% load, so you'll end up wasting a ton of energy at idle. Paying extra for a power supply that wastes energy without offering any benefit is a bad idea.
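The sizing rule above is simple enough to write down as a function. The TDP figures here are the ones from my example (95 W processor, 151 W video card); substitute your own parts.

```python
# The +12 V sizing rule described above: CPU TDP + GPU TDP + 100 W for
# everything else in the system, plus some headroom.

def min_12v_rating(cpu_tdp_w, gpu_tdp_w, overhead_w=100):
    """Minimum +12 V rail rating (watts) for a given CPU and GPU."""
    return cpu_tdp_w + gpu_tdp_w + overhead_w

needed = min_12v_rating(95, 151)
print(needed)  # 346 -- so any quality PSU with >346 W on the +12 V rail
```

Remember that it's the +12 V rail rating you compare against this number, not the total power rating on the box.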

Power supply quality is harder to gauge. At minimum, you should get a power supply that is 80 PLUS certified. This guarantees that it meets some minimum standards as measured by an independent organization. A power supply that is 80 PLUS certified could still be mediocre, but the certification guarantees you that it won't be awful. The 80 PLUS organization has higher levels of bronze, silver, and gold certification, which denote higher energy efficiency, but not necessarily better quality in other respects.

You can find some good power supply reviews on Hardware Secrets or Jonny Guru. It's perhaps easier to say, just get a reputable brand such as Corsair, Antec, Enermax, or Seasonic. A power supply from a good brand that is rated at 400 W will actually be able to deliver 400 W under real world conditions. A power supply from a bad brand that is rated at 400 W might well fry if you try to draw 300 W from it under real world conditions.

Delivering the right voltage more precisely and with less electrical noise to various components will lengthen the lifetime of everything else in your computer. It can also avoid all sorts of weird problems that are an awful pain to diagnose. For most people, a good quality $40 power supply rated at around 400 W is the right thing to get. Some powerful gaming computers may need a 500 W or 600 W power supply, but you really don't need more than that unless you're going nuts with an SLI/CrossFireX setup or an extreme overclock. Don't get suckered in by a cheap junk power supply rated at 800 W with loose voltage regulation that will cause system instability.

Optical drive
You should get a DVD drive that can both read and write both DVDs and CDs. If you're replacing an old computer, you can likely grab the DVD drive out of the old computer and use it. There's no need for a Blu-Ray drive just yet unless you want to watch high definition Blu-Ray movies on your computer.

Case
There are three things to consider in the quality of a computer case. First, are you happy with how it looks? Some people are pickier than others in this regard. Second, does it have room for everything you need? And third, can it keep your computer adequately cool?

Aesthetics are a matter of opinion, so I can't really offer much advice on the first point other than to say, if you think a case looks hideous, then get a different one instead.

On the second point, I'd recommend getting at least a mid-tower case. Make sure that the case can accommodate an ATX motherboard, which is the most common standard. People going nuts with super high end setups that will put out tons of heat may want to go for a full tower case that can accommodate more fans, but a mid-tower is plenty big enough for most people.

Finally, there is the question of cooling. Loosely, you want a lot of big case fans. The usual setup is one or two fans blowing air in the front, one blowing air in the side right at the video card, and one or two blowing air out the top and/or back of the case. A case that mounts the power supply in the bottom of the case can have a fan blow air out the top, which is nice because hot air rises. The power supply will also have a fan that blows air out the back of the power supply. Some video cards will blow hot air out an expansion slot, which can be good for getting very hot air out of the case, but isn't a major factor in airflow.

Note also that more case fans means that you can run them slower, and still get adequate cooling. This results in a quieter computer.

Some cases ship with several fans, while others ship with several holes where you can add your own fans. Buying good fans can add up, so paying an extra $10 to get an otherwise identical case that comes with three extra fans can be an excellent deal.

Power protection
You should get a surge protector. This both protects your computer from getting fried if an unwanted surge of electricity comes through the outlet (e.g., if a nearby transformer is struck by lightning), and gives you more electrical outlets so that you can plug in all of the various components of your computer without having to scatter them among three different wall outlets. It's better to fry a $20 surge protector than a $1000 computer.

At the higher end, you can get an uninterruptible power supply (UPS), which is basically a battery backup for your computer. If you lose power for a few seconds, a UPS will kick in and keep the computer running without a hitch until power is restored. If you lose power for an extended period of time, the UPS can only keep the computer running until the battery runs down, but that should be plenty of time to save your work and turn off the computer properly.

If you do get a UPS, make sure that you get one that can deliver as much power as your computer will draw. If your computer is drawing 300 W when you lose power, and you have a UPS that can only deliver 200 W, bad things happen. You can estimate this by taking the figure needed above for a power supply, and then multiplying by 1.25 to compensate for power supply inefficiencies, and adding 40 W for each monitor you plug into the UPS.
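Here is the UPS sizing estimate above as a quick calculation, reusing the 346 W figure from the power supply example with two monitors; the numbers are illustrative.

```python
# UPS sizing rule described above: take the wattage figure from the power
# supply section, multiply by 1.25 for power supply inefficiency, and add
# 40 W for each monitor plugged into the UPS.

def min_ups_watts(psu_load_w, num_monitors):
    """Minimum UPS output rating (watts) for a given load and monitor count."""
    return psu_load_w * 1.25 + 40 * num_monitors

print(min_ups_watts(346, 2))  # 512.5 -- so look for a ~550 W or larger UPS
```

As with the power supply itself, err on the side of a little extra headroom rather than cutting it close.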

Note that an uninterruptible power supply also functions as a surge protector, so you don't need to buy a surge protector separately. A low end UPS is a "standby" model, which basically doesn't do anything until you lose power, at which point it will kick in. A higher end UPS can be a "line-interactive" model, that will likely also be able to fix the problem without draining the battery if the current from the wall is the wrong voltage (say, in a brownout) or various other problems. At the high end is an "online" model of UPS, which is great for really mission critical things, but massively overkill for a consumer PC and also way too expensive.

Processor heatsink/fan
Most processors ship with a cheap heatsink and fan. You can get far superior cooling performance if you get an aftermarket heatsink and fan from another source. This will both be quieter and keep the processor cooler. For a processor that will run at the default speed, the stock heatsink and fan is good enough. If you're going to overclock your processor, you really should get something better. If you want to buy your own heatsink and fan, make sure it is compatible with the right processor socket, and make sure it will fit inside your case.

There are also some liquid cooling setups out there. The high end ones that cost hundreds of dollars do cool a processor better than air cooling. The cheaper liquid cooling setups aren't any better than air cooling for the same price, and are often worse for cooling performance. The big advantage of liquid cooling is that it is quiet, as it doesn't need a fan blowing on the heatsink. The big disadvantage is that it is expensive.

Sound card
Nearly all motherboards come with onboard sound, and onboard sound is good enough. Don't waste money on a discrete sound card unless you're really an audiophile.

Network card
Nearly all motherboards come with an onboard ethernet card, and that's usually good enough. Bigfoot is marketing their Killer Xeno Pro as a gaming network card that offers improved performance in games by offloading network traffic from the processor. There are certain problems where bad operating system or game code causes unnecessary lag, and a Killer Xeno Pro can indeed fix that lag. There are some cases of particularly bad network code that causes hitching or other frame rate drops, and a Killer Xeno Pro can fix that, too. But there are a lot of sources of lag that a Killer Xeno Pro can't help with, and for those, it won't be noticeably better than an onboard network card.

I've posted a review of how the Killer Xeno Pro does in Champions Online here. Sorry, I haven't done much testing with Guild Wars, as the game doesn't push my processor or video card hard enough for there to be any real chance of improvements.

Bigfoot has now released a Killer 2100 as their next generation of network cards.

Operating system
You should get Windows 7 Home Premium 64-bit, with only a handful of exceptions. If you need a feature that only the Professional edition offers (if you check the list and don't see something that you know you need, then you don't need it), then get Windows 7 Professional instead. If you need to run some very old 16-bit programs (loosely, anything released before the mid-1990s), then get the 32-bit version instead, as they won't run on a 64-bit operating system. If you have some old legacy hardware for which there is no 64-bit driver, then get the 32-bit version instead.

The big advantages of a 64-bit operating system are that it can run 64-bit programs (of which there aren't many yet, and those that exist probably also have a 32-bit version), and that it can address more than 4 GB of memory. A 32-bit operating system can only address 4 GB of memory. That's not just 4 GB of system memory. That's 4 GB for all memory added together, including system memory, video memory, and some other things. With a 32-bit operating system, you'll often be restricted to under 3 GB of system memory.
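The 4 GB ceiling falls straight out of the arithmetic: a 32-bit address can take only 2^32 distinct values, and every byte of system memory, video memory, and other device mappings needs its own address out of that same pool. The video card and reserved-range sizes below are illustrative numbers, not from any particular system.

```python
# A 32-bit address is one of 2**32 possible values, so at most 2**32 bytes
# of memory of all kinds can be addressed at once.

total_addressable = 2 ** 32
print(total_addressable // 2 ** 30)  # 4 (GiB)

# Illustrative: with a 1 GB video card and roughly 256 MB of other reserved
# address ranges, the system RAM the OS can actually use is:
usable_ram_gib = (2 ** 32 - 1 * 2 ** 30 - 256 * 2 ** 20) / 2 ** 30
print(usable_ram_gib)  # 2.75 -- consistent with "under 3 GB" above
```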

Speakers
You should get speakers. You might just keep what you already have from an old computer.

Keyboard
Yes, you need a keyboard. You might just keep what you already have from an old computer, as keyboards today aren't meaningfully better than keyboards were ten years ago. If you're keeping an old keyboard, check to see if it uses a PS/2 connection or USB. If it's a PS/2 keyboard, you'll need a motherboard with a PS/2 port to handle it.

There are wired keyboards and wireless keyboards. Wired is better. Wired is also cheaper. Wired can't have problems with the signal not getting through the way wireless can, unless you disconnect or cut the cord. Wired can also draw power through the cord, so there's no worry about a battery going dead.

Mouse
And you need a mouse, too. Again, you could just keep and use an old mouse. If it's a ball mouse, I'd recommend upgrading to a laser mouse. If you already have an optical or laser mouse that you're happy with, then keep it. Again, if you're keeping an old mouse, check to see whether it needs a PS/2 port or a USB port, and if it's PS/2, make sure your motherboard has the necessary port.

As with keyboards, there are both wired and wireless mice, and wired is better, for about the same reasons. Mouse companies are trying to come up with all sorts of reasons to justify buying a more expensive mouse, but most of them are stupid gimmicks. 1600 DPI is enough that you can readily point to the nearest pixel, so higher resolutions aren't helpful. Some people like extra buttons that you can accidentally click when you didn't mean to. There are mice that work on glass, or on mirrored surfaces. (Get a mousepad if you need that!) A cheap wired laser mouse is good enough.

Monitor
And of course you need a monitor. As with other peripherals, you could keep an old monitor if you have one. Note that LCD monitors tend to give a much better picture quality than older CRT monitors.

The physical size of the monitor and the resolution of the monitor are two separate things. Higher resolutions are what let you display more on the screen at a time. Higher resolutions also mean video cards have to do more work to render each frame in a game.

The aspect ratio also matters considerably. While monitors traditionally used something around 4:3, they're mostly moving to a "widescreen" aspect ratio of 16:10 or 16:9. Personally, I hate this, and call them "shortscreen" monitors, as that's a more descriptive moniker. I seem to be in the distinct minority on this.

Advocates of widescreen monitors say that they're better for first person shooters, as you can get some semblance of peripheral vision. I say that this only serves to highlight how artificially restricted a first person viewpoint is. Widescreen advocates also say that they're better for watching movies. I'll concede that point, and then point out that it's a computer monitor, not a television. 16:9 makes sense for a television, but the shape of a computer monitor should be something that makes sense for programs that you'll run on a computer.

Widescreen advocates also claim that the human eye naturally moves horizontally better than vertically. But again, this only means that you get a monitor designed to display things in places where there is nothing to see. Word processing, e-mail, and most web browsing are limited by the height of the monitor, not the width. Tracking your eyes back across 20 inches of screen to find the start of the next line of text is really awkward, which is why many web pages cap the width of text at several hundred pixels, no matter how wide your monitor is. A shorter, wider monitor gives you less useful space in such programs, not more. And it is shorter: a 19" 1280x1024 monitor is taller than a 24" 1920x1080 monitor, in spite of 19" being a lot less than 24".
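That "it is shorter" claim is easy to verify with the Pythagorean theorem: the quoted size of a monitor is its diagonal, so its height is the diagonal divided by sqrt(1 + (width/height)^2). A minimal sketch in Python, using the two monitors from the paragraph above:

```python
import math

def visible_height(diagonal_inches, aspect_w, aspect_h):
    """Height of a screen given its diagonal and aspect ratio.

    From the Pythagorean theorem: if height = h and width = h * (w/h ratio),
    then diagonal^2 = h^2 * (1 + ratio^2).
    """
    return diagonal_inches / math.sqrt(1 + (aspect_w / aspect_h) ** 2)

h_19 = visible_height(19, 5, 4)    # 19" 1280x1024 is a 5:4 panel
h_24 = visible_height(24, 16, 9)   # 24" 1920x1080 is a 16:9 panel

print(f'19" 5:4 monitor height:  {h_19:.2f} in')   # about 11.87 in
print(f'24" 16:9 monitor height: {h_24:.2f} in')   # about 11.77 in
```

The 19" 5:4 panel comes out around 11.87 inches tall against roughly 11.77 inches for the 24" 16:9 panel, so the smaller monitor really is slightly taller.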

For games, it varies considerably by game. In overhead view games, a display shape closer to square is better, to maximize the radius around the center that you can see at a time. I've played Guild Wars both at 1680x1050 and 1280x1024, and didn't like the former any better than the latter. People say that shorter and wider is better in first person shooters, but I don't play them, so I wouldn't know.

Widescreen monitor advocates say that a wider screen lets you run two programs next to each other, while only somewhat scrunched horizontally, rather than really squished as they would be on a monitor with a 4:3 or 5:4 aspect ratio. But better than only somewhat compressed is not compressed at all, and that's what you can get by running two programs on two different monitors, if you want to go that route.

Speaking of which, I'm all in favor of the multiple monitors approach. I bought a second monitor last summer, and within a week, I had to wonder how I had gotten by with only one monitor all these years. I put one monitor centered in front of me, where it was when I only had one monitor. The second monitor goes off to the left. That makes it easy to glance away from a game to check a wiki or update a spreadsheet. Even if you already have one monitor, you might want to think about getting a second, as it's tremendously useful if you use your computer much.

The picture quality of a monitor also matters. Unfortunately, it's hard to gauge online: any photo of a monitor will mostly show you the quality of the monitor you're viewing it on, not the one in the photo. This is the big advantage of going to a brick and mortar store to buy a monitor: you can see the picture quality for yourself before you buy it.