Tuesday, August 17, 2010
Corsair to release the H70
Water cooling is typically seen as an enthusiast’s pursuit, requiring pumps, reservoirs, tubing, know-how, and a cautious mind so as not to spill water all over your precious components. The benefits of water cooling are obvious to many – having your system run cooler, better stability at higher overclocks, and aesthetics. Further down the water cooling ladder, manufacturers like CoolerMaster, Corsair and CoolIT have over the years come to market with all-in-one solutions, requiring little knowledge to reap water cooling’s benefits. These early models were readily slated in reviews for being more expensive than high-end air cooling, yet performing worse. It wasn’t until the Corsair H50 and H50-1 models came along that these all-in-one water coolers were taken seriously, because here was a product that performed as well as a high-end air cooler, was quieter in certain situations, could easily fit in many cases, and all for only a small premium. So now Corsair is due to release the next model in their line – the Corsair H70.
ASUS Rampage III Formula to debut ‘soon’
ASUS’ Republic of Gamers range is soon to have a new member in the shape of the ASUS Rampage III Formula. Using the X58 chipset, this board is designed for looks, uncompromised performance, overclocking, and the best possible online gaming experience with the new SupremeFX X-Fi 2 audio solution. However, based on our recent high-end X58 roundup, the X58 market is stagnating: between the budget boards and the high end, minor features that few people end up using seem destined to command a huge price markup. ASUS hopes to alleviate such issues with the release of the Rampage III Formula by finding a happy medium.
ASUS VG236H 23-inch 3D Display Review: 120Hz is the Future
Introduction
120Hz panels are definitely still market newcomers - in fact, look no further than Newegg, where there still isn’t a 120Hz category, much less a refresh rate field for drilling down products. The necessity for 120Hz panels arose entirely out of the ongoing 3D obsession across the entire consumer electronics segment, something that remains a difficult sell for many gamers. On a technical level, the necessity for 120Hz arises from the need to drive two discrete 60Hz images - one 60Hz image for each eye. In its current incarnation, consumer 3D technology relies primarily on active shutter glasses - parallax barrier 3D displays are still too expensive, and I’ve yet to see passive polarization methods used outside the movie theatre. But you probably already know most of the 3D story.
Though the 120Hz refresh frequency does make games playable in 3D, there’s another important benefit of using a faster refresh rate - everything looks smoother, and you can now drive up to 120 FPS without tearing. The ASUS VG236H was my first exposure to 120Hz refresh displays that aren’t CRTs, and the difference is about as subtle as a dump truck driving through your living room. I spent the first half hour seriously just dragging windows back and forth across the desktop - from a 120Hz display to a 60Hz, stunned at how smooth and different 120Hz was. Yeah, it’s that different.
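To put the refresh rate difference in concrete terms, here's a quick back-of-the-envelope sketch (plain Python, using only the nominal refresh rates discussed above; real panels add input lag and response time on top of this ideal timing):

```python
# Back-of-the-envelope refresh-rate arithmetic for 60Hz vs 120Hz panels.

def frame_time_ms(refresh_hz: float) -> float:
    """Time between refreshes in milliseconds."""
    return 1000.0 / refresh_hz

for hz in (60, 120):
    print(f"{hz:>3} Hz panel: {frame_time_ms(hz):.2f} ms per refresh")

# Active shutter 3D alternates left/right frames, so each eye sees half
# the panel's refresh rate. A 120Hz panel therefore preserves a 60Hz image
# per eye; a 60Hz panel would drop each eye to a flickery 30Hz.
panel_hz = 120
print(f"Shutter-glasses 3D on a {panel_hz}Hz panel: {panel_hz / 2:.0f} Hz per eye")
```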
If you’re the kind of person that cares about squeezing every last FPS out of your box - regardless of how you feel about 3D - don’t even bother reading the rest of this review, just run, don’t walk, to the store and get this 120Hz display. I’m serious.
ASUS’ VG236H isn’t perfect; like any product, there are a few caveats. That aside, honestly, the unparalleled level of smoothness on a 120Hz display has made me hyper attuned to just how flickery 60Hz looks on all the other LCDs I’ve got.
Oh and my initial skepticism about 3D? I’m still shocked about it, but I've completely changed my mind.
Apple Mac mini Review (Mid 2010)
Six years ago I tried using a Mac exclusively for 30 days. The OS was 10.3, the hardware was a PowerMac G5 and Apple was still the quirky company with a 2% market share.
Five years ago I reviewed my third Mac, the very first Mac mini. In this pre-hackintosh world, Apple was enough of a curiosity that a $499 Mac made a lot of sense. It wasn’t fast, but with a 1.25GHz PowerPC G4 it was quick enough for most of what you needed to do with a Mac back then. Like many Macs, all it really needed was a memory upgrade.
Interest in Apple has obviously gone up since then. Apple’s resurgence coincided with the shift from desktops to notebooks, and thus the preferred entry platforms for many into the Mac world were the PowerBook G4, MacBook and MacBook Pro.
The original Mac mini
The mini continued to receive updates, but its role in Apple’s lineup shifted. The need for an introductory Mac so that users might test drive OS X declined. The mini became a plain old desktop Mac for those who didn’t want an integrated display. For others it was a nice looking HTPC; an Apple nettop before the term existed.
The Mac mini arrived with a bang but was quickly relegated to an almost niche product. It wasn’t Apple TV-bad, but it was definitely not in Apple’s top 3. The fact that Apple didn’t overhaul the chassis in nearly five years says a lot about the mini’s importance to Apple. Nearly all other consumer-targeted Apple hardware gets visual updates more regularly than the Mac mini.
All signs pointed to the mini going the way of the dodo. A couple years ago we regularly saw rumors of Apple killing off the mini entirely. The need for an ultra cheap introduction to OS X had passed. Apple’s customers either wanted a notebook or an iPhone, and if they wanted a netbook Apple eventually addressed that market with the iPad.
While the role of the Mac mini has changed over the years, so has the hardware. Originally the mini was 2” tall and measured 6.5” on each side. Small for its time, but bulky compared to what companies like Zotac have been able to do with off-the-shelf components since then.
In 2005 very few companies were concerned about power consumption; today it’s arguably more important than overall performance. Intel alone has an internal policy that doesn’t allow the introduction of any new feature into a design unless it increases performance by at least 2% for every 1% increase in power consumption.
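That 2%-per-1% rule is easy to express as a quick check; here's a minimal sketch, where the candidate feature numbers are invented purely for illustration (only the 2:1 ratio comes from the policy described above):

```python
# Hypothetical check of the "at least 2% performance per 1% power" rule.
# The candidate features and their numbers below are made up for illustration.

def passes_perf_per_watt_rule(perf_gain_pct: float, power_increase_pct: float,
                              required_ratio: float = 2.0) -> bool:
    """A feature qualifies if it delivers >= required_ratio % performance
    for every 1% of added power."""
    if power_increase_pct == 0:
        return perf_gain_pct >= 0  # free performance always passes
    return perf_gain_pct / power_increase_pct >= required_ratio

candidates = {
    "bigger L2 cache":      (6.0, 2.0),   # +6% perf, +2% power -> 3.0x, passes
    "wider execution unit": (3.0, 4.0),   # +3% perf, +4% power -> 0.75x, fails
}
for name, (perf, power) in candidates.items():
    verdict = "passes" if passes_perf_per_watt_rule(perf, power) else "fails"
    print(f"{name}: {verdict}")
```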
What we finally got, after years of waiting, was a redesigned Mac mini:
The 2010 Mac mini looks more like an Apple TV than a Mac. At 1.4” high the new mini doesn’t sound much thinner than the old one until you realize that most of the visible thickness (excluding the pedestal stand) is even smaller than that.
Yes I know the irony of using Blu-ray discs to show the thickness of the DVD-only Mac mini
The Apple TV comparison continues when you look at the ports along the back. Apple’s recent infatuation with mini DisplayPort continues, but there’s also an HDMI port on the back of the new mini. Apple thankfully provides a single-link HDMI to DVI adapter in the box for those of you who aren’t hooking the Mac mini up to an HDTV. The HDMI output supports a max resolution of 1920 x 1200, while the miniDP can drive a 2560 x 1600 display with an active miniDP to DVI adapter.
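The 1920 x 1200 ceiling on the HDMI output lines up with single-link DVI/HDMI signaling limits of this era; here's a rough sketch of the pixel clock arithmetic (the ~12% blanking overhead is an assumption approximating reduced-blanking timings, so treat the numbers as ballpark):

```python
# Rough pixel-clock estimate for the two resolutions mentioned above.
# Single-link DVI/HDMI of this era tops out around a 165 MHz pixel clock.

SINGLE_LINK_LIMIT_MHZ = 165.0
BLANKING_OVERHEAD = 1.12  # assumption: ~12% extra for blanking intervals

def pixel_clock_mhz(width: int, height: int, refresh_hz: int = 60) -> float:
    return width * height * refresh_hz * BLANKING_OVERHEAD / 1e6

for w, h in ((1920, 1200), (2560, 1600)):
    clk = pixel_clock_mhz(w, h)
    fits = "fits single-link" if clk <= SINGLE_LINK_LIMIT_MHZ else "needs dual-link or DisplayPort"
    print(f"{w}x{h}@60Hz ~= {clk:.0f} MHz pixel clock -> {fits}")
```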
But it’s clear that the HDTV pairing is something Apple thought of. The mini is no longer just a way to get a taste of OS X; it’s a full-fledged HTPC, or Apple’s take on the ION nettop.
Internally the Mac mini is pretty much a 13-inch MacBook Pro. You get a 45nm 2.40GHz Core 2 Duo with a 3MB L2 cache (technically it’s the Core 2 Duo P8600). The chipset is NVIDIA’s GeForce 320M, identical to what’s used in the 13-inch MacBook Pro. There’s no dedicated frame buffer. The GPU carves 256MB of main memory out for its own use, which is a problem because the base configuration only ships with 2GB of memory.
The hardware may sound dated since it isn’t using Intel’s Core i3/i5 processors, but we’re limited by space. Apple is unwilling to ship any of its Macs with just Intel integrated graphics. Apple wants a huge installed base of Macs with OpenCL capable GPUs for some reason. And since NVIDIA isn’t allowed to build chipsets for the Core i-whatever processors, Apple would have to go to a three chip solution in order to have a Core i-whatever, Intel’s associated chipset and an AMD/NVIDIA GPU. In size constrained products (e.g. 13-inch MacBook Pro or the new Mac mini), Apple prefers to use a Core 2 generation CPU and a single chip NVIDIA IGP to fit the form factor and GPU requirements.
Quad Xeon 7500, the Best Virtualized Datacenter Building Block?
21st Century Server Choices
Lots of people base their server form factor choice on what they are used to buying. Critical database applications equal a high-end server; less critical applications get a midrange server. High-end machines used to find a home at larger companies, while cheaper servers were typically attractive to SMEs. I am oversimplifying, but those are the clichés that pop up when you speak of server choices.
Dividing the market into who should or should not buy high-end servers is so... 20th century. Server buying decisions today are a lot more flexible and exciting for those who keep an open mind. In the world of virtualization your servers are just resource pools of networking, storage and processing. Do you buy ten cheap 1U servers, four higher performance 2U, one “low cable count” blade chassis, or two high-end servers to satisfy the needs of your services?
A highly available service can be set up with cheap and simple server nodes, as Google and many others show us every day. On the flip side of the coin, you might be able to consolidate all your services on just a few high-end machines, reducing management costs while at the same time taking advantage of the advanced RAS features these kinds of machines offer. It takes a detailed study to determine which strategy is the best one for your particular situation, so we are not saying that one strategy is better than all the others. The point is that the choice between cheap clustered nodes and only a few high-end machines cannot be answered by simply looking at the size of the company you are working for or the "mission critical level" of your service. There are corner cases where the choice is clear, but that is not the case for the majority of virtualized datacenters.
So is buying high-end servers, as opposed to buying two or three times as many 2-socket systems, an interesting strategy for your virtualized cluster if you are not willing to pay a premium for RAS features? Until very recently, the answer was simple: no. High-end quad socket systems were easily three times as expensive or more, yet never offered twice the performance of dual socket systems. There are many reasons for that. If we focus on Intel, the MP series were always based on mature rather than cutting-edge technology. Also, quad socket systems have more cache coherency overhead, and the engineering choices favor reliability and expandability over performance. That results in slower but larger memory subsystems, and sometimes lower clock speeds too. The result was that the performance advantage of the quad socket system was in many cases minimal.
At the end of 2006, the dual Xeon X5300 systems were more than a match for the Xeon X7200 quad systems. And recently, dual Xeon 5500 servers made the massive Xeon 7400 servers look slow. The most important reason why these high-end systems were still bought was their superior RAS features. Other reasons include the fact that some decision makers never really bothered to read the benchmarks carefully and simply assumed that a quad socket system would automatically be faster, since that is what the OEM account manager told them. You cannot even blame them: a modern CIO has to bury his head in financial documents, must solve HR problems, and is constantly trying to explain to upper management why the complex IT systems are not aligned with the business goals. Getting the CIO down from the “management penthouse” to the “cave down under”, also called the datacenter, is no easy task. But I digress.
Virtualization can shatter the old boundaries between midrange and high-end servers. High-end machines can be interesting for the rest of us, the people who do not normally consider these expensive systems. The condition is that the high-end systems can consolidate more services than the dual socket systems, so performance must be much better. How much better? If we just focus on capital investment, we get the figures below.
| Type | Server | CPUs | Memory | Approx. Price |
|---|---|---|---|---|
| Midrange | Dell R710 | 2x X5670 | 18 x 4GB = 72GB | $9000 |
| Midrange | Dell R710 | 2x X5670 | 16 x 8GB = 128GB | $13000 |
| High-end | Dell R910 | 4x X7550 | 64 x 4GB = 256GB | $32000 |
So these numbers seem to suggest that we need 2.5 to 3 times better performance. In reality, that does not need to be the case. The TCO of two high-end servers is most likely a bit better than that of four midrange servers. The individual components like the PSU, fans, and motherboard should be more reliable and thus result in less downtime and less time spent replacing those components. Even if that is not the case, it is statistically more likely that a component fails in a cluster with more servers, and thus more components. Fewer cables and fewer hypervisor updates should also help. Of course, the time spent managing the VMs is probably more or less the same.
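As a quick sanity check on that ballpark figure, here is the price arithmetic behind it, using nothing beyond the approximate list prices from the table above:

```python
# Capital-cost comparison based on the configurations in the table above.

midrange_72gb  = 9000    # Dell R710, 2x X5670, 72GB
midrange_128gb = 13000   # Dell R710, 2x X5670, 128GB
highend_256gb  = 32000   # Dell R910, 4x X7550, 256GB

# Raw price ratios: roughly how much more performance the single high-end
# box must deliver to justify its price over one midrange server.
print(f"R910 vs R710 (128GB): {highend_256gb / midrange_128gb:.1f}x the price")
print(f"R910 vs R710 (72GB):  {highend_256gb / midrange_72gb:.1f}x the price")

# Memory-matched view: two 128GB R710s offer the same 256GB as one R910.
two_r710 = 2 * midrange_128gb
print(f"Two 128GB R710s: ${two_r710} vs one R910: ${highend_256gb} "
      f"({highend_256gb / two_r710:.2f}x the outlay for the same total RAM)")
```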
While a full TCO calculation is not the goal of this article, it is pretty clear to us that a high-end system should outperform the midrange dual socket systems by at least a factor of two to be an economical choice in a virtualization cluster where hardware RAS capabilities are not the only priority. There is a strong trend towards guaranteeing the availability of the (virtual) machine with easy to configure and relatively cheap software techniques such as VMware’s HA and fault tolerance. The availability of your service is then guaranteed by using application level high availability such as Microsoft’s clustering services, load balanced web servers, Oracle fail-over, and other similar (but still affordable) techniques.
The ultimate goal is not keeping individual hardware running but keeping your services running. Of course hardware that fails too frequently will place a lot of stress on the rest of your cluster, so that is another reason to consider this high-end hardware... if it delivers price/performance wise. Let us take a closer look at the hardware.
Dropcam Echo : Home Security Meets the Cloud
The last couple of years have seen the introduction of many security cameras aimed at the consumer market. Security and surveillance cameras used to be restricted to professional scenarios and were primarily analog in nature. However, with advances in networking and the appearance of cheaper hardware, there is a shift towards the IP variety. The rise of IP cameras has also brought with it units targeted at the consumer market.
In the IP camera space, large companies such as Bosch, Axis, Sony and Panasonic focus on hardware for professional security surveillance, while peripheral companies like Logitech and D-Link offer solutions that make use of a local computer. D-Link, Linksys and Logitech have sub-$300 IP cameras meant for small offices and homes. They have recently been joined by companies like Avaak and Dropcam, startups focused on using IP cameras for casual monitoring, which bring more ease of use to the table.
As we are covering the Dropcam Echo today, let us take a brief look at the company.
Started in January 2009 by two ex-Xobni engineers, Greg Duffy and Aamir Virani, Dropcam has a team of five based out of San Francisco, California. The story behind the founding of the company makes for interesting reading, and it clearly illustrates why consumer IP cameras have not gone mainstream yet.
Greg's dad, based in Texas, apparently bought an IP camera from a local electronics shop and spent four hours trying to set it up. After having little luck, he called up Greg and they worked on it for another few hours. It took a lot of router and network tweaking, but the camera finally came online. A couple of days later, Greg's dad called again and said now he wanted to watch the video while he was at work. The problem with most consumer IP cameras is that they concentrate on features which are important for the industrial sector, where setup is performed by trained professionals. The average consumer prefers a plug and play solution, and the expectations are quite different too. Keeping these in mind, Greg and Aamir founded Dropcam in early 2009. A seed round was led by Mitch Kapor (founder of Lotus), David Cowan (founder of Verisign, venture capitalist), and Aydin Senkut (ex-Googler).
Now that we know about the company, let us proceed to look closer at their second product, the Dropcam Echo.
Micron Announces RealSSD P300, SLC SSD for Enterprise
Buying an SSD for your notebook or desktop is nice. You get more consistent performance. Applications launch extremely fast. And if you choose the right SSD, you really curb the painful slowdown of your PC over time. I’ve argued that an SSD is the single best upgrade you can do for your computer, and I still believe that to be the case. However, at the end of the day, it’s a luxury item. It’s like saying that buying a Ferrari will help you accelerate quicker. That may be true, but it’s not necessary.
In the enterprise world, however, SSDs are even more important. Our own Johan de Gelas had his first experience with an SSD in one of his enterprise workloads a year ago. His OLTP test looks at the performance difference between 15K RPM SAS drives and SSDs in a database server; Johan experimented with both drive types for the data and log drives.
Using a single SSD (Intel’s X25-E) for a data drive and a single SSD for a log drive is faster than running eight 15,000RPM SAS drives in RAID 10 plus another two in RAID 0 as a logging drive.
Not only is performance higher, but total power consumption is much lower. Under full load eight SAS drives use 153W, compared to 2 - 4W for a single Intel X25-E. There are also reliability benefits. While mechanical storage requires redundancy in case of a failed disk, SSDs don’t. As long as you’ve properly matched your controller, NAND and ultimately your capacity to your workload, an SSD should fail predictably.
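A hedged illustration of what those power numbers mean over a year of 24/7 operation; the wattages come from the comparison above, while the electricity price is an assumed placeholder:

```python
# Rough yearly energy comparison for the drive configurations described above.
# Assumption: drives run 24/7 at the quoted full-load figures and electricity
# costs $0.10/kWh -- both placeholders, adjust for your own datacenter.

HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10  # assumed

def yearly_cost(watts: float) -> float:
    kwh = watts * HOURS_PER_YEAR / 1000.0
    return kwh * PRICE_PER_KWH

sas_array_w = 153.0   # eight 15K RPM SAS drives under load
ssd_w = 4.0           # single Intel X25-E, upper end of the 2-4W range

print(f"8x SAS: ~${yearly_cost(sas_array_w):.0f}/yr, "
      f"1x X25-E: ~${yearly_cost(ssd_w):.0f}/yr "
      f"(before cooling overhead, which scales with the same wattage)")
```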
The overwhelming number of poorly designed SSDs on the market today is one reason most enterprise customers are unwilling to consider SSDs. The high margins available in the enterprise market are the main reason SSD makers are so eager to conquer it.
Micron’s Attempt
Just six months ago we were first introduced to Crucial’s RealSSD C300. Not only was it the first SSD we tested with a native 6Gbps SATA interface, but it was also one of the first to truly outperform Intel across the board. A few missteps later and we found the C300 to be a good contender, but our second choice behind SandForce based drives like the Corsair Force or OCZ Vertex 2.
Earlier this week Micron, Crucial’s parent company, called me up to talk about a new SSD. This drive would only ship under the Micron name as it’s aimed squarely at the enterprise market. It’s the Micron RealSSD P300.
The biggest difference between the P300 and the C300 is that the former uses SLC (Single Level Cell) NAND Flash instead of MLC NAND. As you may remember from my earlier SSD articles, SLC and MLC NAND are nearly identical - they just store different amounts of data per NAND cell (1 vs. 2).
SLC (left) vs. MLC (right) NAND
The benefits of SLC are higher performance and a longer lifespan. The downside is cost. SLC NAND is at least 2x the price of MLC NAND. You take up the same die area as MLC but you get half the storage. It’s also produced in lower quantities so you get at least twice the cost.
| | SLC NAND flash | MLC NAND flash |
|---|---|---|
| Random Read | 25 µs | 50 µs |
| Erase | 2ms per block | 2ms per block |
| Programming | 250 µs | 900 µs |
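Those program-time figures translate fairly directly into per-die write throughput; here's a minimal sketch, where the 4KB page size is an assumption for illustration (actual page sizes vary by NAND generation):

```python
# Per-die program throughput implied by the timings in the table above.
# Assumption: 4KB pages and no interleaving -- real drives hide much of the
# MLC penalty by writing to many dies in parallel.

PAGE_KB = 4  # assumed page size
program_time_us = {"SLC": 250, "MLC": 900}  # page program times from the table

for nand, t_us in program_time_us.items():
    pages_per_sec = 1_000_000 / t_us
    mb_per_sec = pages_per_sec * PAGE_KB / 1024
    print(f"{nand}: ~{pages_per_sec:.0f} pages/s -> ~{mb_per_sec:.1f} MB/s per die")
```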
Micron wouldn’t share pricing but it expects drives to be priced under $10/GB. That’s actually cheaper than Intel’s X25-E, despite being 2 - 3x more than what we pay for consumer MLC drives. Even if we’re talking $9/GB that’s a bargain for enterprise customers if you can replace a whole stack of 15K RPM HDDs with just one or two of these.
The controller in the P300 is nearly identical to what was in the C300. The main differences are twofold. First, the P300’s controller supports ECC/CRC from the controller down into the NAND. Micron was unable to go into any more specifics on what was protected via ECC vs. CRC. Second, in order to deal with the faster write speed of SLC NAND, the P300’s internal buffers and pathways operate at a quicker rate. Think of the P300’s controller as a slightly evolved version of what we have in the C300, with ECC/CRC and SLC NAND support.
The C300
The rest of the controller specs are identical. We still have the massive 256MB external DRAM and unchanged cache size on-die. The Marvell controller still supports 6Gbps SATA although the P300 doesn’t have SAS support.
| Micron P300 Specifications | 50GB | 100GB | 200GB |
|---|---|---|---|
| Formatted Capacity | 46.5GB | 93.1GB | 186.3GB |
| NAND Capacity | 64GB SLC | 128GB SLC | 256GB SLC |
| Endurance (Total Bytes Written) | 1 Petabyte | 1.5 Petabytes | 3.5 Petabytes |
| MTBF | 2 million device hours | 2 million device hours | 2 million device hours |
| Power Consumption | < 3.8W | < 3.8W | < 3.8W |
The P300 will be available in three capacities: 50GB, 100GB and 200GB. The drives ship with 64GB, 128GB and 256GB of SLC NAND on them by default. Roughly 27% of the drive capacity is designated as spare area for wear leveling and bad block replacement. This is in line with other enterprise drives like the original 50/100/200GB SandForce drives and the Intel X25-E. Micron’s P300 datasheet seems to imply that the drive will dynamically use unpartitioned LBAs as spare area. In other words, if you need more capacity or have a heavier workload you can change the ratio of user area to spare area accordingly.
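Two quick calculations fall out of the numbers above: the spare-area percentage follows from the NAND vs. formatted capacities, and the endurance ratings translate into lifetime under a given write load. A minimal sketch (the 500GB/day workload is an invented example, not a Micron figure):

```python
# Spare-area and endurance arithmetic for the P300 capacities listed above.
# The daily write volume is an assumed example workload.

drives = {
    # name: (formatted capacity GB, raw NAND GB, rated endurance in PB written)
    "50GB":  (46.5,  64, 1.0),
    "100GB": (93.1, 128, 1.5),
    "200GB": (186.3, 256, 3.5),
}
daily_writes_gb = 500  # assumption

for name, (fmt_gb, raw_gb, endurance_pb) in drives.items():
    spare_pct = (raw_gb - fmt_gb) / raw_gb * 100
    lifetime_days = (endurance_pb * 1e6) / daily_writes_gb  # 1 PB ~= 1e6 GB
    print(f"{name}: ~{spare_pct:.0f}% spare area, "
          f"~{lifetime_days / 365:.0f} years at {daily_writes_gb}GB written/day")
```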
Micron shared some P300 performance data with me:
| Micron P300 Performance Specifications | Peak | Sustained |
|---|---|---|
| 4KB Random Read | Up to 60K IOPS | Up to 44K IOPS |
| 4KB Random Write | Up to 45.2K IOPS | Up to 16K IOPS |
| 128KB Sequential Read | Up to 360MB/s | Up to 360MB/s |
| 128KB Sequential Write | Up to 275MB/s | Up to 255MB/s |
The data looks good, but I’m working on our Enterprise SSD test suite right now so I’ll hold off any judgment until we get a drive to test. Micron is sampling drives today and expects to begin mass production in October.
Ask and You Shall Receive: GPU Bench is Live
One day I got the bright idea to benchmark the living crap out of everything I could find. What resulted was a huge Excel sheet of CPU performance results. Then Intel released the X25-M and I realized that I would have much more repeatable and reliable numbers if I used SSDs (don't have to worry about defragging between runs), at which point I re-ran everything in the Excel sheet.
To make a long story short, we launched a feature called Bench. It's a comparison tool that lets you pit products against one another using our own internal test results. If you want to know whether the Core i5 750 will be a significant upgrade from your Core 2 Quad Q6600, you can head over to Bench and find out. We have over 100 CPUs in Bench today across over 20 benchmarks. CPUs are being added all the time as they come out and we're constantly evaluating new benchmarks to introduce as well.
When I'm not testing CPUs, working with Brian on smartphones or playing with Mac gear, I'm knee deep in SSDs. I've been itching to write a follow-on to the SSD Relapse; however, not enough has changed just yet. Plus with all that's happening in the other segments I cover directly, it's easier for me to focus on shorter SSD articles. Adding SSD performance data to Bench was an obvious next step, which I made not too long ago.
You all have been asking for three things fairly consistently when it comes to Bench: the ability to have all benchmarks sorted the same way (e.g. higher is better), the ability to compare more than two products, and a GPU version of Bench. Today I'm happy to announce that the first version of GPU Bench is live.
We've tweaked the landing page for Bench a bit to let you access CPU, SSD and GPU Bench data even easier. As is the case with CPU and SSD Bench, as new cards get released we'll be expanding the GPU Bench database to include them. At present we go back as far as the GeForce 8800 GT and Radeon HD 3870 (at 1680 x 1050).
I hope you enjoy the addition and expect more Bench features to surface as the year goes on. As always, thanks for reading :)
HP EliteBook 8440w: On-the-Go Workstation
HP's business-centric EliteBooks have been around since 2008 in name, but in reality, EliteBook is just a new name for the old HP Compaq business notebook line. With HP releasing a flood of popular entry level and mainstream consumer notebooks with both HP and Compaq labels, this understandably created a marketing issue for the costlier and higher end business and workstation class machines. Since the HP Compaq brand didn't have the name cachet of the iconic IBM/Lenovo ThinkPads or even Dell's Latitude business notebooks, HP's marketing team decided to scrap the confusing "HP Compaq" tag entirely and rebrand their business notebooks as EliteBooks.
We have one of the newest EliteBooks here today, the EliteBook 8440w mobile workstation. For a 14" notebook, it's quite the powerhouse, with a Core i7-620M processor and Nvidia's Quadro FX 380M discrete graphics chip to go along with 4GB of memory, a 320GB SATA hard drive, integrated DVD burner, and a high resolution 14" 1600x900 screen—it's even got a matte finish! But for the $1649 price tag, the 8440w could have used a bit more power on either the CPU or GPU side, with a quad-core Core i7 (which is an optional extra) or faster Quadro graphics card at the top of our wishlist.
| HP EliteBook 8440w Specifications | |
|---|---|
| Processor | Intel Core i7-620M (2.66GHz, 32nm, 4MB L3, 35W) |
| Chipset | Intel QM57 Express |
| Memory | 2x2048MB DDR3-1333 (max 2x4GB DDR3-1333) |
| Graphics | NVIDIA Quadro FX 380M (512MB GDDR3 VRAM) |
| Display | 14.0" LED Backlit Matte WXGA+ (1600x900) |
| Hard Drive | 2.5" 320GB 7200RPM SATA (Seagate ST9320423AS) |
| Networking | Intel 82577LM PCI-E Gigabit Ethernet, Intel Centrino Ultimate-N 6300 (3x3) 802.11a/b/g/n |
| Audio | Realtek ALC269 2-Channel HD Audio (2.0 speakers with headphone/microphone jacks) |
| Battery | 9-cell Li-Ion, 100Wh |
| Front Side | SD/MMC card reader |
| Left Side | 3 x USB 2.0, 1 x FireWire 1394a |
| Right Side | RJ-11, Gigabit Ethernet, eSATA/USB combination |
| Back Side | VGA, DisplayPort, AC power connection, Kensington lock |
| Operating System | Windows 7 Professional 64-bit |
| Dimensions | 13.21" x 9.30" x 1.23" (WxDxH) |
| Weight | Starting at 4.9 lbs (with 6-cell battery) |
| Extras | Bluetooth 2.0, 2.0MP webcam, integrated TrackPoint, multitouch touchpad, SD/MMC/MS Pro flash reader |
| Warranty | 3-year warranty with onsite repairs, 1-year battery warranty |
| Pricing | 8440w-FN093UT for $1649 from HP Business |
But even without a quad-core or a high end GPU, the 8440w is a pretty formidable beast, boasting enough computing horsepower to acquit itself well for mobile CAD work and most reasonable tasks. Obviously, it won't replace the power of a workstation-class desktop or anything like that, but is it good enough for on-the-go design work? Let's find out.
WD VelociRaptor 600GB now in SSD Bench
We've had great feedback on the launch of GPU Bench and Bench in general - I'd like to extend a personal thank you to everyone who took the time to comment or write with suggestions on how to make Bench better. This is ultimately your site, we work for you, so I really do appreciate you guys being active in all of this - it makes my job a lot easier :)
The top requests we've seen are for things like mobile CPUs/GPUs, pricing data and of course a few HDDs in the SSD Bench. Rest assured that virtually everything you've asked for is on the to-do list and you can expect to start seeing some of that before the end of the year. Something that was very easy for me to do however was to add the WD VelociRaptor VR200M to our SSD Bench results.
HDD Bench is in the works and I'd prefer to keep the two separate for now, but I figure the fastest desktop HDD on the planet being in SSD Bench should be enough to give people an idea of how the SSD vs. HDD comparison stacks up.
It's back to work for me, I just wanted to let you all know that your voice is definitely heard and appreciated.
The Dell Streak Review
The iPhone has an unusual problem. Its UI is fast and smooth enough that you want to browse the web on it. However, the device is cramped enough that you don’t want to use it for any serious web browsing. If you’re just looking to quickly read something it’s fine, but logging in to websites or interacting with a more complex web app is just a pain on a screen that small - regardless of how fast the device is. Apple’s solution is to turn you toward apps, or sell you an iPad. HTC and Motorola provided an alternative: increase the screen size of their smartphones.
Dell took it one step further, and for some reason called it the Streak.
When I first laid hands on the Dell Streak (originally called the Dell mini 5), I had been struggling with editing an AnandTech article on the iPhone 3GS in a Las Vegas cab. I believe the first words I uttered were: “I would totally carry this.”
The 5” diagonal screen gave me enough screen real estate that interacting with web pages isn’t a pain. It’s not the same experience you’d get with an iPad, but it’s maybe 60% of the way there. And unlike the iPad, the Streak can double as a cell phone. Text is big enough that I found myself reading PDFs, emails and web pages more on the Streak than I would on any other smartphone. It felt like a true productivity device: small enough to carry in my pocket, but large enough for me to get work done.
There’s a clear benefit to having a larger device - it’s easier to use. Text is easier to read, web pages are easier to navigate and presumably the keyboard is more pleasant to type on.
While the Streak definitely enables the first item on the list, the rest aren’t as clean cut.
Size Matters
I wrote in the EVO 4G review that the EVO wasn’t that big. Well, the Streak is. It dwarfs even the Motorola Droid X.
From left to right - Dell Streak, Motorola Droid X, Apple iPhone 4
The Streak is very well built. I’d argue that it’s up there with the Nexus One in terms of build quality. There’s no learning curve for Dell here. The design, styling and build quality are all top notch.
The Streak is thin. It’s the only thing that makes the 5” screen size acceptable. If it were any bulkier the device would be a pain to carry, but at 0.39” (9.98mm) thin there’s potential here.
I put tons of talk time on the Streak, using it as my only phone, and the size is a non-issue for using it as a cell phone. Granted you look absolutely ridiculous holding it up to your head, but it works and it isn’t uncomfortable. If anything, it’s more comfortable resting the Streak on your shoulder and putting it up to your ear than a normal smartphone since there’s so much more surface area. This is largely due to the fact that despite the Streak’s size, it only weighs 7.76 ounces (220g) and is thinner than a Nexus One.
From left to right - Google Nexus One, Dell Streak, Apple iPhone 4
The Streak is pocketable if you’re a guy who wears normal pants. Women’s pants aren’t quite as accommodating, but practical purses will house the Streak without complaining. While I love using the Streak, I’m not a fan of carrying it places. It doesn’t feel heavy in my pockets, it just feels big. And there usually aren’t good places to keep the Streak in the car. It’s easier to carry around than an iPad for sure, but not compared to the EVO 4G or Droid X.
My only complaint about the design is the set of physical buttons on the Streak. They are all nondescript. The worst is the physical shutter button, which attempts to mimic a digital camera’s shutter release by focusing when you press it halfway. That part works fine, but try to push it down all the way to activate the shutter and take a picture and you’ll find that you have to push down way too far. The button actually has to recess into the Streak’s housing to trigger a photo, which is not only awkward but also tends to make you move the phone a bit before you take your shot.
There’s more than enough surface area for Dell to have used beefier buttons on the Streak. These slender creatures seem better fit for something Nexus One sized. They don’t work with the design in my opinion.
| Physical Comparison | Apple iPhone 4 | Apple iPhone 3GS | Dell Streak | HTC EVO 4G | Motorola Droid X |
|---|---|---|---|---|---|
| Height | 115.2 mm (4.5") | 115 mm (4.5") | 152.9 mm (6.02") | 121.9 mm (4.8") | 127.5 mm (5.02") |
| Width | 58.6 mm (2.31") | 62.1 mm (2.44") | 79.1 mm (3.11") | 66.0 mm (2.6") | 66.5 mm (2.62") |
| Depth | 9.3 mm (0.37") | 12.3 mm (0.48") | 9.98 mm (0.39") | 12.7 mm (0.5") | 9.9 mm (0.39") |
| Weight | 137 g (4.8 oz) | 133 g (4.7 oz) | 220 g (7.76 oz) | 170 g (6.0 oz) | 155 g (5.47 oz) |
| CPU | Apple A4 @ ~800MHz | Apple/Samsung A3 @ 600MHz | Qualcomm Scorpion @ 1GHz | Qualcomm Scorpion @ 1GHz | TI OMAP 3630 @ 1GHz |
| GPU | PowerVR SGX 535 | PowerVR SGX 535 | Adreno 200 | Adreno 200 | PowerVR SGX 530 |
| RAM | 512MB LPDDR1 (?) | 256MB LPDDR1 | 512MB LPDDR1 | 512MB LPDDR1 | 512MB LPDDR1 |
| NAND | 16GB or 32GB integrated | 16 or 32GB integrated | 16GB microSD + 2GB integrated | 8GB microSD | 8GB microSD |
| Camera | 5MP with LED flash + front facing camera | 3MP | 5MP with dual LED flash + front facing camera | 8MP with dual LED flash + front facing camera | 8MP with dual LED flash |
| Screen | 3.5" 640 x 960 LED backlit LCD | 3.5" 320 x 480 | 5" 800 x 480 | 4.3" 480 x 800 | 4.3" 480 x 854 |
| Battery | Integrated 5.254Whr | Integrated 4.51Whr | Removable 5.661Whr | Removable 5.5Whr | Removable 5.698Whr |
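One thing the table doesn't spell out is pixel density, which is easy to derive from the screen rows above (diagonal sizes and resolutions as listed; the math is just Pythagoras):

```python
# Pixel density derived from the screen specs in the comparison table above.
import math

screens = {
    "iPhone 4":    (3.5, 640, 960),
    "iPhone 3GS":  (3.5, 320, 480),
    "Dell Streak": (5.0, 800, 480),
    "EVO 4G":      (4.3, 480, 800),
    "Droid X":     (4.3, 480, 854),
}

for name, (diag_in, w, h) in screens.items():
    ppi = math.hypot(w, h) / diag_in  # diagonal pixels / diagonal inches
    print(f"{name}: ~{ppi:.0f} PPI")
```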
There are only three fixed touch buttons on the phone: home, menu and back. Their icons don’t rotate, making it clear that Dell sees the Streak as primarily a landscape device. There’s no optical or physical trackball on the phone, similar to the EVO 4G. You get left and right arrow keys on the virtual keyboard to help you navigate text boxes with granularity.
Along the bottom of the device is a 30-pin connector for power and USB. It’s like an iPhone dock connector but thicker.
The screen flows into the surrounding border in a manner I can only describe as being Apple-like. Dell really did its homework in the design of the Streak and the result is a good looking device. At the risk of sounding like somebody’s grandpa, Dell should’ve called it the Sleek instead. Har har.
Getting access to the battery, SIM card and microSD slot is the best out of any Android phone I’ve used. There’s a thin back panel that you have to slide up to remove. Sliding the panel up requires enough effort to feel secure, but not enough to be frustrating. This is the porridge you’ve been looking for, Goldilocks.
As long as you take care to line up all of the little latches before sliding the panel back on, you’ll maintain the same secure feel you got when you first opened the device. No rattles, no squeaks, nothing - the Streak is solid.
There are a pair of gold contacts that touch the metal battery cover. Remove the cover and the Streak turns off.
The battery has a faux aluminum finish on one side but is otherwise a pretty standard lithium ion battery. You’ll need to remove it to gain access to the SIM card and microSD slots.
The Streak ships with a 16GB microSD card installed. The microSD slot isn’t spring loaded so you just push the card in or pull it out to remove. You also have 2GB of flash on board for apps, bringing the total out of box storage to 18GB.
Dell bundles the Streak with a 30-pin Dell dock to USB cable, as well as an AC power adapter that you plug the USB cable into. The power adapter has foldable prongs but is slightly long.
The phone is only available through Dell's website for $549.99 or $299.99 with a 2-year contract from AT&T. Update: Dell just confirmed that the Streak is carrier locked to AT&T in the US regardless of whether or not you sign a contract.
Everything You Always Wanted to Know About SDRAM (Memory): But Were Afraid to Ask
It’s coming up on a year since we published our last memory review; possibly the longest hiatus this section of the site has ever seen. To be honest, the reason we’ve refrained from posting much of anything is because things haven’t changed all that much over the last year – barring a necessary shift towards low-voltage oriented ICs (~1.30V to ~1.50V) from the likes of Elpida and PSC. Parts of these types will eventually become the norm as memory controllers based on smaller and smaller process technology, like Intel’s 32nm Gulftown, gain traction in the market.
While voltage requirements have changed for the better, important memory timings like CL and tRCD haven’t seen an improvement; we’re almost at the same point we were a year ago. Back then Elpida provided a glimpse of promise with their Hyper series of ICs, parts capable of high-speed and low-latency operation at the same time. Unfortunately, due to problems with long-term reliability, Hyper is now defunct. Corsair and perhaps Mushkin still have enough stock to sell for a while, but once it's gone, that’s it.
Corsair Dominator GTs based on Elpida Hyper - they're being phased out for something slower...
The superseding Elpida BBSE variant ICs and a range of chips from PSC now dominate the memory scene, ranging from mainstream DDR3-1333 speeds all the way to insanely-rated premium DDR3-2500 kits. Some of these parts are capable of keeping up with Hyper when it comes to CL, but do so by adding a few nanoseconds of random access latency due to a looser tRCD. Given that read and write access operations make up a significant portion of memory power consumption, this step backwards in performance may be a requisite factor for reliability – perhaps something was found by Elpida during the production lifetime of Hyper ICs that prompted a re-examination, leading to a more conservative recipe for data transfer/retrieval.
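To see why a looser tRCD costs "a few nanoseconds", it helps to convert timings from clock cycles into time. A minimal sketch follows; the example speed bins are purely illustrative (not the measured settings of any specific Hyper or BBSE/PSC kit), and the CL + tRCD sum is a simplified stand-in for row-activate-to-data latency:

```python
# Converting DDR3 timings from clock cycles to nanoseconds.
# tCK (the memory clock period) = 2000 / data_rate_MT_s nanoseconds,
# because DDR transfers data twice per clock.

def cycles_to_ns(cycles: int, data_rate_mts: int) -> float:
    tck_ns = 2000.0 / data_rate_mts
    return cycles * tck_ns

examples = [
    # (label, data rate in MT/s, CL, tRCD) -- illustrative bins only
    ("DDR3-1600 CL6, tRCD 6", 1600, 6, 6),
    ("DDR3-1600 CL7, tRCD 8", 1600, 7, 8),
]

for label, rate, cl, trcd in examples:
    access_ns = cycles_to_ns(cl + trcd, rate)  # simplified random access latency
    print(f"{label}: CL + tRCD = {access_ns:.1f} ns")
```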
Today’s memory section comeback was fuelled by the arrival of a number of mainstream memory kits at our test labs – many of the kits we were using for motherboard reviews are no longer for sale so we needed to update our inventory of modules anyway. Corsair, Crucial and GSkill kindly sent memory from their mainstream line-ups. The original intent was to look at a few of those kits.
However, during the course of testing these kits, our focus shifted from writing a memory review (showing the same old boring graphs) to compiling something far more meaningful: a guide to memory optimization and addressing, including a detailed look at important memory timings, and an accounting of some of Intel’s lesser-known memory controller features. As such, this article should make a very compelling read for those of you interested in learning more about some of the design and engineering that goes into making memory work, and how a little understanding can go a long way when looking for creative ways to improve memory performance…