Convincing myself to skip Haswell for Skylake (&DDR4)

Glocky

Drinking your tears
So... yeah.

I had plans to do up a cool Haswell rig this spring/summer 2013. $2500 kind of cool... updating my 4 (soon to be 5) year old rig. But then today I was messing around with my overclock and managed to tweak a few voltages here and there, and it's running cooler, which may give me some headroom to push it further in the future.

So what do you do when you need to reboot / fiddle with bios to mess with your overclock? You read shit on your smartphone or laptop or whatever.

SO... here's some things I figured out today!

Since I picked quality parts when I built this thing 4yrs ago...
- my mobo (P5Q Pro) has PCI-e 2.0 x16... which means I can put a PCI-e 3.0 card in it without bottlenecking on bandwidth (quick math below)
- my PSU is 750W with a 60A 12V rail... so with the new GPUs using less energy, even allowing for capacitor aging, I can easily put a new GPU in here (say a 680 -- a 690 would saturate PCI-e 2.0 bandwidth)
- no point in putting in a 680 w/o a 120hz monitor

120hz 1080p monitor + a 680 is still cheaper than a new rig (especially post-Christmas), and I should easily get 2 more years out of it, which would get me to Skylake and DDR4.
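For anyone who wants to sanity-check the bandwidth part of that claim, here's a quick back-of-envelope sketch (Python, just using the published per-lane PCI-e spec rates; none of these numbers come from my rig):

```python
# Theoretical one-way PCI-e bandwidth per lane, after encoding overhead
# (published spec values: 1.x/2.0 use 8b/10b, 3.0 uses 128b/130b encoding).
PER_LANE_GBPS = {"1.1": 0.25, "2.0": 0.5, "3.0": 0.985}

def slot_bandwidth(gen: str, lanes: int) -> float:
    """Theoretical one-way bandwidth of a PCI-e slot in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("2.0", 16), ("3.0", 16), ("2.0", 8), ("3.0", 4)]:
    print(f"PCI-e {gen} x{lanes}: ~{slot_bandwidth(gen, lanes):.1f} GB/s")
# 2.0 x16 gives ~8 GB/s, 3.0 x16 ~15.8 GB/s, and 2.0 x8 / 3.0 x4 land around ~4 GB/s
```

The point being: a single card has to actually push more than ~8 GB/s before a 2.0 x16 slot becomes the bottleneck, and the benchmarks later in this thread suggest current single GPUs don't.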

Am I crazy?
 

47

TD Admin, Chicken Licker, Top Shelf Sleeper
yeah man, delay building new rig for as long as possible.

let me leech some wisdom. What can I do with this http://www.evga.com/articles/00481/ ? I have the CPU OC'd; the GPU starts to sound like an F-16 if I OC it. Memory OCing is a mystery to me. CPU-Z says DRAM freq. = 758 MHz, NB freq. = 3032 MHz. I have no idea what I'm doing. How I OC'd my CPU: searched the internets for what settings people used to get 4 GHz, then adjusted voltage a bit.

also, what's the best GPU I can jam in there? The 5850 is becoming dated.
:flerp:
 

Glocky

Drinking your tears
I'm not super familiar with the X58 platform, but the basics are similar, except the FSB changed to QPI, and running the RAM and CPU at 1:1 doesn't matter any more (it never really did; memory dividers just made earlier overclocking trickier). If the CPU in your sig is your current OC, then it's pretty good. If you haven't disabled turbo, it will go a little faster at times, jumping your multi from 21x to 22x (around 4.17 GHz). Over 4 GHz means big current and big cooling (and isn't needed for gaming).

Overclocking your GPU doesn't have to make it sound like a jet... you just need to set a manual fan profile that keeps the temps appropriate. Yeah, your 5850 and my 6870 have similar results in gaming, though I think I've had more overclocking success. From your link, your board is PCI-e 2.0 x16, so as long as you have a good PSU (watts, amps and reliability), you can also jump up to a GTX 680 or HD 7970. Any dual-GPU card like a GTX 690 is pissing money away, as we don't have the bandwidth on 2.0.
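On the PSU side, here's a rough headroom sketch for a 750W unit with a 60A 12V rail like mine (the wattage figures are approximate published TDPs and estimates, not measured values):

```python
# Rough 12V rail headroom check (sketch; TDPs and estimates, not measurements).
rail_amps, rail_volts = 60, 12
available_12v_watts = rail_amps * rail_volts   # 720 W available on the 12V rail

cpu_oc_watts = 150      # rough estimate for an overclocked quad-core under load
gtx680_tdp = 195        # NVIDIA's published TDP for the GTX 680
rest_of_system = 75     # drives, fans, board, RAM (estimate)

headroom = available_12v_watts - (cpu_oc_watts + gtx680_tdp + rest_of_system)
print(f"~{headroom} W of 12V headroom left")   # ~300 W spare, even allowing for capacitor aging
```

Swap in your own PSU's 12V amperage and your card's TDP to get a feel for your own margin.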

As to the memory, what is the exact type? DDR3 running at 758 MHz is DDR2 speed, and 8-8-8-22 is a very loose set of timings. *edit* x2 = 1516 MHz, so you may already be overclocking DDR3-1333 or slightly underclocking DDR3-1600. The difference is minimal. *edit*
Screenshots of both the SPD and Memory tabs in CPU-Z would help. You may just be able to select a faster/tighter XMP profile, as RAM usually boots to the slowest/loosest profile to ensure it does boot.
 

zackychuu

TD Admin / Wanker
Glocky This is what shows up in Speccy for my RAM: 8.00 GB Dual-Channel DDR3 @ 666MHz (9-9-9-24)
I'm guessing, by what you said up there^ ("DDR3 running at 758 MHz is DDR2 speed"), that this is a bad thing?
 

Glocky

Drinking your tears
Probably not.

Looking at both your and @47's settings, it may just be the way it's reported. It's likely really 2x that (DDR = double data rate).
So your 666 is really 666 2/3 * 2 = 1333 (fine)
And 47's is really 758 * 2 = 1516 (fine too)

So the timings are probably OK too. I'm just so used to DDR2 settings that I automatically know my 450 FSB means my RAM is running at 900 MHz, because my bus is 450. I'll edit the above so I don't sow any more confusion.
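If anyone wants to plug their own CPU-Z / Speccy number in, the conversion is just a doubling (tiny sketch):

```python
# CPU-Z and Speccy report the actual DRAM clock; DDR transfers twice per clock,
# so the effective "DDR3-xxxx" rating is simply double the reported figure.
def effective_ddr_speed(reported_mhz: float) -> float:
    return reported_mhz * 2

print(effective_ddr_speed(666.7))  # ~1333 -> zackychuu's kit is running at DDR3-1333
print(effective_ddr_speed(758))    # 1516  -> 47's kit is just shy of DDR3-1600
```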
 

47

TD Admin, Chicken Licker, Top Shelf Sleeper
the x58 is triple channel ram. using corsair dominator 6x2gb

[attached screenshot]
 

Glocky

Drinking your tears
Triple channel, but still double data rate. So your bus x2 = your true MHz (or more completely: your RAM is double data rate, third generation, run in triple channel, but only the double data rate matters for the true MHz, hence the x2 here).
And since it's rated 1600 CL8, you're slightly underclocking.
If you load the XMP-1600 profile (this link may help if you want to try), you'll be running at its fastest stock frequency and timings, i.e. 1600 8-8-8-24.
That said, the difference between that and what you're running for gaming... probably not noticeable. For e-peen / benchmarking... a bit.
 

47

TD Admin, Chicken Licker, Top Shelf Sleeper
so I use these settings? my CPU is running at 190 x 21

1600 MHz memory frequency - overclocked core frequency

Frequency / Voltage Control
CPU Host Frequency ( 200 )
CPU Clock Ratio ( 20x )
CPU Uncore Frequency ( 16x )

Memory Feature
Memory Frequency ( 2:8 )

Voltage Control
CPU VTT ( +50mv )
DIMM Voltage ( 1.65v )

Memory Frequency: 200 * 8 (2:8) = 1600 MHz
Uncore Frequency: 200 * 16 = 3200 MHz
Core Frequency: 200 * 20 = 4000 MHz



also, someone once told me that it's also good to lower the freq and tighten up the timings instead. Does that make sense?
 

Glocky

Drinking your tears
47 said:
so I use these settings? my CPU is running at 190 x 21

1600 MHz memory frequency - overclocked core frequency

Frequency / Voltage Control
CPU Host Frequency ( 200 )
CPU Clock Ratio ( 20x )
CPU Uncore Frequency ( 16x )

Memory Feature
Memory Frequency ( 2:8 )

Voltage Control
CPU VTT ( +50mv )
DIMM Voltage ( 1.65v )

Memory Frequency: 200 * 8 (2:8) = 1600 MHz
Uncore Frequency: 200 * 16 = 3200 MHz
Core Frequency: 200 * 20 = 4000 MHz

also, someone once told me that it's also good to lower the freq and tighten up the timings instead. Does that make sense?

I wouldn't take those posted settings verbatim, but something similar should get you that last little bit of performance. It sounds good though, same CPU overclock.

Timings vs. frequency is an ongoing debate. Obviously higher frequency plus tight timings is the best; short of that, it's whatever is stable.
My RAM, for example, is DDR2-800 4-4-4-12, but to get my CPU overclock the RAM needs to run at DDR2-900, which isn't stable at CAS 4, so I run it at 5-5-5-15. Better quality RAM can run tighter timings at the same frequencies as cheaper RAM.

Again, gaming wise, unlikely you'll notice the difference in anything but the most demanding games (so nothing CS lol)... but it's fun getting the most out of the parts you paid money for :) Save / write down your current working config before you play!
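If you want to put actual numbers on the timings-vs-frequency trade-off, the first-word latency in nanoseconds is just CAS cycles divided by the real memory clock (a sketch using the kits mentioned in this thread):

```python
# First-word latency (ns) = CAS cycles / actual clock (MHz) * 1000.
# The actual clock is half the effective DDR transfer rate.
def cas_latency_ns(effective_mts: float, cas: int) -> float:
    return cas / (effective_mts / 2) * 1000

print(cas_latency_ns(1600, 8))  # 10.0 ns  -> DDR3-1600 8-8-8-24 (the XMP profile)
print(cas_latency_ns(1516, 8))  # ~10.6 ns -> 47's current 758 MHz x2 at CL8
print(cas_latency_ns(800, 4))   # 10.0 ns  -> my DDR2-800 at CAS 4
print(cas_latency_ns(900, 5))   # ~11.1 ns -> the same sticks pushed to DDR2-900 at CAS 5
```

So "lower frequency, tighter timings" and "higher frequency, looser timings" can land on nearly the same absolute latency; the higher-frequency option just moves more data per second on top of that.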
 

OG buckshot jr

TD Admin
Glocky said:
Am I crazy?
Only for thinking a PCI-E 3.0 card will run as good as it should on a PCI-E 2.0 slot....

But really, it's up to you. It all depends on what you're actually running (game- and application-wise). If you're satisfied, then don't buy into the console-style release cycle of new equipment. I'm still on my Q6600, running smoothly and cool @ 3.4 GHz, and it doesn't slow down. I could use a GPU upgrade, but that'll come with the holiday season approaching in a month or two...
 

Glocky

Drinking your tears
OG buckshot jr said:
Only for thinking a PCI-E 3.0 card will run as good as it should on a PCI-E 2.0 slot....

[BF3 1680x1050 average FPS chart from the TechPowerUp PCI-Express scaling review linked below]

http://www.techpowerup.com/reviews/Intel/Ivy_Bridge_PCI-Express_Scaling/6.html
Just to pick one site, but they're all pretty consistent... the current PCI-e 3.0 cards don't come close to saturating 3.0 x16 bandwidth, and the drop to PCI-e 2.0 x16 is within about 1%; even two cards in a PCI-e 2.0 x8 setup only lose roughly 2-4%.

So with my PCI-e 2.0 x16 at my current resolution I would get 87.5 avg FPS, versus 87.7 avg FPS with PCI-e 3.0 x16... performance is within the margin of error, so I am not crazy. It's +/- 1%. The difference may be even smaller with my CPU.
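For the skeptical, that difference as a percentage (trivial, but it shows why I'm calling it margin of error):

```python
# Avg FPS from the TechPowerUp chart above: PCI-e 2.0 x16 vs 3.0 x16 at 1680x1050
pcie2_fps, pcie3_fps = 87.5, 87.7
print(f"{(pcie3_fps - pcie2_fps) / pcie2_fps * 100:.2f}% difference")  # ~0.23%
```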

OG buckshot jr said:
But really, it's up to you. It all depends on what you're actually running (game- and application-wise). If you're satisfied, then don't buy into the console-style release cycle of new equipment. I'm still on my Q6600, running smoothly and cool @ 3.4 GHz, and it doesn't slow down. I could use a GPU upgrade, but that'll come with the holiday season approaching in a month or two...
CPU and the rest, I'm content, as I've indicated, but I'd like a smoother experience (higher average / more consistent FPS).
I'd also be running more efficiently: lower power consumption than my current overclocked GPU, plus better AA and other smoothing like adaptive v-sync.
 

OG buckshot jr

TD Admin
I'm stunned at those results. I'd be curious what test system they're using, and whether that biases the results. I know from real-world experience (Cock, for example) that going from even PCI-e 1.1 x4 to PCI-e 1.1 x8 would yield big gains... What these cats are testing, Lord knows.
 

Glocky

Drinking your tears
Test System
Processor: Intel Core i7-3770K @ 4.7 GHz
(Ivy Bridge, 8192 KB Cache)
Motherboard: ASUS Maximus V Gene
Intel Z77
Memory: 2x 4096 MB Corsair Vengeance PC3-12800 DDR3
@ 1600 MHz 9-9-9-24
Harddisk: WD Caviar Blue WD5000AAKS 500 GB
Power Supply: Antec HCP-1200 1200W
Software: Windows 7 64-bit Service Pack 1
Drivers: GTX 680: 301.24
ATI: Catalyst 12.3
Display: LG Flatron W3000H 30" 2560x1600
3x Hanns.G HL225DBB 21.5" 1920x1080
Benchmark scores in other reviews are only comparable when this exact same configuration is used.
  • All video card results were obtained on this exact system with the exact same configuration.
  • All games were set to their highest quality setting unless indicated otherwise.
  • AA and AF are applied via in-game settings, not via the driver's control panel.
Each benchmark was tested at the following settings and resolution:
  • 1280 x 800, 2x Anti-aliasing. Common resolution for most smaller flatscreens today (17" - 19"). A bit of eye candy turned on in the drivers.
  • 1680 x 1050, 4x Anti-aliasing. Most common widescreen resolution on larger displays (19" - 22"). Very good looking driver graphics settings.
  • 1920 x 1200, 4x Anti-aliasing. Typical widescreen resolution for large displays (22" - 26"). Very good looking driver graphics settings.
  • 2560 x 1600, 4x Anti-aliasing. Highest possible resolution for commonly available displays (30"). Very good looking driver graphics settings.
  • 5760 x 1080, 4x Anti-aliasing. Typical high-end gaming multi-monitor resolution. Very good looking driver graphics settings.
 

Cock

Cockilicious
Staff member
What BJ is talking about: I went from a P45 mobo to a G33 mobo (not by choice) and had a 100+ FPS drop, with the same CPU, same GPU, and same RAM.
The only real difference I could find between them was PCI-E 1.1 vs PCI-E 2.0.
 

Glocky

Drinking your tears
Makes sense for you though, Cock, especially if your GPU was filling (or almost filling) the 2.0 slot and was then too much for the 1.1, and especially if you went from 2.0 x8 or x16 down to 1.1 x8.
The full analysis from TechPowerUp says:

TechPowerUp said:
Conclusion

Almost 2,000 individual benchmark runs later, we have a much clearer picture of PCI-Express scaling using the latest and greatest hardware. Our results here can also be extended to other platforms that use PCI-Express for graphics card connectivity, because performance of the PCI-Express controller itself can not lead to significant differences, as long as transfer rate and number of lanes remains the same.
  • The new PCI-Express 3.0 interface can provide around 1% performance boost for both HD 7970 and GTX 680. While this confirms that both cards provide working support for Gen 3, such a small improvement is clearly not worth worrying about. It certainly does not warrant buying a new processor or motherboard. PCI-Express is forward and backward compatible, so any PCI-Express graphics card will work in any motherboard's PCI-Express slot, no matter which version each component supports.
  • PCI-Express 1.1 x8 poses a significant performance loss, so for people clinging on to old platforms, it's time to upgrade.
  • PCI-Express configurations that promise the same performance, do deliver on it. For example, we see 1.1 x16, 2.0 x8 and 3.0 x4 with the same performance, within a 1% margin, which is not significant beyond benchmarking.
  • Our testing confirms that modern graphics cards work just fine at slower bus speed, yet performance degrades the slower the bus speed is. Everything down to x16 1.1 and its equivalents (x8 2.0, x4 3.0) provides sufficient gaming performance even with the latest graphics hardware, losing only 5% average in worst-case. Only at even lower speeds we see drastic framerate losses, which would warrant action.
  • Each game has different requirements for PCI-Express bandwidth, depending on the game engine design. Alan Wake is most dependent on a fast bus interface, losing up to 70% framerate, whereas Aliens vs. Predator handles bandwidth starvation the best, losing only 10% in worst case (1280x800 GTX 680).
  • Contrary to intuition, the driving factor for PCI-Express bus width and speed for most games is framerate, not resolution. Our benchmarks conclusively show that with higher resolution, the performance difference between PCIe configurations shrinks. This is because the bus transfers a fairly constant amount of scene and texture data - for each frame. The final rendered image never moves across the bus, except in render engines that do post-processing on the CPU, for example Alan Wake. Even in that case, the reduction in FPS from higher resolution is bigger than the increase in pixel data.
  • NVIDIA's GeForce GTX 680 suffers a relatively bigger performance hit from a slower PCI-Express interface than AMD's HD 7970. Going from x16 3.0 to x4 1.1 causes the HD 7970 to lose 14%, GTX 680 loses 27% real-life performance for the same transition. A reasonably accurate rule of thumb is that GTX 680 loses twice the percentage from slower PCI-E speeds, compared to HD 7970.
  • PCI-Express 2.0 x8 is still a viable mode for 2-way multi-GPU. This is the mode most Core "Sandy Bridge" platform users will end up using for multi-GPU, and differences between PCI-Express 2.0 and 3.0 x8 is just 4% and 2% for the GTX 680 and HD 7970, respectively.
  • PCI-Express 3.0 x4 is a revelation. Although we knew that on paper it provides bandwidth comparable to PCI-Express 2.0 x8, we were skeptical. The mode's real-world performance proves the theory, and could be a pleasant data point for users of performance and high-end Intel Z77 motherboards in the ATX form-factor, running Ivy Bridge Core processors, which have a third PCI-Express 3.0 x16 (electrical 3.0 x4) slot wired to the CPU.
All of the above considered, any dual-GPU 3.0 card like the GTX 690 is throwing money away since my 2.0 x16 can't handle that bandwidth, and I'm really not interested in SLI / Crossfire anyway.

I won't sell my current card though... if I end up building the new rig in less than the full 2 years, the GTX 680 goes into it and my current card goes back in this rig before it gets handed down to the kids. If I make it to spring/summer 2015, then the new rig will get a new GPU too.
 