Can I extend the life of my i5 4690k system?

davidzvi (Super Moderator)
A few things, take from them what you will.

MS is requiring an 8th gen or later Core CPU for Windows 11. So an i5-4### or i7-5### doesn't matter; either way, no Windows 11.

There is a notable difference between the HD 4600 graphics and even something like the GT 1030, and the integrated graphics in the i7-5775C is a step up as well. But the bigger question is whether either is enough for the application to offload the work from the CPU to the GPU. Sadly, not a question I know the answer to.

A CPU cooler is really a quick and easy help. The Cooler Master Hyper 212 or ARCTIC Freezer 34 eSports DUO would help. Even something as small as the ARCTIC Freezer 7 X CO would help if you're just running a stock cooler with an overclock, and that's only about $25.

I'm not sure an NVMe drive would be worth it. No, you're not limited to just a SATA version, but that MoBo only supports PCIe 2.0. So how much would you really gain?
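For rough numbers on that ceiling, here's a quick sketch (Python; these are the published link rates, so real drives land somewhat below them):

```python
# Rough throughput ceilings for the storage options on a PCIe 2.0 board.
def pcie_bw_mbps(gen: int, lanes: int) -> float:
    # Usable MB/s per lane after link encoding:
    # PCIe 2.0: 5 GT/s with 8b/10b -> 500 MB/s; PCIe 3.0: 8 GT/s with 128b/130b -> ~985 MB/s.
    per_lane = {2: 500.0, 3: 985.0}
    return per_lane[gen] * lanes

SATA3 = 600.0  # SATA III: 6 Gb/s with 8b/10b encoding -> 600 MB/s
print(f"SATA III ceiling:    {SATA3:6.0f} MB/s")
print(f"PCIe 2.0 x4 ceiling: {pcie_bw_mbps(2, 4):6.0f} MB/s")  # ~2000 MB/s
print(f"PCIe 3.0 x4 ceiling: {pcie_bw_mbps(3, 4):6.0f} MB/s")  # ~3940 MB/s
```

So an NVMe drive on this board could still hit roughly 3x SATA sequential speed, but only about half of what the same drive would do in a PCIe 3.0 slot.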
 
Thread starter
That really is the question: whether it would move the needle. From @Darmok N Jalad's experience with integrated graphics newer than mine, I doubt the Iris Pro 6200 on a 5th gen would offload the CPU either.

I do have a $25 ID-COOLING SE-224-XT coming tomorrow. I got it because it had good reviews and was $15 cheaper than the Hyper 212.
 

davidzvi (Super Moderator)
Not a bad cooler, from a quick review I just saw. The fan is a little on the louder side, but otherwise good. It also looks like you might be able to mount the fan in a pull setup instead of push if your RAM is in the way.
 

Darmok N Jalad (Temba, his aperture wide)
Quote: "Thanks for that feedback. That's a really impressive time reduction. If you don't mind me asking, how much were you able to snag a good one for? eBay listings look like auctions starting at ~$100-$150, with some Buy It Now listings higher. The shader counts and memory are much higher for the money compared to spending $100-130 on the bottom-rung GT 1030. But as @jdcope mentioned about old drivers, it looks like this might not be supported with updates any longer? It could work as a stopgap, though. [attached screenshot of the driver support notice]"
That means new driver support is ending. Existing drivers will still work and should be available for a long time. Part of the reason driver support expires for older hardware is that the new hardware is so different architecturally; new drivers are aimed at optimizing for what is current, not what has been out for several years. It gets complicated to build a driver that supports that many generations of cards. Typically it's applications that require a certain OS, less so drivers. Gaming is where driver support matters most, since new games often require patches and tweaks at the driver level to reach ideal performance or fix graphical errors.

As for price, it wasn't exactly cheap. $175. Though I guess I lucked out, as the seller had a few more and just marked them up to $200.
 
Thread starter
Gotcha. Makes sense.

That seems like a better performance-per-dollar deal on PassMark compute operations per second than the GT 1030. I'm assuming a benchmark like that would be more relevant than 3DMark scores?

I'm learning so much in this thread and I appreciate everyone who has contributed!
 

Darmok N Jalad (Temba, his aperture wide)
When checking reviews, look at the compute benchmarks primarily. Game scores might be some indication, but some cards compute better than they game, relative to others. Look for high memory bandwidth, since compute depends on fast memory to keep the operations flowing. I bet checking LuxMark scores would be a good guide.
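To put rough numbers on the bandwidth point, a small sketch (Python; the data rates and bus widths are approximate figures for cards mentioned in this thread, so check the spec sheets before relying on them):

```python
# Theoretical memory bandwidth = effective data rate per pin (Gb/s) x bus width (bits) / 8.
def mem_bw_gb_s(effective_gbps: float, bus_bits: int) -> float:
    return effective_gbps * bus_bits / 8

# Approximate figures for cards discussed above; verify against spec sheets.
cards = {
    "GT 1030 (GDDR5, 64-bit)":  (6.0, 64),   # ~48 GB/s
    "HD 6950 (GDDR5, 256-bit)": (5.0, 256),  # ~160 GB/s
}
for name, (rate, bus) in cards.items():
    print(f"{name}: ~{mem_bw_gb_s(rate, bus):.0f} GB/s")
```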
 
Thread starter
A couple of updates from my side.
1. The new cooler is really effective. I'm seeing a 12-15°C drop under sustained high loads. Pretty happy with that; I feel safer lol.

2. A friend of mine rummaged through his old computer hardware and found a Radeon HD 6950 1GB that he just gave me. It's actually above Topaz's minimum spec except for VRAM. I wasn't sure what to expect, or even if it would work for GPU acceleration. Well, I'm happy to report about a 50 percent improvement in rendering and preview time from the first few images I tried. So it's a great improvement for free, and I know a better card would improve things even more. But I can wait out the craziness for now.

The trial of PureRAW didn't show any improvement, and it flags the card as only partially compatible. Since I own Topaz, that's where the software speed improvement matters for me.
 
Thread starter
Quote: "Are you selecting that card in PureRAW preferences and then restarting? Not sure if it won't use it at all, or if it just won't work as well. Either way, glad to hear you've made some gains with minimal expense."
Yes, I selected it and restarted. Tried it again today as well, and no matter what's selected it will only use my CPU. I'm not really surprised at all.

Also, I think overclock testing should include the DeepPRIME CPU-only process as a stress test. With the new cooler I've followed a few overclocking suggestions and run a few of the recommended tests. One guide from Tom's Hardware suggested running Prime95's small-FFT test as a thermal stress test. I did that at an increased overclock and had reasonable temperatures, ~82°C, with the overclock up to 4.6 GHz. Then run PureRAW and that jumps back up into the mid 90s ... YIKES. So to use PureRAW I'd have to back off my overclock, which wouldn't be a big deal.
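If you'd rather log temps during one of these runs than eyeball a monitoring window, something like this works (a sketch assuming Linux and the psutil package; psutil doesn't expose temperature sensors on Windows, where tools like HWiNFO or Core Temp fill the same role):

```python
# Log the hottest reported CPU temperature once a second during a stress run.
# Assumes Linux with psutil installed; not applicable on Windows.
import time
import psutil

for _ in range(60):  # sample for one minute
    sensors = psutil.sensors_temperatures()
    readings = [t.current for temps in sensors.values() for t in temps]
    if readings:
        print(f"{time.strftime('%H:%M:%S')}  hottest sensor: {max(readings):.1f} C")
    time.sleep(1)
```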

At this point I'm unlikely to buy PureRAW. If I were to use any DxO software I'd probably go all the way up to PL4 and get the slider options for sharpness. I do like how it handles my 40-150 + MC-20 files.
 

Darmok N Jalad (Temba, his aperture wide)
Yeah, I already have a workflow, so adding PureRAW to the process is clean and simple for me. I guess they know there are some of us out there who can't handle the disruption of an entirely new DAM product, or at least would be less willing to try one otherwise. Hopefully they'll add more options for processing in future updates. It's still pretty new, so a few simple sliders controlling the process would certainly be welcome.

I still can't believe how well the M1 fares on these image editors. I just discovered Affinity Photo has a benchmark built into it now. The M1 just edges out my old classic Mac Pro with an R9 380. Yes, it's much newer, but it's doing so with maybe 5% of the power budget.
 

BosseBe (Mu-43 Hall of Famer)
A suggestion for all of you wanting a newer GPU: talk to your younger gaming friends; they might have cards lying about that they no longer use.
Gamers tend to upgrade often, so they may have older cards they haven't gotten rid of yet.

I was lucky when I built my new PC before Christmas. I had a GPU on order and they said it was coming in a couple of days, but I was impatient, so I cancelled and ordered from a shop that had it in stock. I think I got the last one they had! That was a Radeon RX 5500 XT 8GB and it cost about US$280. Now it lists for about 50% more, and you can't even get it!

I use a Noctua NH-D15 as my CPU cooler, and it is total overkill! Running Prime95 to stress-test the CPU, it sits at about 65°C and stays stable at 100% CPU even after half an hour. (Sure, there is noise from all the fans in the chassis, but it runs cool!)
So a good CPU cooler is a must!
 
Thread starter
To my surprise, I found it easier to integrate PL4 with my Lightroom workflow than PureRAW. It was easy enough to make a preset that just applies lens corrections and DeepPRIME. The PL4 plug-in for LR worked well to send files out and bring them back in. PureRAW made me find the file in File Explorer and drag it into the program, and then I had to close Lightroom for it to export back, which actually just pops up the import window for LR. I figure they'll make some improvements in the next version.

As for the new technology: that M1 sounds impressive. I've been feeling a bit bad about how much power my system draws compared to what something more modern would need for the same processing power.
 
Thread starter
The only gamer friends I have already cashed in!

If I drop my OC to 4.3 or 4.4 GHz, the temps are in the mid 60s. I may do that again, but I wanted to get every last drop 😁 (or see what the new cooler allowed). I'm fairly happy, since the stock speed is 3.5 GHz.
 

Darmok N Jalad (Temba, his aperture wide)
Ah, most of the time your system is sipping power. I say keep using it until it doesn't serve you well. That's probably still better for the environment than the costs of building and shipping a new machine to your doorstep. If your machine is asleep most of the day, it's barely using any power at all.
 
Thread starter
Well, as an update, I did a thing and bought a new motherboard, CPU, and RAM. I just wanted to get something and move on. Taking some of the advice here about going for the i7 over the i5, I went with a Ryzen 7 5700G over a Ryzen 5 5600G. That gives me 8 cores and 16 threads with the Vega 8 integrated graphics. I got the MSI B550-A PRO motherboard and Crucial Ballistix DDR4-3600 CL16 RAM to go along. I figured I'd take a chance on this particular integrated graphics after seeing the results posted for prior models in the DxO DeepPRIME GPU acceleration spreadsheet on their forum. I wasn't sure how it would translate to a Topaz Denoise improvement, but was generally hoping for 2x. I figured if it came close, it would hold me until a worthwhile graphics card could be bought, while getting the new processor benefits short term.

I'll just say that I'm thoroughly impressed. Here are my Topaz Denoise results with 20MP files:

Old system: i5-4690K overclocked from 3.5 to 4.6 GHz (4 cores, no HT), Radeon HD 6950 1GB card
CPU only - ~54 sec
GPU accel - ~43 sec

New system: Ryzen 7 5700G with PBO enabled and XMP on the memory, GPU overclocked to 2.3 GHz (300 MHz over stock), Infinity Fabric up to 1800 MHz
CPU only - ~31 sec
GPU accel - ~11-12 sec (prior to the OC it was 13 sec)

That's a whopping 3.5x+ improvement over the old dedicated card! I can even leave auto-updating previews on now and not get angry. It's a much better result than I expected, and I'm really in no rush to get a dedicated card now. If the market takes 1-2 years to come back to earth, I can get one at that point.
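For anyone double-checking the arithmetic, the factors fall out like this (Python, using the times reported above):

```python
# Sanity-checking the speedups from the times above (seconds per 20 MP file).
times = {
    "old CPU (i5-4690K @ 4.6 GHz)": 54.0,
    "old GPU (HD 6950)":            43.0,
    "new CPU (5700G)":              31.0,
    "new GPU (Vega 8, OC)":         11.5,  # midpoint of the 11-12 s range
}
old_gpu = times["old GPU (HD 6950)"]
for name, secs in times.items():
    print(f"{name:30s} {secs:5.1f} s  ({old_gpu / secs:.1f}x vs old GPU)")
# 43 / 11.5 ~= 3.7x, i.e. a ~31-32 s saving per image, matching the post above.
```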

Unfortunately my trial for DxO ran out, so I can't compare the times for that.

To summarize my whole thread: my 4690K has ended its six-year run with me.
 

doady (Mu-43 Top Veteran)
I will be building a similar system - 5600G and B550 - probably within a year, so it's nice to hear of good performance with photo editing software. Of course I will be coming from a Phenom II X4 945, which is much weaker than the i5 4690K, but my Radeon 7850 is very similar to the 6950 in terms of performance.

I am very surprised by how much extra speed was gained by using the integrated GPU. A 20 second gain, compared to an 11 second gain from the 6950? Really? From what I've seen previously, the integrated GPU should have similar raw performance to the 6950 or 7850, which is still impressive considering it's integrated and has maybe 1/6 or 1/7 the power consumption of the 6950 and 7850.

Maybe it's the fast 3600MHz RAM that makes the difference. I will get 3200MHz RAM and most tests I've seen are with 3200MHz. RAM is the main thing holding integrated GPUs back now, but with DDR5 coming, dedicated GPUs could become a thing of the past.
 

Darmok N Jalad (Temba, his aperture wide)
I’m thinking of a similar build down the road. Having gone entirely to Linux, I’m not using the denoising software anymore. I might just do the 5600G, but I’m not in any rush at the moment. I could probably sell what I have and fund the upgrade though, considering that even an old R9 380 might bring $150-200 on the e of Bays.

That’s some impressive gains, for sure. What the integrated Vega lacks in CUs, it makes up for in clockspeed.
 
Thread starter
@doady said: "I am very surprised by how much extra speed was gained by using the integrated GPU. 20 second gain compared to 11 second gain from 6950? Really? From what I've seen previously, the integrated GPU should have similar raw performance to 6950 or 7850, [...]"
I'm a little confused here. Are you trying to assess the improvement of each graphics solution relative only to the CPU it was paired with at the time? If so, that's not the best way to look at it, since Topaz treats CPU and GPU as standalone processing options. When you select CPU to process, it doesn't engage the GPU at all; similarly, when GPU is selected, it only uses the GPU to process and the CPU is basically idle. So if I installed the 6950 in my new system and selected it as the processing option, I would expect the same processing time as in the old system.

What this means is that the GPU improvement from the 6950 to the Vega 8 in the 5700G can be compared directly: a ~31-32 s (3.5x+) improvement for Topaz Denoise AI v3.2.

I too was surprised at how much it improved. Looking at the FP32 values, they are essentially equal. Since that's a theoretical calculation, I'm betting there's some generation-on-generation efficiency in the engine that isn't accounted for, maybe in how the order of calculations maps onto the hardware, or something. When I asked Topaz directly whether there would be an improvement even though FP32 didn't show much difference, they suggested the Vega 8 would be much faster; they didn't give me another metric to compare with, though. Alternatively, it could come down to the different compute models they support, along with which versions of OpenGL, OpenCL, OpenVINO, DirectX, etc. the graphics or CPU support.

From Wikipedia's coverage of the graphics core in Vega, it could be as simple as allowing computations at less than FP32 precision:

Graphics Core Next 5

AMD began releasing details of their next generation of GCN architecture, termed the 'Next-Generation Compute Unit', in January 2017.[36][40][41] The new design was expected to increase instructions per clock and support higher clock speeds, HBM2, and a larger memory address space. The discrete graphics chipsets also include "HBCC (High Bandwidth Cache Controller)", but not when integrated into APUs.[42] Additionally, the new chips were expected to include improvements in the rasterisation and render output units. The stream processors are heavily modified from the previous generations to support packed-math Rapid Packed Math technology for 8-bit, 16-bit, and 32-bit numbers. With this there is a significant performance advantage when lower precision is acceptable (for example: processing two half-precision numbers at the same rate as a single single-precision number).

Precision Performance

Double-precision floating-point (FP64) performance of all GCN 5th generation GPUs, except for Vega 20, is 1/16 of FP32 performance. For Vega 20 this is 1/2 of FP32 performance.[54] All GCN 5th generation GPUs support half-precision floating-point (FP16) calculations at double the FP32 rate.
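The packed-math advantage is easiest to see in bytes: a half-precision value is 2 bytes instead of 4, so a bandwidth-bound pass moves half the data, and GCN 5 can also issue two FP16 operations where it issues one FP32 operation. A tiny numpy illustration of the storage side (whether Topaz actually drops to FP16 here is my speculation, not something they've confirmed):

```python
# Memory footprint of a 20 MP single-channel buffer at FP32 vs FP16 (numpy).
import numpy as np

pixels = 20_000_000
for dtype in (np.float32, np.float16):
    buf = np.zeros(pixels, dtype=dtype)
    print(f"{np.dtype(dtype).name}: {buf.nbytes / 1e6:.0f} MB")
# float32: 80 MB, float16: 40 MB per pass -- half the memory traffic, on top
# of Rapid Packed Math executing two FP16 ops per FP32 lane.
```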

Also, I don't know that this level of improvement should be expected in other programs like DxO's DeepPRIME or ON1 NoNoise. The 6950 couldn't be used at all with the DxO programs, so I can't make a GPU comparison there. They don't spell out a specific requirement the way Topaz did with OpenGL 3.3+. The results users post seem to show DeepPRIME acceleration roughly linear with FP32 performance, at least until you get to the latest ray-tracing generations. There is one result with a Ryzen 7 PRO 4750G, which has the same Vega 8. It calculates at 1.1 MP/s, so a 20MP file would take roughly 18 sec.
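The conversion from the spreadsheet's rating to a per-image time is just megapixels over throughput, if you want to compare other entries:

```python
# Convert a DeepPRIME throughput rating (MP/s) into an expected per-image time.
def seconds_per_image(megapixels: float, mp_per_sec: float) -> float:
    return megapixels / mp_per_sec

# The spreadsheet's Vega 8 entry: 1.1 MP/s -> ~18 s for a 20 MP file.
print(f"{seconds_per_image(20, 1.1):.1f} s")
```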

@doady said: "Maybe it's the fast 3600MHz RAM that makes the difference. I will get 3200MHz RAM and most tests I've seen are with 3200MHz. RAM is the main thing holding integrated GPUs back now, but with DDR5 coming, dedicated GPUs could become a thing of the past."
It's not the RAM making the bulk of the difference. When I ran the 5700G stock, with stock Vega 8 speed and stock DDR4 speed (2400MHz), the compute times on the Vega 8 were only 1.5-2 seconds slower than what I showed. Higher RAM speed and higher Infinity Fabric speed can definitely improve performance, but only at the margins. And that HD 6950 has GDDR5 onboard.
 

Darmok N Jalad (Temba, his aperture wide)
Denoising doesn’t use GPU only, but a combination of GPU and CPU—there’s a component of the task that the CPU handles before passing it off to the GPU. The IPC of the 5700G is considerably better than the 4690K, plus, it has 2x the cores and 4x the threads. Plus, you’re running much faster memory with quite a bit more bandwidth. You’d know for sure how the IGP compares to the GPU if you dropped the 6950 in your new rig and ran the tests again.
 
Thread starter
There is a component; that's why I said the CPU is basically idle. For Topaz (the only one I know), the yellow section is the CPU usage from clicking Apply until it switches apps to pull the file back into Lightroom (blue) and render its preview in LR. During the Topaz compute, CPU usage is less than 10%. It was less than 20% on the old CPU during this portion, so that doesn't indicate a bottleneck to me. I have a hard time believing the CPU contributes the bulk of the time when the GPU is doing the compute; maybe it accounts for a couple of seconds of the reduction. Yes, I could be thorough and put the old graphics card in, but it's not worth the hassle.
[attached screenshot: CPU usage graph during the Topaz run]
 
