New R12 - Performance Troubleshooting Tips

  • 1

Hi, I wonder if anyone can offer me some troubleshooting tips for my R12?  I've had it about a month and I'm still feeling things out.  It's a 3090 with an i9 11900KF.  Everything is still stock, with no modifications or overclocking yet.

I expected a performance boost over my old custom-built i5 8600K and GTX 1070 Ti, but I'm not seeing it in games.  The main games I play are BF2042, Fortnite and MSFS2020.  FS2020 seems to be a slight improvement of a few FPS (~40 FPS), but Fortnite (~110 FPS) and BF2042 (60-70 FPS) perform slightly worse than on my old system (all in-game settings the same).

I've run several benchmarks (Heaven, 3DMark, Cinebench) and the scores all seem reasonable for this hardware.

I'm getting some weird results with the CPU-Z stress test, but I'm not sure how to interpret them. The test starts off slightly lower (6450) than the i9 11900K reference system I chose, which scores 6522, but it then steadily decreases over a few minutes to 5781.  It hovers there for a few minutes and then drops to 0. AWCC reports idle temp and clock at about 30 °C and 5 GHz.  As soon as the stress test starts, the temp immediately jumps to 67 °C and the clock drops to 4.65 GHz.  After about 30 seconds the clock and temp stabilize at 4.45 GHz and 70 °C.

I thought thermal throttling on this processor starts at 80 °C, so I'm not 100% sure what I'm seeing here.  Do you think this could be throttling? I can run new tests and post them here if anyone is willing to take a look and offer suggestions.  I have the following installed:

PCMark10, 3DMark, VRMark, CPU-Z, GPU-Z, HWiNFO, AWCC, Cinebench.  I can install anything else recommended if it will help.

The processor is water-cooled and the OS is Windows 11.  All drivers and updates are applied, if that helps.
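If raw numbers are easier to read than screenshots, here is a minimal Python sketch for summarizing a HWiNFO sensor log exported as CSV.  The file name and the column headers are placeholders only; they would need to be adjusted to match whatever the actual log contains.

import csv
import statistics

LOG_PATH = "hwinfo_stress_log.csv"                      # placeholder file name
COLUMNS = ["Core Clocks (avg) [MHz]", "CPU Package [°C]"]  # placeholder headers

def summarize(path, columns):
    # Collect the numeric values for each requested column.
    data = {name: [] for name in columns}
    with open(path, newline="", encoding="utf-8", errors="replace") as f:
        for row in csv.DictReader(f):
            for name in columns:
                value = (row.get(name) or "").strip()
                try:
                    data[name].append(float(value))
                except ValueError:
                    pass                                  # skip blank or non-numeric cells
    # Print min / average / max for each column over the whole run.
    for name, values in data.items():
        if values:
            print(f"{name}: min={min(values):.0f}  "
                  f"avg={statistics.mean(values):.0f}  max={max(values):.0f}")

if __name__ == "__main__":
    summarize(LOG_PATH, COLUMNS)

Running this against a log captured during the stress test would show whether the clock sag and the temperature rise line up over the course of the run.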

Thanks for any suggestions.

Replies • 16
Planetary
  • 1

Hi BOTTLECAP,

It sounds like your system is running fine.  The performance delta of your CPU will mainly come down to the number of cores, as the new one has 8 versus your original 6.  However, the more cores, the more heat, and the more heat, the less headroom for hitting the turbo frequencies.  So the biggest improvement you would see from that is in multi-threaded applications, which is NOT games.

Your GPU *should* be significantly faster than the 1070 Ti; we are talking close to double the performance. The problem is, at low resolutions (1080p or lower, and with a 3090, 1440p might also count as low), you become CPU bound, meaning that your system cannot feed the GPU fast enough for it to stretch its legs.  These higher-end GPUs are really meant for running high resolutions, such as 4K.  You didn't mention the resolution you are running, but with that GPU you should be running at 1440p minimum, and more likely 2160p (4K) or higher.  If your connected monitor doesn't support these higher resolutions, some games can be set up to render at a higher resolution internally and then down-sample to your native res, which lets you select higher resolutions in the settings than your monitor supports.  Ideally, you should just get a new monitor to match the capabilities of your GPU.

Don't rely on synthetic benchmarks to gauge your performance, as those can vary greatly (as you have noticed).  I use 3DMark, and have for years.  I don't use it to compare against all of the professional benchmarkers out there, but against my own systems.  I run a set of benches when I get a new system and consider those my "baseline."  If I suspect a problem, or feel things are running too slow, I run another set and compare.  It's a good way to spot problems quickly.

TLDR: As previously mentioned, it sounds like your system is running fine.  While this is a nice system, with the exception of your GPU it is realistically not a huge performance upgrade from your old one.  You will see noticeable improvements at high resolutions (2K+), but minimal gains at low resolutions.

Good luck!

Rich S.


  • 0

Thank you for the reply, Rich.  I guess I should have provided more information in my original post.  I'm using the same monitor as my old rig, which is an LG 1440p.  My in-game settings are the same across both BF2042 and Fortnite, where I run everything at 1440.  MSFS is similar, but I've bumped a few settings up from Low to Medium. Perhaps Fortnite and BF are more CPU intensive whereas FS2020 is more GPU bound?  Not really sure how to explain this one.

Doing a little more testing, I see HWiNFO shows that all cores are reporting "core power limit exceeded" = YES during the CPU stress test.  Could that be what's causing the clock to drop down to 4.6 GHz?

Thanks again for the response.


Planetary
  • 0

Hi BOTTLECAP,

It sounds like you are hitting the power limits, which cap the max clock speed.  This is not surprising; as I mentioned, with more cores, more heat is produced and more power is needed.  That is where overclocking would come into play, to raise those limits.  Since you have the K model, you can probably use some of the built-in OC settings in Command Center to see what happens.  However, unless you are striving to break benchmark records, this will not help much with what you are seeing.  Do you happen to know what your old system capped out at, clock-speed-wise?  My older Alienware R8 with a Core i7 8700K (same 6 cores as your old one) capped out at around 4.2 GHz most of the time (due to thermal and voltage limitations), while my current R11 with a Core i7 10700K (8 cores like your current one) caps at about where you are seeing (4.6 GHz).  That is an improvement, but not a huge one.  These days we are mostly bound by I/O and GPU workloads rather than CPU workloads, which is why overclocking your CPU won't make a huge difference.

What I said about the GPU still stands.  At only 1440p, your 3090 will not be challenged at all.  Bump the settings in all of your games to High or Max, and you should see very little (if any) difference in your current framerates.  Your GPU is being "starved" for data at 1440p, which is why your framerates didn't change much.  The numbers you reported are still good. The RTX 3090 is really targeting 4K (2160p) and higher resolutions, and at those resolutions you will see huge gains over your older card.  If your 3DMark scores seem to match your hardware, that is good confirmation that your system is working as expected.  Check it against mine (I only have a 10th Gen CPU and a 3080); yours should be higher: https://www.3dmark.com/spy/22097111

Good luck!

Rich S.


  • 0

This is great info, thank you Rich. 

I believe my old system was maxing out at ~3.6 GHz, but it was over 3 years ago that I really paid any attention, so please don't quote me on that.  After that "new car" feel wears off, and performance seems to stay reasonably stable, you just go with what you've got.  I wasn't hitting any thermal or power limits during my initial testing, though; I do remember that.  I was also running it in a full-size tower with lots of cooling, so that probably helped.

Your bench scores help tremendously.  Here is my Time Spy score.  My CPU bench is significantly lower than yours at 7717.

https://www.3dmark.com/3dm/71586494?

I feel like performance is not where it should be at stock speeds and voltages.  I was thinking I should try to work out this issue before looking at OCs.

I think this system could use some additional cooling, so I'm trying to figure out where I would get the best bang for my buck with that.  If it's power throttling, then perhaps the VRM needs more air?  This R12 does have the VRM heatsink installed, so maybe another front fan (I didn't opt for the mechanical drive) and a push/pull setup on the radiator would do the trick?  I would probably re-paste the water block at the same time for good measure.

I'm not opposed to a full case swap either, but I'd like to flip this system in a couple of years if GPU prices return to MSRP.  So making this case work better (if possible) is my first choice.

I've also considered replacing the 16 GB stick with a 16 GB RAM kit to take advantage of dual channel.  Do you think this would net me any perceivable performance increase?

Thanks again, you have been very helpful!

Planetary
  • 0
B0TTLECAP said:

I've also considered replacing the 16 GB stick with a 16 GB RAM kit to take advantage of dual channel.  Do you think this would net me any perceivable performance increase?

Thanks again, you have been very helpful!

Hi BOTTLECAP,

Yes!  By only having single-channel memory, you are starving both your CPU and GPU for data (what I called I/O issues earlier).  Single-channel memory will certainly gimp you.  I am appalled that Alienware would sell such a high-end system with single-channel memory these days.  You could either upgrade to 32 GB by adding a second stick, or you might be able to get Alienware to swap out your one 16 GB DIMM for two 8 GB DIMMs for little to no cost to you.  Won't hurt to ask!  Since your CPU is already water-cooled, I would try the dual-channel upgrade before you consider any modifications like extra cooling or a new case.  That might be all you need :-).  Good luck!
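For a rough sense of why this matters, here is a back-of-envelope sketch of theoretical peak DDR4-3200 bandwidth in single- versus dual-channel.  These are theoretical peaks only; real-world throughput is lower, but the single- vs dual-channel ratio still holds.

BYTES_PER_TRANSFER = 8   # each DDR4 channel is 64 bits wide

def peak_bandwidth_gbs(transfer_rate_mts, channels):
    # MT/s * bytes per transfer * channels, expressed in GB/s
    return transfer_rate_mts * 1e6 * BYTES_PER_TRANSFER * channels / 1e9

for channels in (1, 2):
    print(f"DDR4-3200 x{channels} channel(s): "
          f"{peak_bandwidth_gbs(3200, channels):.1f} GB/s")
# -> 25.6 GB/s single-channel vs 51.2 GB/s dual-channel (theoretical peak)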

Rich S.


  • 0

This is so helpful, thank you Rich.  Yes, shipping with a single channel is a bit strange.  Perhaps Dell is experiencing supply shortages and just builds with what they have on the shelf?

As I consider which way to go with the RAM upgrade, I see that the current 16 GB stick is only running at 2926 MT/s (a 1463 MHz clock, doubled for DDR) under XMP1.  However, the stick itself is rated at DDR4-3200.  The latency seems a bit mediocre as well, with timings of 19-19-19-47.

As I mentioned in my OP, this is the stock config from Dell. Should I try the XMP2 profile to see if I can get better speed and latency?

If I can't, would I be better off going for a new DDR4-3600 kit with better timings rather than another one of these stock DIMMs?

Thank you again!

Planetary
  • 0

Hi BOTTLECAP,

Lower-latency, tighter-timing DDR4 memory is less important than going dual-channel.  I wouldn't bother worrying about XMP2 while you are on a single channel; what you are seeing here is a pure bandwidth limitation.  The CPU and GPU are not able to stretch their legs because memory isn't getting data to them fast enough.  Your best bet is to buy a high-quality dual-channel kit.  Sometimes these Alienware motherboards are finicky when it comes to compatible memory, so make sure you get a kit that is compatible with your system, and remember that anything faster than DDR4-3200 will require you to "overclock" to use it.  You may have to use XMP1 to get DDR4-3200 anyway, but that is what the memory controller on the 11900K is rated at, so technically that is not overclocking.  I suspect you will see improvements just by going to dual-channel.  Good luck!
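To put some rough numbers on the latency side, here is a quick sketch of first-word CAS latency in nanoseconds.  The example kits are illustrative only, not specific recommendations; the point is that the latency spread is a few nanoseconds, while going dual-channel doubles the available bandwidth.

def cas_latency_ns(cl, transfer_rate_mts):
    # first-word latency: CL cycles at the memory clock (half the transfer rate)
    return cl * 2000 / transfer_rate_mts

examples = [
    ("DDR4-2926 CL19 (current stick at XMP1)", 19, 2926),
    ("DDR4-3200 CL16 (example kit)", 16, 3200),
    ("DDR4-3600 CL18 (example kit)", 18, 3600),
]
for label, cl, rate in examples:
    print(f"{label}: {cas_latency_ns(cl, rate):.1f} ns")
# -> roughly 13.0 ns, 10.0 ns and 10.0 ns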

Rich S.


  • 0

Rich, thank you for the help and willingness to share your knowledge and experience.  Step 1 will definitely be a new RAM kit.

Regards


  • 0

So now I know why you were "appalled" at the single channel config.  I am appalled too! 

I borrowed a 16 GB Corsair Vengeance kit from a friend just to try it out.  BF2042 frames shot up from 60-70 FPS to 115-120 FPS.  Fortnite shot up to the low 200s FPS and MSFS up to 45-50 FPS.  I feel confident bumping the in-game settings up now for a better experience.  Thanks again for the recommendation.

Rich, I have another unrelated question for you, though.  My CPU idles at 5 GHz.  Is that normal?  I thought on the 11900KF the idle speed was 3.5 GHz and it would boost up to 5 GHz for short periods when needed?  Am I misunderstanding how base and boost speeds are supposed to work?

Thank you again!

Planetary
  • 0
B0TTLECAP said:

snip...

Rich, I have another unrelated question for you, though.  My CPU idles at 5 GHz.  Is that normal?  I thought on the 11900KF the idle speed was 3.5 GHz and it would boost up to 5 GHz for short periods when needed?  Am I misunderstanding how base and boost speeds are supposed to work?

Thank you again!

Hi BOTTLECAP,

I'm glad the RAM did the trick.  It is criminal to outfit any modern system with single-channel RAM these days; as you observed, it can really bottleneck performance.  With regard to your question: Intel advertises a "Base Speed" and a "Turbo Speed" for processors these days.  There is another set of speeds that are not advertised, which relate the maximum allowable turbo to the number of active cores.  Typically, the more cores that are active, the lower the allowed turbo.  In a desktop (not a laptop), the CPU will run as fast as it can based on the current temperature and voltage environment.  If the system isn't drawing massive amounts of power and is running relatively cool, it will "turbo up" and run at its maximum all-core Turbo Boost speed.  So (again, only in a desktop environment) the system will seem to idle at a much higher speed than the base clock.  This is what you should see; if you only see your base clock at idle, it means that your cooling or voltage conditions are sub-optimal, or your system is in an energy-conservation mode (or set up like a laptop).  A laptop will typically run at much lower speeds at idle (I have seen them throttle all the way down to 200 MHz!) and then exhibit the behavior you describe, turboing up to the maximum speed allowed by the current temperature and voltage environment.  Hope this helps!
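If you want to see the idle-versus-load clock behavior for yourself, here is a rough sketch.  It assumes the third-party psutil package is installed, and note that on Windows this API may report a nominal clock rather than the live turbo value, so a sensor tool like HWiNFO remains the more reliable view.

import multiprocessing
import time

import psutil   # third-party: pip install psutil

def busy(seconds):
    # spin in a tight loop to generate an all-core load
    end = time.time() + seconds
    while time.time() < end:
        pass

def sample(label, seconds=5):
    # average the reported CPU frequency over a few one-second samples
    readings = []
    for _ in range(seconds):
        readings.append(psutil.cpu_freq().current)
        time.sleep(1)
    print(f"{label}: {sum(readings) / len(readings):.0f} MHz (as reported by the OS)")

if __name__ == "__main__":
    sample("Idle")
    workers = [multiprocessing.Process(target=busy, args=(10,))
               for _ in range(psutil.cpu_count(logical=True) or 1)]
    for w in workers:
        w.start()
    sample("All-core load")
    for w in workers:
        w.join()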

Rich