
joe0185

> hoping the devs explain how the game detects the P cores.

There are APIs that expose that information:

https://www.intel.com/content/www/us/en/developer/articles/guide/12th-gen-intel-core-processor-gamedev-guide.html#inpage-nav-1-5-2

https://github.com/GameTechDev/HybridDetect


wtallis

Getting the information on core types is easy. The hard part is doing something productive with that information, in a way that doesn't come back to bite you in a generation or two when people try to run the game on chips that don't behave the same as Alder Lake.


joe0185

Yeah, that's true; you wouldn't want to make assumptions about the system by default. The Prioritize P-Cores feature is not enabled by default in Cyberpunk.


WHY_DO_I_SHOUT

Even better, game developers can call [GetLogicalProcessorInformationEx](https://learn.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getlogicalprocessorinformationex). [\_PROCESSOR\_RELATIONSHIP.EfficiencyClass](https://learn.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-processor_relationship) separates P-cores and E-cores.
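A minimal sketch of what that looks like, not production code (assumes Windows 10 21H2+ headers, ignores multiple processor groups, no error handling):

```cpp
// Ask Windows for the core layout and flag which physical cores are P-cores
// vs E-cores on a hybrid CPU. On a non-hybrid CPU every core reports the
// same EfficiencyClass, so nothing gets singled out.
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<BYTE> buf(len);
    GetLogicalProcessorInformationEx(
        RelationProcessorCore,
        reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data()), &len);

    // The P-cores report the highest EfficiencyClass value present.
    BYTE maxClass = 0;
    for (DWORD off = 0; off < len;) {
        auto* core = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data() + off);
        if (core->Processor.EfficiencyClass > maxClass)
            maxClass = core->Processor.EfficiencyClass;
        off += core->Size;
    }

    for (DWORD off = 0; off < len;) {
        auto* core = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data() + off);
        printf("core mask 0x%llx  SMT=%d  %s\n",
               (unsigned long long)core->Processor.GroupMask[0].Mask,
               (core->Processor.Flags & LTP_PC_SMT) ? 1 : 0,
               core->Processor.EfficiencyClass == maxClass ? "P-core" : "E-core");
        off += core->Size;
    }
    return 0;
}
```

Comparing against the highest class found, rather than hard-coding a value, is what keeps this from breaking on chips that report more than two classes.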


[deleted]

Now add DirectStorage and Cyberpunk is basically on another technological level compared to other games


masterfultechgeek

> Now add DirectStorage and Cyberpunk is basically on another technological level compared to other games

So not the same thing, but if you want to brute force responsiveness, slap on PrimoCache with RAM as an L1 cache and maybe some Optane as an L2 cache, and game assets tend to pop in VERY VERY quickly. Very low latency and generally better bandwidth, since the reads are distributed across 3 sources, with reads disproportionately coming from the low-latency ones. Having TONS of RAM also helps.

DS is the more elegant long term solution though.
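To spell out the read path being described, here's a toy sketch (nothing to do with PrimoCache's actual implementation; latency numbers in the comments are ballpark):

```cpp
// Two-tier read-through cache in front of a slow backing disk:
// check the small RAM tier first, then the larger Optane tier, and only
// then touch the HDD, promoting blocks on the way back up.
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

struct Block { std::vector<uint8_t> data; };

class TieredReadCache {
public:
    Block read(uint64_t lba) {
        if (auto hit = lookup(ram_, lba)) return *hit;   // RAM tier, ~100 ns
        if (auto hit = lookup(optane_, lba)) {            // Optane tier, ~10 us
            ram_[lba] = *hit;                             // promote hot block to RAM
            return *hit;
        }
        Block b = readFromHdd(lba);                       // backing HDD, ~10 ms seek
        optane_[lba] = b;                                 // fill both tiers on miss
        ram_[lba] = b;                                    // (eviction policy omitted)
        return b;
    }

private:
    using Tier = std::unordered_map<uint64_t, Block>;

    static std::optional<Block> lookup(const Tier& t, uint64_t lba) {
        auto it = t.find(lba);
        if (it == t.end()) return std::nullopt;
        return it->second;
    }

    Block readFromHdd(uint64_t) { return Block{}; }       // stand-in for the real disk read

    Tier ram_;      // small, fastest tier
    Tier optane_;   // larger, still very low latency tier
};
```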


letsgoiowa

Primocache is hugely underrated, especially for those mid-gen games that want a massive amount of storage space, but also don't always require an SSD. I use a similar solution (remember FuzeDrive?) for my 8 TB HDD and a 120 GB SSD to basically intelligently get SSD speeds but just yeet all my games onto an 8 TB drive. Keep in mind this mixed HDD+SSD setup made a ton more sense in the ~2015 era.


masterfultechgeek

> Keep in mind this mixed HDD+SSD setup made a ton more sense in the ~2015 era.

Yep. My main NAS is 16TB of HDDs in RAID 5, 32GB RAM (L1) and 118GB Optane (L2), and it breezes through things. Caching is amazing and solves many problems.

At the same time... I bought a bunch of 8TB SSDs, so yeah, the next NAS will just be SSDs + RAM and that should be fine. No need to "desperately" try to avoid hits to the disk to avoid poor latency when the disk performance is "not bad."

MORE RAM + SSD > moderate RAM + tons of Optane + HDD

I'll still probably use Optane for metadata and a write SLOG.


Eastrider1006

PrimoCache sadly has enormous performance degradation issues over the medium term when caching to an SSD. I think its cache partition can't run TRIM, but I'm not sure of the culprit.


masterfultechgeek

Optane doesn't do or need TRIM. It's got something like 1 million times the endurance of a NAND SSD. It's still pretty much the best caching technology on the market.

The main problem with the technology is its cost structure. But the fire-sale prices for it following discontinuation are pretty nice.

Also, for consumer workloads it's mostly reads. If you're doing ZFS or PrimoCache it can actually be hard to get the cache filled.


capn_hector

Optane doesn’t have performance degradation when filled, nor does it need trim. That’s why it works as a memory dimm - it’s fundamentally a big flat non-volatile memory, with none of the foibles of flash like paging or blocks. Write it and that’s it.


letsgoiowa

Can you elaborate on that further? Got any percentage figures or real world things you've seen?


Eastrider1006

Just from using it normally myself. I don't remember numbers, but it was very noticeable in CrystalDiskMark or when doing small file transfers. I guess you could run it for a while, benchmark it on the first day and again three months in, after it's been used as a torrent cache or something like that.


letsgoiowa

Would it resolve if you basically rebuilt the structure every 6 months or so?


Eastrider1006

Yeah, but that's, imo, extremely annoying, and not a solution if you don't know what you're doing. I no longer need it as I moved to all-ssd. But I'm wary of recommending Primo to non tech savvy people for this reason.


virtualmnemonic

Yeah, SSD I/O access time bottlenecks aren't going away. There hasn't been much improvement even since SATA days. RAM, on the other hand, has nearly instant access times, and it's cheap enough to expand if you really want extra performance. Unfortunately, high end consumer chips like the 14900k and 7950x have trouble utilizing more than 2 sticks of RAM, so you hit another wall fairly quickly.


liaminwales

Direct Storage looks like something that may never matter. When do you have loading problems in Cyberpunk? In most games today it's just not a problem.


HavocInferno

One of the biggest visual issues in Cyberpunk imo is the pop-in (well, fade-in I guess). Drive fast down any street and you can see the low distance at which lamps, poles, clutter, etc fade in, even at high settings. Now, fair enough, that is in no small part down to render budget as well, but I can't help but wonder if it's also the legacy of a loading system targeting HDDs.


liaminwales

Is pop-in a disk loading problem? It may be an aggressive way to optimise or use less VRAM etc.

When I play Cyberpunk I have all my stats on the second display, and my NVMe drive is not hit hard during gameplay. I keep my OS and games on different NVMe drives so it's fairly easy to track disk use. I don't fully understand it - it may not be raw read speed but response time - but it still seems fairly low. It looks like it's mostly loading that really hits the drive.

At a guess, the LOD stuff is to keep the game under the 8GB VRAM that most GPUs have; devs do need to target the most used GPUs.


HavocInferno

> Is pop-in a disk loading problem?

Oh, I'm playing off an NVMe SSD with a 3900X and 6900XT. But even then the fade-in is so aggressive that it conflicts with immersion when driving around. Those things won't hit the SSD hard, it's just small reads after all. Certainly mostly LOD related. But at max settings, I'd still expect farther distance.

And so I can't help but think it's related to the game being designed for an HDD baseline (last-gen consoles), where that entire system for streaming/caching assets has fairly tight restrictions on distance and LOD.


liaminwales

I think LOD is more about VRAM than drive speed; the more stuff on screen, the more has to be in VRAM. From watching disk use during the game, I just don't see a lot of activity.

I am not a dev, but it looks like VRAM: when the game came out (and still today) most GPUs only have 8GB of VRAM. The RTX 20XX/30XX lines had a lot of 8GB VRAM GPUs, and those will have been the target for the game.


Substance___P

I agree. I think one optimization popular in modern games is shortening the distance at which certain objects render or switch from long-distance to close-up models. Pop-in is a plague in a lot of games. Subnautica to this day is an eyesore because of close-to-camera pop-in.


Gawdsauce

Direct Storage will matter. Development of Cyberpunk began long before DirectStorage was made available for general use, and you'll likely not see many games implementing this feature for the next couple of years as developers learn the new APIs and integrate them into their development workflow. These things aren't a simple toggle switch.
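For a sense of what "learning the new APIs" involves, here's a rough sketch of a single DirectStorage read into system memory, modeled on Microsoft's public samples. Treat the struct fields as approximate (they've shifted slightly between SDK releases), and note that real integrations also deal with completion fences, batching, and GPU decompression, all omitted here:

```cpp
// Needs dstorage.h from the DirectStorage SDK and linking against dstorage.lib.
// Error handling and the wait on completion (a fence/event enqueued on the
// queue) are omitted for brevity.
#include <dstorage.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

void LoadAssetBlob(const wchar_t* path, std::vector<uint8_t>& out,
                   uint64_t offset, uint32_t size) {
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = nullptr;  // no D3D12 device needed for system-memory destinations

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    out.resize(size);

    DSTORAGE_REQUEST request{};
    request.Options.SourceType        = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType   = DSTORAGE_REQUEST_DESTINATION_MEMORY;
    request.Source.File.Source        = file.Get();
    request.Source.File.Offset        = offset;
    request.Source.File.Size          = size;
    request.Destination.Memory.Buffer = out.data();
    request.Destination.Memory.Size   = size;
    request.UncompressedSize          = size;  // no GPU decompression in this sketch

    queue->EnqueueRequest(&request);
    queue->Submit();  // then wait on a fence or event signal before using `out`
}
```

The work isn't the API calls themselves; it's restructuring an engine's asset pipeline so reads are batched into many small queue requests instead of the handful of big synchronous loads an HDD-era loader does.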


WhoTheHeckKnowsWhy

> Now add DirectStorage and Cyberpunk is basically on another technological level compared to other games

Which is ironic, considering [CDPR is moving to Unreal](https://en.wikipedia.org/wiki/CD_Projekt#REDengine) as they've deemed their own in-house RED engine too long in the tooth and at its limits. Meanwhile, Bethesda will likely still be plugging along with their ancient hairball in a decade.


RawbGun

> Meanwhile, Bethesda will likely still be plugging along with their ancient hairball in a decade.

Who doesn't love loading times when getting inside any building or ship in 2023!


NegaDeath

I know I... **Loading** do!


onesliv

I think the problem was attrition for them - they had to train new people on an engine they had never seen before, leading to problems and long ramp-ups. You can hire people with Unreal experience, but almost nobody outside of CDPR would have REDengine experience.


Rocket_Puppy

Very true. Multilingual support was a nightmare for them as well. They went from a smallish studio to a AAA development firm very quickly, with lots of core documentation to translate from Polish to expand their hiring net. Unreal Engine has a lot of handy stuff built into it if you want to hire across language barriers.


capn_hector

That's always been implicit right from the start - go back to 2016 and you will find plenty of people saying that it's a good thing, because game engines are incredibly esoteric and finicky and require very specialized skill sets, so obviously over time that is going to lead to consolidation anyway, etc. So why isn't it better for studio X to spend time doing the thing that delivers value for them as a specialized business (making game content) and not the engine? Do you want badly optimized DX11 engines forever?

And then the time came to pay the check and companies realized this meant forking over 5% of gross revenue off the top… and it turns out if you're EA then 5% of however many billion in gross revenue ends up being quite a lot of specialized engine developers…

It's weird/funny that it's become an issue when it was such a selling point at the start.


No_Ebb_9415

> Meanwhile, Bethesda will likely still be plugging along with their ancient hairball in a decade.

The worst part is that, financially speaking, it's likely the right thing to do. I doubt it's impacting their sales, and they aren't paying any fees. FromSoft with Elden Ring is the same.


Rossco1337

Didn't this feature have a significant performance impact in Ratchet and Clank? I remember seeing a post about someone deleting the DLL and getting a huge performance boost while the game looked the same. It feels like people have been hyping up DirectStorage for gaming since 2018-2019, and that's the only game I can name which implemented it.


Ibiki

It already is far beyond other games; the path tracing mode is insane in how good it looks (especially if you think about all the technological advancements of the last few years needed to make it possible).


roionsteroids

Cyberpunk has super short loading times. And it already uses some more or less good systems, like 2D models for cars and people far away in the background. Pretty sure the limit is VRAM more than anything else here (and of course the gazillion small bugs in the game).


gordoncheong

I'm not sure if it's working properly. I just tested it, and turning on prioritizing P-cores causes major stutters and framerate drops. I got an average of 56 fps with it off and 52 fps with it on. 13600K with a 2080 Ti on custom settings.


pathologicalOutlier

Exact same experience on a 13900 with 3090ti. Stutterfest.


pathologicalOutlier

Found a workaround: disabling e-cores in bios solves it.


jonydevidson

😂😂

> Pays for 16 e-cores
>
> Turns them off

Just get an X3D CPU.


pathologicalOutlier

Pretty dumb remark. I need the cores for other workflows, just not when I play this game. Not sure what the issue is. It's like saying you watch a lot of Netflix on your TV, so why not buy a 24Hz TV since movies are shot like that.


RunTillYouPuke

The issue is you have to go to the bios like a caveman.


pathologicalOutlier

It's just a workaround. If I don't feel like going into the BIOS, I'll deal with the stutters.


jonydevidson

It's a joke. Going into bios to do it is not fun.


Nauzhror_

Same thing on my laptop.

i9-13980HX & RTX 4090 Mobile. I usually game at a higher resolution, but changed to 1080p Ultra (no RT) just to try and make it CPU-bound so I'd see a potential difference.

I saw a difference alright. Before changing it I got 142 fps; after changing it I got 90 fps and a ton of stuttering.


KlutzyDescription839

Tried the "Prioritize P-Cores" with my 14900KF but it creates heavy stutter in game (and also audio stutter during the loading screen).


Brave-Yesterday-387

Same. I'm getting intermittent stuttering in frame rate and audio with prioritize P-cores enabled, on a 13600KF.


pathologicalOutlier

Correct. Same here.


jerryfrz

Oh well, guess it's par for the course for this game lol


[deleted]

Too bad they're abandoning the engine in favour of Unreal; it's such a full-featured program.


Hendeith

I too think it's a shame. It's such a knee-jerk reaction to drop the whole engine (and the 2nd Cyberpunk DLC) due to release issues that were caused primarily by mismanagement.


jtmackay

Or you know... They could know way more about their engine than you and they see too many problems. I love how this sub thinks they always know more than devs.


cortlong

This engine is definitely a mess under the hood, no doubt about it. I've made minor TweakDB edits that have made cars explode for no reason. It's pretty wonky.


Hendeith

Or, hear me out, you can stop speculating and just look up what ex-CD Projekt Red employees said - it matches what I heard personally from people who worked there. Just because you are not informed on this topic doesn't mean others aren't.

They always spent significant resources upgrading the engine between games (duh, that's why Cyberpunk looks great). However, the same mismanagement that caused Cyberpunk to be released in a poor state, and the same mismanagement that caused insane crunch during The Witcher 3, also caused issues with the engine. Because when you pull deadlines out of your ass and push people to do everything faster and faster, quality will suffer.

Dropping their own engine to use UE won't solve issues that were caused by poor management.


callanrocks

Mismanagement is doubling down and taking the path of least resistance instead of investing in employee retention and institutional knowledge. Can't wait for Cyberpunk 2 to be a complete fucking mess on launch for the exact same reasons.


anival024

> release issues that were caused primarily by mismanagement

The game had a very long development cycle, a massive budget, and 3 or 4 delays after the release date was initially announced. Work began in early 2012, and it was announced in May of 2012. It launched **8.5 years** later, and overall has had a budget of between $400 and $450 million for the game and marketing, post-launch support, and expansion. It's the second most expensive video game ever made.

Yet even today the game is nowhere near what was initially promised. It's just less glaringly buggy. The actual game is mediocre and shallow, at best. No amount of additional delays was going to fix that. It's basically just a benchmark title for ray tracing now.

The only thing you could realistically fault management for would be not axing the PS4 and Xbox One versions. At some point, you have to admit the people actually making the game just weren't capable of getting it done. The budget wasn't a real constraint. Nor was the time frame. You can blame marketing for the unrealistic expectations, but at the end of the day they weren't the ones making the BS previews/trailers.

Still, the game has sold over 25 million copies and raked in a ton of cash. There's no way you can honestly call that "mismanagement".


wtallis

I wonder if this changes what kind of work gets scheduled on E cores, or if this just makes the game avoid using E cores at all and maybe reduces the number of worker threads the game spawns to match the number of P cores.


Soulstar909

How is this not a standard already? I let my Win10 upgrade to 11 just for better hybrid CPU support for games and now I'm finding out it's still something that needs to be supported by the game itself? Wtf...


Sculpdozer

12700KF here. Enabling any setting except "Auto" causes absolutely horrible stuttering and freezes. Not subtle either - the game freezes for 4 to 8 seconds sometimes. It's almost funny how bad it is.


Nicholas-Steel

Tried quitting to desktop after making the change?


woodsgebriella

Interesting addition. I wonder if prioritizing performance cores will significantly improve frame rates in CPU-bound situations.


Winter_Reception_654

I got major stuttering when I turned it on with a 13700K. Was excited when I saw it, lol, but that excitement turned to disappointment when all the stuttering started AND major dips in fps. Even after switching back to Auto my fps is still lower. :(

Considering all of us with P-cores are having this issue, what exactly was the point of this?! You'd think they tested it.


MedicalAd7594

Dunno if it will fix it for you, but turning off the JB Third Person mod fixed the microstuttering for me.


DorkasaurusRexx

As someone who helped work to bring this patch to the game, due to my ongoing issues with crashes in Cyberpunk 2077 on an i9-13900K, allow me to explain: many, many modern games, including Returnal, Remnant from the Ashes 2, and a slew of others, do not know how to schedule and prioritize the workload effectively across P- and E-cores. You will get either random crashes or error notifications about a "lack of video memory," even while running a 4090.

These crashes are very unfortunate, and the fix remains constant across all these games, including Cyberpunk 2077: a manual downclock of about 200-300 MHz across all P-cores, done through Intel Extreme Tuning Utility (Intel XTU).

Now, with these fixes in place, users can play the game with the full processing power of their 12th, 13th, and 14th gen processors, without thermal throttling, poor performance, or crashes, and without having to run an entire separate program just to downclock their CPUs.

If it introduced stuttering, that is unfortunate, but Cyberpunk deserves credit for being the only game so far to even acknowledge this widespread problem exists, let alone implement an actual solution. I am sure further tuning and tweaking will be needed to prevent the microstuttering other users are reporting since patch 2.11, but simply turning off the Hybrid CPU Utilization option, or leaving it on "Auto", should restore behavior and performance to how it was in the prior patch.


SkillYourself

> These crashes are very unfortunate, and the fix remains constant across all these games, including Cyberpunk 2077: a manual downclock of about 200-300 MHz across all P-cores, done through Intel Extreme Tuning Utility (Intel XTU).

I'm skeptical that your random crashes are caused by architecture problems, because in that case *everyone* should be able to reproduce them. Your fix sounds like the P-cores simply aren't getting enough voltage.

The way Intel power delivery works is not intuitive: the motherboard configures a voltage request slope in the CPU and the voltage droop response in the VRM controller. Most Z-series boards configure something like 0.5-0.7 CPU / 1.1 VRM out of the box, meaning the CPU voltage undershoots the stock 1.1/1.1 voltage by 4-6mV for every 10A. In other words, reducing CPU load by not scheduling on the E-cores will increase the delivered Vcore without touching any voltage setting.

Are you able to run y-cruncher HNT & VST for 10 minutes each at 253W package power?
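To make the arithmetic explicit (just working through the numbers above, not an official Intel figure): the net undershoot versus a matched 1.1/1.1 loadline is the slope difference times the load current, so per 10 A it's roughly 10 A x (1.1 mOhm - 0.5 to 0.7 mOhm) = 4 to 6 mV, which is where the 4-6mV-per-10A figure comes from. Scale that to a ~200 A all-core load and the chip is seeing on the order of 80-120 mV less than it would with matched loadlines, which is plenty to tip a marginal bin over the edge.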


DorkasaurusRexx

I mean, I am able to reproduce them, and I can show posts from hundreds and hundreds of users online who can reproduce them. There is a credible technical explanation as well: http://www.radgametools.com/oodleintel.htm


SkillYourself

This is the Oodle technical explanation:

> As far as we can tell, there is not any software bug in Oodle or Unreal that is causing this. *Due to what seem to be overly optimistic BIOS settings*, some small percentage of processors go out of their functional range of clock rate and power draw under high load, and execute instructions incorrectly.

Which is the same as my explanation: your CPU is unstable due to the motherboard vendor undervolting by default.

Going back to my suggestion - run y-cruncher HNT & VST at 253W. If it's unstable at those motherboard stock settings, you can either fix it by applying a small positive voltage offset or file an RMA with Intel to roll the silicon lottery again.


Nicholas-Steel

I experienced something similar with my old Ryzen 3700X: it'd suffer random cascading CPU errors (recorded in the Windows Event Logs) that'd quickly result in either a system deadlock or a hard reboot of the PC. I had to change the following settings in the BIOS from Auto to what I think were their stock values to stabilize the system (it has been stable for 3 years since changing them):

* VSOC (SVI2): 1.1000v
* CLDO: 0.9000v
* VDDG: 0.9500v

I worked off the currently operating voltages the BIOS was displaying, so some of the values were set ever so marginally (0.0025 or 0.0050v) *higher* than those listed above, to get the operating values at or above them.

With my Ryzen 5800X3D, which I bought late last year, I'm so far experiencing no instability with these 3 settings left on Auto (same motherboard).


SkillYourself

This has become an increasing problem over the last few generations of CPUs, since both AMD and Intel are shipping top-bin chips with smaller voltage and frequency margins, and then motherboard vendors are undervolting them by default to gain an edge on their competitors. Silicon lottery loser chips that are also running hot get pushed over the edge and crash. A CPU RMA would likely fix the issue simply by being a better bin.


Nicholas-Steel

There have been several BIOS updates during those 3 years, and I never tested whether any of them improved the behaviour of the Auto setting for those voltages. It's an Asus Crosshair VIII Hero WiFi motherboard.


take17easy

Something is broken with the way the feature is implemented. It makes the game run horribly on my i7-12700K once enabled. A hotfix is probably coming in a day or two.


HisDivineOrder

I just use Process Lasso. Works fine.
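For the curious, what a Process Lasso affinity rule boils down to can be sketched with plain Win32 calls: build a mask from the EfficiencyClass info mentioned earlier in the thread and apply it to the game process. A rough sketch only (assumes a single processor group, i.e. 64 or fewer logical processors; the PID is a placeholder; no error handling):

```cpp
// Find the cores with the highest EfficiencyClass (the P-cores on current
// hybrid parts) and restrict the target process to them.
#include <windows.h>
#include <vector>

static DWORD_PTR BuildPCoreMask() {
    DWORD len = 0;
    GetLogicalProcessorInformationEx(RelationProcessorCore, nullptr, &len);
    std::vector<BYTE> buf(len);
    GetLogicalProcessorInformationEx(
        RelationProcessorCore,
        reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data()), &len);

    // First pass: find the highest efficiency class present.
    BYTE maxClass = 0;
    for (DWORD off = 0; off < len;) {
        auto* core = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data() + off);
        if (core->Processor.EfficiencyClass > maxClass)
            maxClass = core->Processor.EfficiencyClass;
        off += core->Size;
    }

    // Second pass: OR together the affinity bits of the cores in that class.
    DWORD_PTR mask = 0;
    for (DWORD off = 0; off < len;) {
        auto* core = reinterpret_cast<SYSTEM_LOGICAL_PROCESSOR_INFORMATION_EX*>(buf.data() + off);
        if (core->Processor.EfficiencyClass == maxClass)
            mask |= core->Processor.GroupMask[0].Mask;
        off += core->Size;
    }
    return mask;
}

int main() {
    DWORD pid = 12345;  // placeholder: the game's process ID
    HANDLE proc = OpenProcess(PROCESS_SET_INFORMATION | PROCESS_QUERY_INFORMATION, FALSE, pid);
    if (proc) {
        SetProcessAffinityMask(proc, BuildPCoreMask());  // process now runs on P-cores only
        CloseHandle(proc);
    }
    return 0;
}
```

The tool also re-applies the rule automatically and can keep the mask per-game, which is the part that's annoying to script yourself.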


Sculpdozer

Yep, microstuttering is now a thing. Everything was fine before the update; now the game feels a tiny bit worse than before. It's not horribly bad, but still noticeable. I hope the devs fix it soon. I suspect the new P-core setting is partially responsible, but I'm not sure.


PetrichorAndNapalm

I don't get why this would even be necessary, really.


jerryfrz

Same reason why Intel created the APO: the thread director and Windows scheduler aren't good enough.


Noreng

APO limits the CPU threads the game detects, which prevents bottlenecks from excessive multithreading.


virtualmnemonic

This. By default, apps/games will execute threads on e-cores if all p-cores are utilized. But sometimes it's actually more performant to just wait on a p-core than switch to e-cores, especially because of latency and how cache is shared between cores.


Noreng

It's not necessarily that it's faster to wait on a P-core, but rather that splitting tasks up into smaller pieces means you need more time to assemble the pieces back in the correct order.


auradragon1

It isn't just the scheduler. It's the fact that the little cores are added in for niche MT use cases. They aren't efficiency cores. They aren't performance cores. They're middle cores added for area efficiency to win in benchmarks like Cinebench.

In a traditional big.LITTLE setup, it's obvious that the big cores should be doing the gaming. But it's not obvious in Intel's setup. Very few applications can take advantage of more than 8 cores. For some games, it's faster to just turn off all the little cores. I think it's a waste of transistors.


virtualmnemonic

> Very few applications can take advantage of more than 8 cores.

By this logic, anything beyond 8 cores is a waste, including the 16-core 7950X. I do agree that most users (including gamers with a 4090) won't see a benefit from more than 8 cores for quite some time. But that doesn't mean extra cores are a waste of transistors. Is it a waste of an engine to build it to go beyond 70 mph, given that 99% of the time it will be running at or below 70 mph?

...Then again, as a programmer with a 13900K, it's rare I push beyond 50% usage, simply because the 13900K is so powerful that other bottlenecks (I/O access time, mainly) stop it from being fully utilized. But at least we can rejoice that CPUs have made massive progress in both performance and performance per dollar. A modest 13th gen i3/i5 will absolutely destroy regular computing tasks.


devnull123412

Joke's on you, I disable E-cores in the BIOS.


Radiant_Covenant

Any chance they will port these to W3EE?


Nicholas-Steel

A better idea than the previous approach of arbitrarily limiting the number of CPUs to use (which I think is something they did for The Witcher 3).


KappaPrideRider

It's Intel-only and doesn't work; AMD has SMT, which actually works.


HemphillD

Does this company bother to test things before releasing patches? As many others are experiencing, it introduced massive lag and stutter on my i9-13900k.


Kesuri

Some people say you get better FPS with it 'off' but I can't see an 'off' setting. It's either 'on' or 'auto'.


jerryfrz

Auto is off.


Kesuri

Um, no, auto means auto :P i.e. the game enables or disables it automatically based on some kind of criteria. Off would be off.

The point is, it's causing lag spikes for some people, so if being on auto means the game still sometimes enables it, that explains why I'm sometimes getting lag spikes and sometimes not.


rckrz6

The game barely even runs if I change it from Auto, on a 13700K.


kamalamading

Using an i7-12700KF, the game gets choppy when I switch it from "Auto" to "Prioritize P-cores", so I leave it on "Auto".


SizeZealousideal1919

This option introduced a ton of lag spikes and audio hitching. I have an i7-13700K + 4090. I don't use mods. The lag spikes and audio hiccups were reduced after I disabled path tracing. However, they completely went away when I switched the Hybrid CPU Utilization option (in Settings, under Gameplay) to Prioritize P-Cores. That completely removed the visual and audio hitching, and I was able to turn PT back on.

Just in case anyone reads this from this point on: I don't think this feature works very well. It still needs work.