"These price increases have multiple intertwining causes, some direct and some less so: inflation, pandemic-era supply crunches, the unpredictable trade policies of the Trump administration, and a gradual shift among console makers away from selling hardware at a loss or breaking even in the hopes that game sales will subsidize the hardware. And you never want to rule out good old shareholder-prioritizing corporate greed.
But one major factor, both in the price increases and in the reduction in drastic “slim”-style redesigns, is technical: the death of Moore’s Law and a noticeable slowdown in the rate at which processors and graphics chips can improve."
Are they tho? Have you seen graphics card prices?
My 4070 cost $300 and runs everything.
The whole PC cost around $1000, and I have had it since the Xbox One released.
You can get similar performance from a $400 Steam Deck, which is a computer.
You don’t need a top-end card to match console specs; something like a 6650 XT or 6700 XT is probably enough. Your initial PC build will cost about 2x a console if you’re matching specs (maybe 3x if you need a monitor, keyboard, etc.), but you’ll make it up with access to cheaper games and being able to upgrade the PC without replacing it, not to mention the added utility a PC provides.
So yeah, think of PC vs console as an investment into a platform.
If you only want to play 1-2 games, console may be a better option. But if you’re interested in older or indie games, a PC is essential.
A 2060 Super for 300, and then another 200 for a decent processor, puts you ahead of a PS5 at a comparable price. Games are cheaper on PC too, and there’s a broader selection. https://pcpartpicker.com/list/zYGmJn here is a mid-tier build for 850; you could cut the processor down, install Linux for free, and I’m sure you’ve got a computer monitor lying around somewhere… the only thing stopping you is inertia.
You’re going to have to really scrounge for deals in order to get a PSU, storage, memory, motherboard, and a case with your remaining budget of $0.
This is $150 more expensive, and the GPU is half as performant as the reported PS5 Pro equivalent.
Ok so, for starters, your ‘reported equivalent’ source is wrong.
https://www.eurogamer.net/digitalfoundry-2024-playstation-5-pro-weve-removed-it-from-its-box-and-theres-new-information-to-share
The PS5 Pro’s custom AMD APU (a combined CPU + GPU, as is done in laptops, with Zen 2 CPU cores) is 16.7 TFLOPs, not 33.
So the PS5 Pro is actually roughly equivalent to that posted build… by your ‘methodology’, though it’s utterly unclear to me what your actual methodology for doing a performance comparison even is.
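(For reference, those headline TFLOPs numbers are just a theoretical peak you can compute from shader count and clock speed, not a benchmark. A minimal Python sketch, assuming the commonly reported 60 CU / ~2.17 GHz figures for the PS5Pro’s GPU; treat those specs as assumptions:)

```python
# Where headline TFLOPs figures come from: a theoretical FP32 peak, not a measurement.
# The 60 CU / 2.17 GHz inputs below are the commonly reported PS5 Pro GPU specs (assumed).

def peak_fp32_tflops(compute_units: int, clock_ghz: float,
                     shaders_per_cu: int = 64, ops_per_clock: int = 2) -> float:
    """Theoretical peak: shaders x 2 ops per clock (FMA) x clock, in TFLOPs."""
    shaders = compute_units * shaders_per_cu
    return shaders * ops_per_clock * clock_ghz / 1000.0  # GFLOPs -> TFLOPs

print(round(peak_fp32_tflops(60, 2.17), 1))  # ~16.7, the figure cited above
```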
The PS5 Pro uses 2 GB of DDR5 RAM, and 16 GB of GDDR6 RAM.
This is… wildly outside the realm of being directly comparable to a normal desktop PC, which, bare minimum these days, has 16 GB of DDR4/5 RAM. The GDDR6 RAM would be part of the detachable GPU board itself, anywhere from 8 GB all the way up to 32 GB if you get an Nvidia 5090, but the consensus seems to be that 16 GB of GDDR6/7 is probably what you want as a minimum, unless you want to be very reliant on AI upscaling/framegen, and the input lag and whatnot that comes with using that on an underpowered GPU.
Short version: the PS5Pro would be a wildly lopsided, nonsensical architecture to try to replicate one-to-one in a desktop PC… 2 GB of system RAM will run lightweight Linux OSes, but there’s not a chance in hell you could run Windows 10 or 11 on that.
Fuck, even getting 7 to work well with 2 GB of RAM would be quite a challenge… IIRC the official minimum for 64-bit Windows 7 was 2 GB, and that was already cutting it close.
The closest AMD chip to the PS5 Pro that I see, in terms of TFLOP output… is the Radeon 7600 Mobile.
((… This is probably why Cyberpunk 2077 did not (and will never) get a ‘performance patch’ for the PS5Pro: CP77 can only pull off both high (by console standards) framerates at high resolutions and ray tracing/path tracing on Nvidia mobile-class hardware, which the PS5Pro doesn’t use.))
But let’s use the PS5Pro’s ability to run CP77 at 2K/60fps on what PC players recognize as a mix of medium and high settings… as our benchmark for a comparable standard PC build. Let’s be nice and just say it’s the high preset.
(a bunch of web searching and performance comparisons later…)
Well… actually, the problem is that basically nobody makes or sells desktop GPUs that underpowered anymore; you’d have to go to the used market, or find some old unpurchased stock someone has had lying around for years.
The RX 6600 in the partpicker list is fairly close in terms of GPU performance.
Maybe pair it with an AMD 5600X processor, if you… can find one? Or a 4800S, which supposedly were just rejects/run-off from the PS5 and Xbox Series X and S chips, rofl?
Yeah, legitimately, the problem with trying to build a PC in 2025 to the performance specs of a PS5 Pro… is that the bare-minimum models of current- and last-gen standard PC hardware… yeah, they just don’t even make hardware that weak anymore.
EDIT:
Oh, final addendum: if your TV has an HDMI port, kablamo, that’s your monitor; you don’t strictly need a new one.
And there are also many ways to get a wireless or wired console style controller to work in a couch pc setup.
It’s shared memory, so you would need to guarantee access to 16 GB on both ends.
I don’t know how you could arrive at such a conclusion, considering that the base PS5 has been measured to be comparable to the 6700.
So… standard Desktop CPUs can only talk to DDR.
‘CPUs’ can only utilize GDDR when they are actually a part of an APU.
Standard desktop GPUs can only talk to GDDR, which is part of their whole separate board.
GPU and CPU can talk to each other, via the mainboard.
Standard desktop PC architecture does not have a way for the CPU to directly utilize the GDDR RAM on the standalone GPU.
In many laptops and phones, a different architecture is used, which uses LPDDR RAM, and all the LPDDR RAM is used by the APU, the APU being a CPU+GPU combo in a single chip.
Some laptops use DDR RAM, but… in those laptops, the DDR RAM is only used by the CPU, and those laptops have a separate GPU chip, which has its own built-in GDDR RAM… the CPU and GPU cannot and do not share these distinct kinds of RAM.
(Laptop DDR RAM is also usually a different pin count and form factor than desktop PC DDR RAM; you usually can’t swap RAM sticks between them.)
The PS5Pro appears to have yet another unique architecture:
Functionally, the 2 GB of DDR RAM can only be accessed by the CPU portion of the APU, and acts as a kind of reserve, a minimum baseline of CPU-only RAM set aside for certain CPU-specific tasks.
The PS5Pro’s 16 GB of GDDR RAM is sharable and usable by both the CPU and GPU components of the APU.
…
So… saying that you want to have a standard desktop PC build… that shares all of its GDDR and DDR RAM… this is impossible, and nonsensical.
Standard desktop PC motherboards, and compatible GPUs and CPUs… they do not allow for shareable RAM, instead going with a design paradigm where the GPU has its own onboard GDDR RAM that only it can use, and the CPU has DDR RAM that only it can use.
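To make that split concrete, here’s a minimal Python sketch (assuming a standard desktop with a discrete CUDA GPU, plus psutil and PyTorch installed) that just reports the two separate pools:

```python
# Minimal sketch of the "two separate pools" point on a standard desktop PC.
# Assumes psutil + PyTorch are installed and a discrete (CUDA) GPU is present.
import psutil
import torch

# System (DDR) RAM: the pool the CPU uses, plugged into the motherboard.
print("System RAM:", psutil.virtual_memory().total // 2**30, "GiB")

# VRAM (GDDR on the graphics card): a separate pool the CPU cannot address directly.
if torch.cuda.is_available():
    vram = torch.cuda.get_device_properties(0).total_memory
    print("VRAM on GPU 0:", vram // 2**30, "GiB")
```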
You would basically have to pull the board, with an APU soldered onto it, out of a high-end/more modern laptop… and then install that into a ‘desktop PC’ case… to have a ‘desktop PC’ that shares memory between its CPU and GPU components… which would both be encapsulated in a single APU chip.
Roughly this concept, done commercially, is generally called a MiniPC; it’s a fairly niche thing, and not the kind of thing an average prosumer can assemble themselves like a normal desktop PC.
All you can really do is swap out the RAM (if it isn’t soldered) and the SSD… and maybe, I guess, transplant it and the power supply into another case?
I can arrive at that conclusion because I can compare actual benchmark scores from the nearest-TFLOP-equivalent, more publicly documented, architecturally similar AMD APU… the 7600M. I specifically mentioned this in my post.
This guy in the article here … well he notes that the 6700 is a bit more powerful than the PS5Pro’s GPU component.
The 6600 is one step down in terms of mainline desktop PC hardware, and arguably the PS5Pro’s performance is… a bit better than a 6600, a bit worse than a 6700, but at that level, all of the other differences in the PS5Pro’s architecture amount to a margin of error when trying to precisely dial in whether a 6700 or a 6600 is the closer match.
You can’t do apples to apples spec sheet comparisons… because, as I have now exhaustively explained:
Standard desktop PCs do not share RAM between the GPU and CPU. They also do not share memory interface buses and bandwidth lanes… in standard PCs, these are distinct and separate, because they use different architectures.
I got my results by starting with the (correct*) TFLOPs output of a PS5Pro, finding the nearest equivalent APU with PassMark benchmark scores (reported by hundreds, thousands, or tens of thousands of users), then comparing those PassMark APU scores to PassMark conventional GPU scores, and I ended up ‘fairly close’ to an RX 6600.
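If it helps, that chain looks roughly like this as a Python sketch; the part names are just examples, and the score values are placeholders to fill in from passmark.com, not real numbers:

```python
# Sketch of the comparison chain described above: start from a TFLOPs figure,
# find the nearest documented APU, then the desktop GPU with the closest
# PassMark G3D score. All score values below are placeholders (NOT real data).

apus = {                 # name: (theoretical TFLOPs, PassMark G3D score placeholder)
    "Radeon 7600M": (0.0, 0),
    "Radeon 680M":  (0.0, 0),
}
desktop_gpus = {         # name: PassMark G3D score placeholder
    "RX 6600": 0,
    "RX 6700": 0,
}

def nearest_apu(target_tflops: float):
    """Pick the APU whose theoretical TFLOPs is closest to the target."""
    return min(apus.items(), key=lambda kv: abs(kv[1][0] - target_tflops))

def nearest_desktop_gpu(g3d_score: int):
    """Pick the desktop GPU whose PassMark score is closest to the APU's score."""
    return min(desktop_gpus.items(), key=lambda kv: abs(kv[1] - g3d_score))

apu_name, (_, apu_score) = nearest_apu(16.7)   # PS5 Pro's reported TFLOPs
print(apu_name, "->", nearest_desktop_gpu(apu_score)[0])
```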
…
You, on the other hand, just linked to a Tom’s Hardware review of currently-in-production desktop PC GPUs… which did not make any mention of the PS5Pro… and then you also acted as if a 6600 was half as powerful as a PS5Pro’s GPU component… which is wildly off.
A 6700 is nowhere near 2x as powerful as a 6600.
2x as powerful as an AMD RX 6600… would be roughly an AMD RX 7900 XTX, the literal top-end card of AMD’s previous GPU generation… which is currently selling for something like $1250 +/- $200, depending on which retailer you look at, their current stock levels, and which variant from which partner manufacturer you’re going for.
Just to add to this: the reason you only see shared-memory setups on PCs with integrated graphics is that sharing lowers performance compared to dedicated memory. That’s less of a problem if your GPU is only being used in 2D mode, such as for office work (mainly because that uses little memory), but more of a problem in 3D mode (as in most modern games), which is how the PS5 is meant to be used most of the time.
So the PS5 having shared memory is not a good thing, and actually makes it inferior to a PC built with a GPU and CPU of similar processing power using the dominant gaming-PC architecture (separate memory).
You’ve got that a bit backwards. Integrated memory on a desktop computer is more “partitioned” than shared - there’s a chunk for the CPU and a chunk for the GPU, and it’s usually quite slow memory by the standards of graphics cards. The integrated memory on a console is completely shared, and very fast. The GPU works at its full speed, and the CPU is able to do a couple of things that are impossible to do with good performance on a desktop computer:
I… uh… what?
Integrated memory, on a desktop PC?
Genuinely: What are you talking about?
Typical PCs (and still many laptops)… have a CPU that uses the DDR RAM that is plugged into the mobo and can be removed. Even many laptops allow the DDR RAM to be removed and replaced, though working on a laptop can often be much, much more finicky.
GPUs have their own GDDR RAM, either built onto the whole add-in board in a desktop, or inside of (or otherwise part of) the laptop GPU chip itself.
These are totally different kinds of RAM, accessed via distinct buses; they are not shared, and they are not partitioned, not on desktop PCs nor on most laptops.
They are physically and by-design distinct, set aside, and specialized to perform with their respective processor.
The kind of RAM you are talking about, that is shared/partitioned, is LPDDR RAM… and it is incompatible with 99% of desktop PCs.
…
Also… anything, on a desktop PC, that gets loaded and processed by the GPU… does at some point, have to go through the CPU and its DDR RAM first.
The CPU governs the actual instructions to, and output from, the GPU.
A GPU on its own cannot, like, ask an SSD or HDD for a texture or 3D model or shader.
(addition to the quote is mine)
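Concretely, the normal desktop asset path looks something like the sketch below; it’s a hedged Python example assuming PyTorch, numpy, Pillow, and a CUDA GPU, and the "texture.png" filename is hypothetical:

```python
# Sketch of the normal desktop asset path: disk -> CPU/system RAM -> VRAM.
# The GPU never fetches "texture.png" from the SSD by itself; the CPU does the I/O.
# Assumes numpy, Pillow, PyTorch, and a CUDA GPU; the filename is hypothetical.
import numpy as np
import torch
from PIL import Image

pixels = np.asarray(Image.open("texture.png"))   # 1) CPU reads the file into system RAM
staging = torch.from_numpy(pixels.copy())        # 2) still sitting in system (DDR) RAM
vram_copy = staging.to("cuda")                   # 3) explicit upload over PCIe into GDDR
```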
Like… there is DirectStorage… but basically nothing actually uses it.
https://www.pcworld.com/article/2609584/what-happened-to-directstorage-why-dont-more-pc-games-use-it.html
Maybe it’ll take off someday, maybe not.
Nobody does dual GPU SLI anymore, but I also remember back when people thought multithreading and multicore CPUs would never take off, because coding for multiple threads is too haaaaarrrrd, lol.
…
Anyway, the reason that emulators have problems doing the things you describe consoles as being good at… is that consoles have fine-tuned drivers that work with only a specific set of hardware, and emulators have to reverse-engineer ways of doing the same that will work on all possible PC hardware configurations.
People who make emulators generally do not have direct access to the actual proprietary driver code used by console hardware.
If they did, they would much, much more easily be able to… emulate… similar calls and instruction sets on other PC hardware.
But they usually just have to make this shit up on the fly, with no actual knowledge of how the actual console drivers do it.
Reverse engineering is astonishingly more difficult when you don’t have the source code, the proverbial instruction manual.
It’s not that desktop PC architecture… just literally cannot do it.
If that were the case, all the same issues you bring up that are specific to emulators… would also be present with console games that have proper ports to PC.
While yes, this is occasionally the case for some specific games with poor-quality ports… generally, no, this is not true.
Try running, say, the emulated Xbox version of Deus Ex: Invisible War, a game notoriously handicapped by its console-centric design… compare the PC version of that, on a PC… to that same game, but emulating the Xbox version, on the exact same PC.
You will almost certainly, for almost every console game with a PC port… find that the proper PC version runs better, often much, much better.
The problem isn’t the PC’s hardware capabilities.
The problem is that emulation is inefficient guesswork.
Like, no shade at emulator developers whatsoever, it’s a miracle any of that shit works at all, reverse engineering is astonishingly difficult, but yeah, reverse engineering driver or lower-level code, without any documentation or source code, is gonna be a bunch of bullshit hacks that happen to not make your PC instantly explode, lol.
When two processing devices try to access the same memory there are contention problems, since the memory cannot be accessed by both devices at the same time (well, sorta: parallel reads are fine; it’s when one side is writing that there can be problems), so one of the devices has to wait. That makes it slower than dedicated memory, but the slowdown is not constant, since it depends on the memory access patterns of both devices.
There are ways to improve this: for example, if you have multiple channels on the same memory module, then contention issues are reduced to accesses within the same memory block (which depends on the block size), though this also means that parallel processing on the same device - i.e. multiple cores - cannot use the channels being used by a different device, so it’s slower.
There are also additional problems with things like memory caches in the CPU and GPU - if an area of memory cached in one device is altered by a different device, that has to be detected and the cache entry removed or marked as dirty. Again, this reduces performance versus situations where there aren’t multiple processing devices sharing memory.
In practice the performance impact is highly dependent on if and how the memory is partitioned between the devices, as well as on the amount of parallelism in both processing devices (the latter because, per my point above, memory modules have a limited number of memory channels, so multiple parallel accesses to the same memory module from both devices can lead to stalls in cores of one or both devices when not enough channels are available for both).
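To put very rough numbers on the bandwidth side of that (purely illustrative figures, not measurements of any real system; it also ignores the finer-grained collision effects described above), here’s a quick Python sketch:

```python
# Back-of-the-envelope illustration of the shared-vs-dedicated point above.
# All bandwidth numbers are illustrative placeholders, not measurements of real hardware.

shared_bus_gbps = 448.0      # one shared GDDR pool serving both CPU and GPU
cpu_demand_gbps = 50.0       # hypothetical CPU traffic on that shared bus
gpu_left_over = shared_bus_gbps - cpu_demand_gbps
print(f"Shared pool: GPU gets at most ~{gpu_left_over:.0f} GB/s while the CPU is busy")

# Dedicated pools: each processor keeps its own full bus to itself.
ddr_bus_gbps, gddr_bus_gbps = 60.0, 448.0
print(f"Split pools: CPU keeps {ddr_bus_gbps} GB/s of DDR, GPU keeps all {gddr_bus_gbps} GB/s of GDDR")
```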
As for the examples you gave, they’re not exactly great:
I don’t think that direct access by the CPU to manipulate GPU data is at all a good thing (for the reasons given above), and to get proper performance out of a shared-memory setup, at the very least the programming must be done in a special way that tries to reduce collisions in memory access, or the whole thing must be set up by the OS like it’s done on PCs with integrated graphics, where a part of the main memory is reserved for the GPU by the OS itself when it starts and the CPU won’t touch that memory after that.
Basically this is true, yes, without going into an exhaustive level of detail as to very, very specific subtypes and specs of different RAM and mobo layouts.
Shared-memory setups generally are less powerful, but they also usually end up being overall cheaper, as well as having a lower power draw… and running cooler, temperature-wise.
Which are all legitimate reasons those kinds of setups are used in smaller-form-factor ‘computing devices’, because heat management and airflow requirements… basically rule out using a traditional architecture.
…
Though, recently, MiniPCs are starting to take off… and I am actually considering doing a build based on the Minisforum BD795i SE… which could be quite a powerful workstation/gaming rig.
Aside about interesting non standard 'desktop' potential build
This is a mobo with a high-end integrated AMD mobile CPU (7945HX)… that, all together, costs about $430.
And the CPU in this thing… has a PassMark score… of about the same as an AMD 9900X… which itself, the CPU alone, MSRPs for about $400.
So that is kind of bonkers, get a high end Mobo and CPU… for the price of a high end CPU.
Oh, I forgot to mention: This BD795iSE board?
Yeah, it just has a standard PCIe x16 slot. So… you can plug any 2-slot-width standard desktop GPU into it… and all of this either literally is, or basically is, the ITX form factor.
So, you could make a whole build out of this that would be ITX form factor, and also absurdly powerful, or a budget version with a dinky GPU.
I was talking in another thread a few days ago, and someone said PC architecture may be headed toward… basically, you have the entire PC, and the GPU, and that’s the new paradigm, instead of the old-school view of: you have a mobo, and you pick it based on its capability to support future CPUs in the same socket type, future RAM upgrades, etc…
And this intrigued me, I looked into it, and yeah, this concept does have cost per performance merit at this point.
So this uses a split between the GPU having its own GDDR RAM and the… CPU using DDR5 SODIMM (laptop form factor) RAM.
But it’s also designed such that you can actually fit huge, standard PC-style cooling fans… into quite a compact form factor.
From what I can vaguely tell as a non Chinese speaker… it seems like there are many more people over in China who have been making high end, custom, desktop gaming rigs out of this laptop/mobile style architecture for a decent while now, and only recently has this concept even really entered into the English speaking world/market, that you can actually build your own rig this way.
Fascinating discourse here. Love it.
What about a Framework laptop motherboard in a mini PC case? Do they ship with AMD APUs equivalent to that?
$850 is way more expensive than a PS5 though lol. Linux also means you can’t play the games that top the most played charts on the PS5 every single month of every single year.
https://www.metacritic.com/pictures/best-playstation-games-of-2024/
Works on Linux:
Prince of Persia: The Lost Crown
Silent Hill 2 (Remake)
Marvel vs. Capcom Fighting Collection: Arcade Classics
Shin Megami Tensei (V)engeance
Persona 3 Reload
Hi-Fi Rush
Animal Well
Castlevania Dominus Collection
Like A Dragon: Infinite Wealth
Tekken 8
The Last of Us Part II (Remaster)
Balatro
Dave the Diver
Slay the Princess: Pristine Cut
Metaphor: ReFantazio
Elden Ring: Shadow of the Erdtree (and base game)
Does not work on Linux:
Unicorn Overlord (Console Exclusive, No PC Port Allowed by Publisher Vanillaware)
Destiny 2 (Kernel Level Anti Cheat)
FF VII Rebirth (PS Exclusive)
Astro Bot (PS Exclusive)
…
Damn, yeah, still consoles gotta hold on via exclusives, I guess?
And then there’s the mismanaged shitshow that is Destiny 2…
…who can’t figure out how to do AntiCheat without installing a rootkit on your PC, despite functional, working AntiCheats having worked on linux games for at least half a decade at this point, if not longer…
…nor can they figure out how to write a storyline that rises above ‘everyone is always lore-dumping instead of talking, and also they talk to you like you’re a 10-year-old while doing so.’
Last I heard, a whole bunch of hardcore D2 youtubers and streamers were basically all quitting out of frustration and feeling let down or betrayed by Bungie.
…
Maybe we should advocate for some freedom of platform porting/publishing for all games, eh FreedomAdvocate?
Highest rated != most played or most popular.
COD MP.
Warzone.
Fortnite.
GTA Online.
Not on Linux.
Most Call of Duty games work on linux; you’re gonna have to be more specific as to which particular one of the like 25 you mean by ‘COD’.
The ones that don’t, they don’t work because the devs are too lazy or incompetent (or specifically told not to by their bosses) to make an AntiCheat that isn’t a rootkit with full access to your entire PC.
I used to play GTA V Online (and RDR2, and FiveM, and RedM…) on linux all the time, literally for years… until they just decided to ban all linux players.
IMO they owe me money for that, but oh well I guess.
…
Again, there are many AntiCheats that work on linux, and have worked on linux for years and years now.
Easy Anti-Cheat and BattlEye even offer linux support to game devs. There are some games with these ACs that actually do support linux.
But many game devs/studios/publishers just don’t use this support… because then there wouldn’t be any reason to actually use Windows, and MSFT pays these studios a lot of money… or they just literally own them (Activision/Blizzard = MSFT).
Kernel Anti Cheat that only works on Windows?
Yep, that’s just a complicated way to enforce Windows exclusivity in PC games.
Go look up how many hacks and trainers you can find for one of these games you mention.
You may notice that they are all designed for, and only work on… Windows.
The idea that all linux gamers are malicious hackers is a laughable, obviously false idea… but game company execs understand the power of rabid irrational fandoms.
…
You are right that you can’t run games with rootkit anticheats on linux though, so if those heavily monetized and manipulative games with toxic playerbases are your addiction of choice, yep, sorry, linux ain’t your hookup for those.
Again, this is another game platform freedom advocacy issue, and also a personal information security advocacy issue, not a ‘something is wrong with linux’ issue.
Game companies have gotten many working anticheat systems to work with linux. The most popular third party anticheat systems also support linux.
But the industry is clever at keeping people locked into their for profit, insecure OSs that spy on their entire system.
You don’t need a graphics card. You can get mini PCs with decent gaming performance for cheap these days.
Can confirm. I wouldn’t recommend it unless you mostly play indie games, though.
The ones with capable GPUs cost as much as a PS5 Pro.
There are CPUs with quite capable iGPUs that fit in a mini-PC. All in all, maybe $500.
And yeah, sure, the article mentioned that consoles are subsidized by game prices.
Go on then. Which ones.
I have a Ryzen 7 5700G in my DeskMini X300, but that one is a generation (?) ago. Still, it can play almost all games at 3440x1440 on medium settings.
In case you have seen my “string and tape” case mod to fit the cooler, that was done to support Turbo for video recoding. Noctua NH-L9a-AM4 fits nicely.
Knowing the usefulness that we’ve gotten at our house out of having them, I would probably say if I didn’t have the PS5 I would get a steam deck at this point. A refurbished one from valve when they’re on sale would be my pick. Plus, it works on my 20 year catalog of games.
By decent you meant significantly worse than console gaming performance though.
Consoles are still the king of value in gaming, even with their increasing prices.
Interesting point. Then you understand why Apple is making moves to try to be a real player in gaming.
All three of us see how gaming performance is plateauing across various hardware types to the point that a modern game can run on a wide range of hardware. With settings changes to scale across the hardware, of course.
Or are you going to be a bummer and claim it’s only mini pcs that get this benefit. Not consoles, not VR headsets, not macs, not Linux laptops.
There really is a situation going on where there is a large body of hardware in a similar place on the performance curve in a way that wasn’t always true in the past. Historically, major performance gains were made every few years. And various platforms were on very different and less interoperable hardware architectures, etc.
The Steam Deck’s success proves my point, and your point alone.
The thing is, people don’t wanna hear it. They wanna focus on the very high end. Or super high refresh rates. Or they wanna complain about library sizes.
That sounds kind of like a console, no?
Edit: I mean, if the intent is gaming and only gaming, it feels like there’s a lot of overlap. Only the PC would have less support for more freedom.