Friday, May 8, 2026

Lack of game optimization is getting out of control

When Nvidia first introduced its smart upscaling technology, DLSS (which uses neural networks to upscale a lower-resolution image to a higher resolution with a much better end result than naive upscaling), the idea was simple: to allow even older hardware to run newer games at a decent framerate.

After all, this has been a bane of PC gaming since the beginning of time: Newer games require faster hardware, and thus PCs that are 10 years old usually have a hard time running those games at an acceptable resolution and framerate.

Smart upscaling offers a good solution to that problem: The 10-year-old PC can now render the game at a much lower resolution (thus increasing the framerate to acceptable levels) and DLSS upscales it to a higher resolution, with the end result looking almost as good as if the game had been rendered at that higher resolution to begin with.

(Of course, DLSS requires at least an RTX 20 series card, as that was the first generation to support it, but the idea is that 10+ years after its release, even newer and more demanding games could still be run on that, by then 10-year-old, hardware at an acceptable resolution and framerate.)

Some years later Nvidia developed the next step in this idea: frame generation. In other words, not only could the PC render at a lower resolution (with the end result still looking high-resolution), it could also render only every second frame, with the in-between frames being generated by a neural network, thus effectively almost doubling the framerate (at a very small cost in latency).

So, for example, an older PC could render the game at a resolution of, say, 960x540 pixels at 30 frames per second, and DLSS would convert it to 1920x1080 pixels at 60 frames per second. Thus, even older hardware could get a full 1080p experience with newer games.
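To put some rough numbers on that example, here's a minimal back-of-the-envelope sketch in Python (the figures are just the ones from the paragraph above, not anything Nvidia actually specifies, and real games and DLSS modes vary):

    # Back-of-the-envelope: how many pixels the GPU actually renders when both
    # DLSS upscaling and frame generation are used, versus what ends up on screen.
    # (Example figures from the paragraph above; real games and DLSS modes vary.)

    render_w, render_h, render_fps = 960, 540, 30      # what the GPU natively renders
    output_w, output_h, output_fps = 1920, 1080, 60    # what the player actually sees

    rendered_per_second  = render_w * render_h * render_fps
    displayed_per_second = output_w * output_h * output_fps

    print(f"Natively rendered: {rendered_per_second:,} pixels/s")
    print(f"Displayed:         {displayed_per_second:,} pixels/s")
    print(f"Fraction natively rendered: {rendered_per_second / displayed_per_second:.1%}")
    # -> 12.5%: a quarter of the resolution at half the framerate; everything
    #    else is reconstructed by the neural networks.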

In other words, you wouldn't necessarily need to upgrade your RTX 2070 system to an RTX 6070 (or whatever) in order to run newer games acceptably.

In other words, Nvidia quite clearly originally envisioned game developers continuing to develop and optimize their games as normal, with DLSS helping those games run even on older hardware.

However, that's not what happened.

What happened instead is that many game developers are now using DLSS as a crutch, an excuse not to optimize their games properly. More and more games require DLSS to run at acceptable resolutions and framerates even on the latest hardware, while not looking any better, for the simple reason that developers are saving time and money by not optimizing them.

In other words, more and more developers are taking the lazy route of "why spend months optimizing our game to run at 60 FPS when we can just use DLSS?"

So many new games now require DLSS even on the newest hardware and, thus, will not run well on older hardware (including the RTX 20 series), even though helping older hardware was supposed to be the entire core idea of DLSS! Even with DLSS these games run like crap on older cards.

It doesn't exactly help that more and more studios are jumping onto the Unreal Engine 5 bandwagon, and said engine is, for some reason, astonishingly inefficient (much more so than Unreal Engine 4, even for content that looks the same).

More and more people are noticing how utterly inefficient Unreal Engine 5 games are, especially compared to Unreal Engine 4 games. And what's worse, the former do not look particularly better than the latter. (In fact, sometimes it's even the opposite: Many current Unreal Engine 5 games actually look visually worse than many Unreal Engine 4 games from 10 years ago.)

Why are so many game studios jumping to Unreal Engine 5, rather than Unreal Engine 4? I have no idea.

The situation has become so bad that several recently published games are actually listing DLSS and frame generation in their recommended specs. Astonishingly, some are even listing them in their minimum required specs!

And some of those games don't even visually justify that requirement. The most infamous recent example is the latest Lego Batman game, which lists DLSS as a minimum requirement even though it has the visual quality of games from over 15 years ago.

Can you guess which game engine that Lego Batman game uses? (If you guessed "Unreal Engine 5", you would be absolutely correct.)

This entire thing is getting completely out of control. We are already getting games in the Lego series that require DLSS to run properly. 

Thursday, April 30, 2026

Do not get fooled by deceptive core counts of Intel CPUs

For the last two or three CPU generations now, Intel has been boasting about high core counts: 14 cores, 20 cores, 28 cores, sometimes even more!

For example, you may be tempted by, say, an Intel i5-14600, which has a relatively affordable price and a whopping 14 cores!

It wasn't even that long ago that having just 8 cores was high-end; now we are getting 14 cores or more even in mid-tier CPUs at quite affordable prices.

Except that that "14 cores" figure is deceptive. In reality, for practical purposes that benefit from CPU cores, like video games and other CPU-intensive applications, that processor has 6 cores.

That's right. Not 14, but 6 cores.

Intel divides the cores into "performance-cores" (of which there are 6 in this model) and "efficient-cores" (the remaining 8). While Intel isn't extremely secretive about the difference between the two core types, they don't advertise it very visibly either.

In reality the difference in performance between these two core types is enormous. The "efficient-cores" are not designed for computation-heavy tasks (such as video games or rendering). They are designed to run the very lightweight background tasks that operating systems typically run dozens of at all times.

The "efficient-cores" are significantly slower than the full "performance-cores". According to my casual testing, those 8 "efficient-cores" combined might get you about the performance of 1 "performance-core", give or take.

So for all intents and purposes, from the perspective of computational power, the i5-14600 has about 7 cores, not 14 (and one of those 7 cores is kind of split into 8 "slow cores".) In other words, if you were to run a CPU-intensive task on all 14 cores, you may get the speed of about 7 full cores, give or take.
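Spelled out as a quick sketch (the "8 efficient-cores ≈ 1 performance-core" figure is just my own rough estimate from casual testing, not an official Intel number):

    # Rough "effective core count" of a hybrid Intel CPU for CPU-heavy workloads,
    # using my own guesstimate that ~8 efficient-cores add up to about 1 performance-core.

    p_cores = 6   # performance-cores in the i5-14600
    e_cores = 8   # efficient-cores in the i5-14600

    E_CORES_PER_P_CORE = 8   # rough, casual-benchmark estimate, not an official figure

    effective = p_cores + e_cores / E_CORES_PER_P_CORE
    print(f"Marketed cores:  {p_cores + e_cores}")      # -> 14
    print(f"Effective cores: {effective:g}")            # -> 7, give or take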

In summary: From the point of view of video games and CPU-heavy tasks, the i5-14600 has 6 cores, not 14. It should be thought of as a "6-core CPU" not a "14-core CPU".

The same goes for all Intel CPUs with "performance-cores" and "efficient-cores". If you care about the actual number of cores that such a CPU has, just look at the former number, not the total.

(Yes, there are advantages to the efficient-cores, as they consume less energy and take the burden of running lightweight background tasks off the main cores, but they should not be thought of as main cores themselves, only as small auxiliary cores. They will not make your games run faster.)

Friday, April 17, 2026

Is the "gaming" label in PC peripherals just a marketing gimmick?

For quite a while now, probably 15 to 20 years, a lot of PC peripherals have been marketed with the label "gaming". Heck, even things like chairs have been marked with that label.

But does that label actually mean anything, does it make any actual difference, or is it just a meaningless marketing gimmick?

With some peripherals it may well be completely meaningless, and the device is just a normal one, no different from the non-"gaming" versions from the same manufacturer.

With some peripherals, such as mice, SSDs, GPUs and RAM, the "gaming" label might just be slapped onto higher-end products, such as high-DPI mice and faster SSDs, GPUs and RAM. So it's essentially a marketing gimmick in that it replaces some technical term (like "high-DPI") with a term that sells better (ie. "gaming"). So, whether that label, "gaming", is actually meaningful is a bit of a matter of definition. In general, not really.

However, there is one type of peripheral where "gaming" might actually be meaningful and actually affect how the device has been designed and manufactured, rather than being either meaningless marketing drivel or just a synonym for "higher-end product".

And that's "gaming" keyboards. At least in some cases.

How so?

The vast majority of keyboards do not support every single possible combination of simultaneous key presses. For example, even a simple 104-key keyboard has 2^104 possible keypress combinations, which is an absolutely humongous amount. Even a 64-bit value wouldn't be able to represent all of them.

Instead, most if not all keyboards have an internal circuitry design that supports only some combinations of simultaneous keys, but not nearly all of them. Typically the keys are, essentially, internally wired in a sort of grid pattern where keys on different rows and columns of the grid can be recognized simultaneously, but ones on the same rows or columns cannot. (In reality it's a bit more complicated than this, but that's the essential idea.) This saves an enormous amount of circuitry and electronic components, and thus it's much more cost-effective.
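To illustrate the idea, here's a toy simulation of such a diode-less scan matrix (the 4x4 wiring below is entirely made up for illustration; no real keyboard is wired exactly like this). When three pressed keys form three corners of a rectangle in the matrix, the scan also sees a fourth, phantom key, which is exactly the kind of ambiguity the wiring choices have to work around:

    # Toy simulation of a diode-less keyboard scan matrix and the "ghosting" it
    # causes. The 4x4 wiring below is completely made up for illustration; real
    # keyboards route their matrix however the designers choose.

    # Which (row, column) position each key is wired to in this toy matrix.
    MATRIX = {
        "Q": (0, 0), "W": (0, 1), "E": (0, 2), "R": (0, 3),
        "A": (1, 0), "S": (1, 1), "D": (1, 2), "F": (1, 3),
        "Z": (2, 0), "X": (2, 1), "C": (2, 2), "V": (2, 3),
        "1": (3, 0), "2": (3, 1), "3": (3, 2), "4": (3, 3),
    }

    def scan(pressed):
        """Return the set of keys the controller *thinks* are pressed.

        Without per-key diodes, current can sneak through other closed switches,
        so a column can read active even when the key at (driven row, column)
        is not actually down.
        """
        closed = [MATRIX[k] for k in pressed]           # closed switches
        detected = set()
        for key, (row, col) in MATRIX.items():
            # Is there a conductive path from `row` to `col` through closed switches?
            rows, cols = {row}, set()
            changed = True
            while changed:
                changed = False
                for r, c in closed:
                    if r in rows and c not in cols:
                        cols.add(c); changed = True
                    if c in cols and r not in rows:
                        rows.add(r); changed = True
            if col in cols:
                detected.add(key)
        return detected

    print(sorted(scan({"Q", "W", "A"})))   # -> ['A', 'Q', 'S', 'W']: 'S' is a ghost

Real controllers typically detect this kind of ambiguity and simply refuse to register the extra key presses rather than report phantom keys, which is why in a bad combination only some of the keys show up at all.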

This "grid" doesn't need to follow the physical layout of the keyboard, though. The designers can route the connections however they want, thus shuffling the grid elements around to cover pretty much whatever keys they want (so, for example, one "row" of keys might consists of completely and seemingly randomly placed keys on the physical keyboard.)

Thus, the designers of the keyboard circuitry have a choice to make when it comes to which key presses are supported simultaneously and which aren't.

And here's where the "gaming" aspect of the keyboard's design kicks in, quite literally: Usually the upper-left cluster of alphabetical and numerical keys on a "gaming" keyboard will support significantly more simultaneous key presses than the rest of the keyboard, precisely for better support in video games.

For example, I myself own a "gaming" keyboard, and it supports pressing all the ten keys QWERT plus ASDFG simultaneously without problems. However, if I press merely three keys, V, B and N, simultaneously, only two of them will register (the two that I press first).
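If you want to test your own keyboard, a quick sketch like the following (using the third-party pynput library, purely as one possible tool) prints the set of keys the operating system currently sees held down; whatever limit you hit reflects your keyboard's rollover, plus whatever limits the USB/OS layer imposes:

    # Quick-and-dirty rollover test: hold down combinations of keys and watch how
    # many of them actually register at the same time. Press Esc to quit.
    # Requires the third-party pynput package (pip install pynput).

    from pynput import keyboard

    held = set()

    def name(key):
        # KeyCode objects have a .char; special keys (shift, esc, ...) do not.
        return getattr(key, "char", None) or str(key)

    def on_press(key):
        held.add(name(key))
        print(f"{len(held)} keys down: {sorted(held)}")

    def on_release(key):
        held.discard(name(key))
        if key == keyboard.Key.esc:
            return False                    # stop the listener

    with keyboard.Listener(on_press=on_press, on_release=on_release) as listener:
        listener.join()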

An "office" keyboard would not need this peculiar choice of where the "densest" concentration of multiple supported key presses are located, but a "gaming" keyboard most definitely benefits from it.

This might be one of the best examples of where the "gaming" label is not a mere marketing gimmick, but actually indicates a hardware design choice for the explicit support of video games.

Friday, April 10, 2026

Explanation for the astonishingly large "minimum livable" wage in the United States

For quite a while now I have been astonished by what is considered a "minimum livable" wage in the United States. In other words, what is generally considered the absolute minimum yearly income that allows you to survive, barely, on your own without having to rely on charity or governmental welfare.

One of the most common numbers cited for this is 50000 USD a year, ie. about 42000 €. That would be 4170 USD or 3500 € per month.

That number always makes my jaw drop, because 42000 €/year in most European countries, even in the richest and most expensive-to-live-in ones, is a very good salary. It's a decent salary for an engineer in the tech industry, and way, way higher than the pay of most low-level jobs.

Even in the richest and most expensive European countries (eg. the Nordic Countries), a "minimum livable" wage would be about 1500 €/month, ie. 18000 €/year (about 21000 USD/year), although many people are able to live independently with salaries as low as 1000 €/month (about 1200 USD/month). It's not great, but it's livable if you don't have huge expenses.

And those figures are before taxes. On top of that, taxes are much lower in the US than in Europe (particularly in the expensive countries), which means that the difference in net income is even larger.

So if we put all of that in USD for our American friends, that would be:

  • Generally considered "minimum livable" income:
    • US: 50000 USD/year
    • Europe: 21000 USD/year
  • "Barely survivable" extremely low income:
    • US: 30000 USD/year
    • Europe: 14000 USD/year
  • Decent income for a senior tech engineer:
    • US: 150000+ USD/year
    • Europe: 70000+ USD/year

And that is, as mentioned, before taxes. After taxes the difference is even bigger (because taxes are so much lower in the United States.)

How is this even possible? Well, I did a bit of research about this, and here are a few reasons for the disparity:

* Firstly, typical rent is significantly higher in the United States. Where the monthly rent of a small apartment in a small-to-medium-size city in Europe is typically somewhere around 450 to 600 USD, an equivalent small apartment in an equivalent city in the United States typically has a monthly rent of 1200 to 1500 USD, or even higher. That's like triple. With larger apartments the difference can be even bigger.

There are many economic reasons for this disparity. Also, from the perspective of a "senior tech engineer", apartment rents skyrocket in the cities with the highest concentration of tech companies, where those engineers work. Even a very small apartment can have a monthly rent well in excess of 2500 USD. That's like five times more than a typical small apartment in Europe (even in comparable cities). It's all about supply and demand.

* Secondly, health insurance is almost mandatory, unless you plan to never get sick or injured. The costs of health insurance vary a lot, but the absolute minimum is typically somewhere in the ballpark of 8000 to 10000 USD per year (unless the employer participates in this expense as a job benefit, which some do, but many of the smallest/cheapest companies don't.)

That's the "barely survivable" European income almost on its own (especially after taxes).

In Europe, of course, there are pretty much no expenses related to health services (or even if there are, they tend to be extremely small in comparison.) 

* Thirdly, unlike in Europe, public transport services in the United States are absolutely abysmal. In the biggest cities it can be decent and in some places you can actually survive without owning a car, but in most places owning a car is pretty much mandatory in practice, else you'll have a really hard time getting anywhere (including work.)

Cars and fuel are significantly cheaper in the United States than in Europe (especially the Nordic Countries), but they nevertheless eat a good chunk of your yearly income, easily as much as the health insurance, if not even more. In most of Europe, however, if you can't afford a car it's perfectly possible to survive on public transport alone (public transport services tend to be excellent, even in small cities and towns.)

Wednesday, April 1, 2026

False myths: Subliminal ads inserted into movie frames

Since the early 1980s, and even much earlier, there was, at least in many parts of the world, a widespread notion that some movie producers had at least considered inserting a form of subliminal advertising into the movie reels they sent to theaters: an advertisement picture (eg. for a brand of soda, or whatever) would be shown for one frame of the movie, eg. every 24 frames, ie. once per second.

The widely believed claim was that since the picture was only shown for one single frame, it would go too fast for anybody to consciously notice, but the subconscious would notice it, especially since it was shown repeatedly once per second during the entire movie, and thus it would create a subconscious craving for that particular product in the viewers.

This notion was so widely believed that, in fact, many countries outright passed laws banning this from being done.

The funny thing is that many people believed that claim, ie. that you wouldn't notice the advertisement picture if it was shown for one single frame, without ever having tested it. The factoid was just repeated over and over and over. I, in fact, heard it from my primary school teacher, who repeated it in all seriousness, without criticism or doubt. Many people to this day, in 2026, still believe it.

This is particularly funny because of how obviously false it is. Just try it: Create a video at a framerate of 24 frames per second (which was, and still is, the standard framerate for movie theater films), put a static picture that has nothing to do with the rest of the video on every 24th frame, and then play it back at that speed: The picture flashing once per second will be extremely obvious. Completely impossible to miss. Even someone who has no idea what's going on will clearly see it when shown the video.
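For example, a rough sketch like this (using OpenCV and NumPy; the "ad" here is just a placeholder frame with some text, and the stand-in "movie" is a plain gray image) builds exactly such a 24 fps test clip:

    # Build a short 24 fps test clip where every 24th frame is replaced by a
    # static "ad" frame, to see for yourself how obvious the flash is.
    # Requires: pip install opencv-python numpy

    import cv2
    import numpy as np

    FPS, W, H, SECONDS = 24, 640, 360, 10

    writer = cv2.VideoWriter("subliminal_test.mp4",
                             cv2.VideoWriter_fourcc(*"mp4v"), FPS, (W, H))

    # Placeholder "ad": a bright red frame with some text.
    ad = np.full((H, W, 3), (0, 0, 255), dtype=np.uint8)          # BGR red
    cv2.putText(ad, "DRINK SODA", (80, H // 2),
                cv2.FONT_HERSHEY_SIMPLEX, 2, (255, 255, 255), 4)

    for i in range(FPS * SECONDS):
        if i % FPS == FPS - 1:
            frame = ad                                   # one "ad" frame per second
        else:
            # Stand-in for the actual movie: a slowly pulsing gray image.
            frame = np.full((H, W, 3), 40 + (i % FPS) * 5, dtype=np.uint8)
        writer.write(frame)

    writer.release()
    print("Wrote subliminal_test.mp4 - play it and see if you can miss the flashes.")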

Even if you don't actually replace the entire frame of the original video with the ad picture, but instead embed the ad (for example, say, the Coca Cola logo) into the original frame, like putting it in a corner with a transparent background, you will still very clearly see it flashing every second (or whatever interval you used). You'll likely even be able to read what it says.

1/24th of a second is nowhere near fast enough for you not to notice it. Neither is the 1/30th of a second used by NTSC (and most online videos, eg. on YouTube). Not even 1/60th of a second, if you were to create a 60-fps video, would be enough. It might be less obvious and you might be less able to read what it says, but the flashing would still be quite noticeable.

That original myth was just repeated blindly, and people just believed it, without ever raising any doubts or ever actually testing it.

Sunday, March 29, 2026

Is Jimi Hendrix really the best guitarist of all time?

If you make a search like "best guitarists of all time", there is one name that will almost always appear on the top of the list: Jimi Hendrix.

By this point it's pretty much a custom, if not outright a rule: The rest of the list is up to your opinion, but Hendrix must always be put on the top. If not, you are just an ignorant fool who doesn't know anything about anything, especially guitar music. There might be a few lists out there that dare to not put him on the top (or even the top 10), but those are extremely rare.

But the thing is: Objectively speaking, is he really the "best" or "greatest" guitarist of all time?

In terms of playing skill and technique, quite arguably no. Not by a long shot. There have been many guitarists after him who, in every measurable way, objective and subjective, are better in terms of skill and technique. If we expand the definition of "guitarist" beyond just the electric guitar, then arguably there were much better guitarists on the classical guitar long before Hendrix (one good candidate would be Francisco Tárrega, and a good contemporary candidate would be Paco de Lucía. There are many, many others as well.)

However, when people make these lists, they are not merely talking about raw skill and technique. Well, at least not when talking about Hendrix. In his case the definition of "best" and "greatest" is expanded to mean something like "the most influential guitarist who most innovated and contributed to the art and technique of electric guitar."

Sure. Hendrix was a great innovator. While he is not the only guitarist who invented, developed and refined new ways to play the electric guitar, he came up with a lot of the craft, a lot of the innovation, and in many cases he either did it first or greatly improved on more primitive playing techniques. He was also one of the first to use electronic devices to affect the sound of the guitar, ie. to apply real-time sound effects to it. In fact, he had a close friend who developed a lot of the effect pedals he used (which are still in use today in one form or another.)

But does that make him "the greatest guitarist of all time"? Well, here's where the big subjectivity of that question kicks in, as it depends heavily on how you define "greatest" or "best".

If such lists were named "the greatest innovators of electric guitar playing" then I would have zero problems in having Hendrix on the top. No question about it. However, "the greatest guitarist of all time", without further qualifiers?

The entire concept of "greatest", without further qualifiers, is so ambiguous and up to definition and opinion. 

Sunday, March 8, 2026

Final Fantasy VII Remake / Rebirth miss the mark

Note that technically speaking this post contains a spoiler of the idea of these games, but said idea is actually so vague and, in fact, so unconfirmed that it may just as well not count as a spoiler at all.

When Final Fantasy VII Remake was announced, it seemed that it would be just that: A modern remake of the original Final Fantasy VII. In other words, same story, same characters, same events, but completely newly made from the ground up using modern game technology.

And that it was... to a large extent. The game actually contains aspects that the original game did not. And these are not just cosmetic, or something like ancillary extra mini-games or side quests or the like, but embedded right into the main story.

You see, from time to time some dark ghost-like beings appear and affect the events that are happening. It is explained that these are so-called "Whispers", and it's implied that they intervene to ensure that the course of destiny is not altered, correcting any deviations from it.

It is not, however, very clearly explained what this actually means, what the significance of the introduction of these "Whisper" entities is, and why they are there.

It is somewhat implied, and people have speculated (and here comes the "spoiler"), that this is actually not just the original Final Fantasy VII using modern graphics and technology. This is actually a parallel universe, or a parallel history, where things have subtly changed. And, moreover and most crucially, it is hinted that the Sephiroth of another timeline is the one trying to change the events (and the "Whispers" are trying to undo what he is doing.)

In other words, this new Final Fantasy VII Remake is, possibly, a story of "what would happen if Sephiroth from another parallel dimension or timeline tried to change the course of history of the original game?"

If that's indeed what the idea is, it would actually be an awesome one. There are so many possibilities that could be derived from it. Another parallel-universe Sephiroth is trying to change the course of the original events so that he does not lose in the end; he is trying to meddle with the timeline to change things in his favor. The Whispers are trying to undo his meddling, but he is trying to do it anyway.

Just imagine the possibilities.

And the title of the game, "Remake"? It not only alludes to this being a literal remake of the original, but it also alludes to this in-universe change in the story: Sephiroth is literally trying to "remake" the universe for his own gain. 

The problem? Or, more accurately, problems:

Firstly, this entire idea is extremely vaguely and poorly explained in the game. I myself didn't understand it until I read about it online later. While playing the game I wasn't even aware that something like this might be going on, or that this was the core idea of the remake. And even after the explanation, it's still not unambiguously clear that this is actually what they are going for.

It would have been so much cooler if the game had depicted these changes in the timeline much more clearly and unambiguously. Something clearly goes differently from the original. Perhaps one of the characters (perhaps Aerith, who is quite special, or perhaps Cloud) gets an instinct that "something is not as it should be." Something like that. Something that clearly signals that something has changed, something is different from the original, from what it should have been.

But no, neither game so far does this at all. (There are some extremely vague and unexplained things happening to Cloud, like him being confused at points or having some kind of reaction to things, but it's left completely unclear what any of it is about.)

Secondly, both games follow the original's plot way too closely, if this is indeed the direction they are going for. Heck, people have made step-by-step comparisons of the key events and plot points of the new games and the original, and the new ones follow them really, really closely, without much change. The second game especially seems to have dropped the entire idea of the "Whispers" almost completely, and feels like just a pure remake of the middle third of the original.

If the core idea of this remake trilogy is to have Sephiroth change the timeline from the original, Square is doing a really poor job of implementing it. In fact, after having played the first two games in the trilogy, it almost feels like Square tried to go a bit in that direction in the first game, barely, and then almost completely abandoned the idea in the second game! I don't remember seeing a single allusion to any changes happening (if there were, they were extremely subtle and inconspicuous. Pretty much "easter eggs.")

So far the games have been a disappointment, and that interesting core idea has been completely wasted. We'll see if the third game will change things. I'm guessing not, with perhaps the exception of the finale. (And possibly the third game might actually not kill off Aerith.)