Thursday, August 14, 2025

The complex nature of video game ports/re-releases/remasters

Sometimes video game developers/publishers will take a very popular game of theirs that was released many years prior, and re-release it, often for a next-gen platform (especially if talking about consoles).

Sometimes the game will be completely identical to the original, simply ported to the newer hardware. This can be particularly relevant if a newer version of a console isn't compatible with the previous versions, allowing people who own the newer console but have never owned the older one to experience the game. (On the PC side this can be the case with very old games that are difficult if not impossible to run on modern Windows, at least without emulation, and thus probably not with a legally purchased copy.)

Other times the developers will also take the opportunity to enhance the game in some manner: improving the graphics and framerate, perhaps remaking the menus, and polishing some details (such as the controls).

Sometimes these re-releases can be absolutely awesome. Other times not so much, and they feel more like cheap cash grabs.

Ironically, there's at least one game that's actually an example of both: The 2013 game The Last of Us.

The game was originally released for the PlayStation 3 in June of 2013, and was an exclusive for that console. Only owners of that particular console could play it.

This was, perhaps, a bit poorly timed, because it was just a few months before the release of the PlayStation 4 (which happened in November of that same year).

However, the developers announced that an enhanced PlayStation 4 version would be made as well, and it was published in July of 2014 under the name "The Last of Us Remastered".

Rather than lazily releasing the exact same game for both platforms, the developers genuinely remastered the PlayStation 4 version with a higher resolution, better graphics, and a higher framerate, and it arguably looked really good on that console.

From the players' point of view this was fantastic: Even people who never owned a PS3 but did buy the PS4 could experience the highly acclaimed game, rather than it being relegated to being an exclusive on an old console (which is way too common). This is, arguably, one of the best re-releases/remasters ever made, not just in terms of the improvements but more importantly in terms of allowing gamers who otherwise wouldn't have had the chance to experience the game.

Well, quite ironically, the developers later decided to make the same game also one of the worst examples of useless or even predatory "re-releases". From one of the most fantastic examples, to one of the worst.

How? By re-releasing a somewhat "enhanced" version exclusively for the PlayStation 5 and Windows in 2022, with the name "The Last of Us Part I". The exact same game, again, with somewhat enhanced graphics for the next generation of consoles and PC.

Ok, but what makes that "one of the worst" examples of bad re-releases? The fact that it was sold at the full price of US$70, even for those who already own the PS4 version.

Mind you: "The Last of Us Remastered" for the PS4 is still perfectly playable on the PS5. It's not like PS5 owners who don't own the PS4 cannot play and thus experience it.

It was not published as some kind of "upgrade pack" for $10, as is somewhat common. It was released as its own separate game for full price, on a platform that's still completely capable of running the PS4 version. And this was, in fact, a common criticism among reviewers (both journalists and players).

Of course this is not even the worst example, just one of the worst. There are other games that could be argued to be even worse, such as the game "Until Dawn", originally for the PS4 and later re-released for the PS5 with somewhat enhanced graphics, at full price, while, once again, the original is still completely playable on the PS5.

Wednesday, August 6, 2025

"Dollars" vs "cents" notation confusion in America

There's a rather infamous recorded phone call, from maybe 20 years or so ago, where a Verizon customer calls customer support to complain that their material advertised a certain cellphone internet connectivity plan as costing ".002 cents per kilobyte", but he was charged 0.002 dollars (i.e. 0.2 cents) per kilobyte.

It's quite clear that the ad meant to say "$0.002 per kilobyte", but whoever had written the ad had instead written ".002c per kilobyte" (or ".002 cents per kilobyte"; I'm not sure, as I have not seen the ad). (It's also evident from the context that the caller knew this but wanted to deliberately challenge Verizon over the mistake in the ad, as false advertising is potentially illegal.)

I was reminded of this when I recently watched a video by someone who, among other things, explains how much money one can get in ad revenue from YouTube videos. He says that his best-earning long-form video has earned him "6.33 dollars per thousand views", while his best-earning shorts video has earned him "about 20 cents per thousand views". Crucially, while saying this he writes these numbers on screen, and what does he write? This:


In other words, he says "twenty cents", but rather than write "$0.20" or, alternatively, "20 c", he writes "0.20 c".

Obviously anybody who understands the basics of arithmetic knows that "0.20 c" is not "20 cents". After all, you can literally read what it says: "zero point two zero cents", which rather obviously is not the same thing as "twenty cents". "0.20 c" is a fraction of a cent, not twenty entire cents (in particular, it's one fifth of a cent). The correct notation would be "$0.20", i.e. a fraction of a dollar (one fifth).
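
If the unit conversion feels slippery, here is a trivial sanity check in Python. It simply re-does the conversions from the examples above (nothing here is specific to the video or to the Verizon ad):

    # Converting cents to dollars just means dividing by 100.
    def cents_to_dollars(cents):
        return cents / 100

    print(cents_to_dollars(20))     # 0.2    -> "20 cents" is $0.20, one fifth of a dollar
    print(cents_to_dollars(0.20))   # 0.002  -> "0.20 cents" is $0.002, one fifth of a cent
    print(cents_to_dollars(0.002))  # 2e-05  -> ".002 cents" is $0.00002, a hundred times
                                    #           less than the $0.002 per kilobyte actually charged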

This confusion seems surprisingly common in the United States in particular, even among people who are otherwise quite smart and should know better. But what causes this?

Lack of education, sure, but what exactly makes them believe this? Why do they believe this rather peculiar thing?

I think that we can get a hint from that phone call to Verizon. During that phone call the customer support person, when explicitly asked, clearly stated that ".002 cents" and ".002 dollars" mean the same thing. When a manager later took over the call, he said the exact same thing.

Part of this confusion seems to indeed be the belief that, for example, "20 cents", "0.20 cents" and "0.20 dollars" all mean the same thing. What I believe is happening is that these people, for some reason, think that these are some kind of alternative notations to express the same thing. They might not be able to explain why there are so many notations to express the same thing, but I would imagine that if asked they would guess that it's just a custom, a tradition, or something like that. After all, there are many other quantities that can be expressed in different ways, yet mean the same thing.

What lends credibility to this hypothesis is that, in that same phone call to Verizon, the customer support person repeatedly says that the plan costs "point zero zero two per kilobyte", without mentioning the unit. Every time she says that, the customer explicitly asks "point zero zero two what?", and she clearly hesitates and then says "cents". Which, of course, is the wrong answer, as it should be "dollars". But she doesn't seem to understand the difference.

What I believe happened there (and is happening with most Americans who have this same confusion) is that they indeed believe that something like "0.002", or ".002", in the context of money, is just a notation for "cents", all by itself. That if you want to write an amount in cents, you use a dot and then the number of cents. For example, if you wanted to write "20 cents", you would write a dot (perhaps preceded by a zero) and then the "20", with "0.20" all by itself meaning "20 cents". And if you wanted to clarify that it indeed is cents, you would just add the "¢" at the end.

They seem to have a fundamental misunderstanding of what the decimal point notation means and signifies, and appear to believe that it's just a special notation to indicate cents (and, thus, that "20 cents" and "0.20 cents" are just two alternative ways to write the same thing.)

Of course the critics are right that this ultimately stems from a lack of education: The education system has not taught these people well enough how the decimal system works and how to use it. Most Americans have learned it properly, but then there are those who have fallen through the cracks and never got a proper grounding in the decimal system and arithmetic in general.

Sunday, August 3, 2025

How the Voldemort vs. Harry final fight should have actually been depicted in the movie

The movie adaptation of the final book in the Harry Potter series, Deathly Hallows: Part 2, makes the final fight between Harry and Voldemort flashy but confusing, leaving the viewers completely unclear about what exactly is happening and why, and does not convey at all the lore in the source material.

The end of the final fight is depicted in the movie as follows:

1) Voldemort and Harry cast some unspecified spells at each other, the duel being pretty much a stalemate.


2) Meanwhile elsewhere Neville kills Nagini, which is the last of Voldemort's horcruxes.


3) Voldemort appears to be greatly weakened by this, so much so that his spell just fizzles out, at the same time as Harry's.

 

4) Voldemort is shown as greatly weakened, but he still casts another unspecified spell, and Harry responds with also an unspecified spell.


5) However, Voldemort's spell quickly fades out, and he looks completely powerless, looking at his Elder Wand with a puzzled or perhaps defeated look, maybe not understanding why it's not working, maybe realizing that it has abandoned him, or maybe just horrified at having just lost all of his powers. Harry's spell also fizzles out; it doesn't touch Voldemort.

6) Harry takes the opportunity to cast a new spell. He doesn't say anything, but from its effect it's clear it's Expelliarmus, the disarming spell.

 7) Voldemort gets disarmed and he looks completely powerless. The Elder Wand flies to Harry.

8) Voldemort starts disintegrating.


So, as depicted in the movie, it looks like Neville destroying Nagini, Voldemort's last horcrux, completely sapped him of all power, and despite making one last but very feeble effort, he gets easily disarmed by Harry and then just disintegrates, all of his power and life force having been destroyed.

In other words, it was, in fact, Neville who killed Voldemort (even if a bit indirectly) by destroying his last source of power, and Harry did nothing but just disarm him right before he disintegrated.

However, that's not at all what happened in the books.

What actually happened in the books is that, while Neville did kill Nagini, making Voldemort completely mortal, that's not what destroyed him. What destroyed him was that he cast the killing curse at Harry, who in turn immediately cast the disarming spell, and because the Elder Wand refused to destroy its own master (who, via a contrived set of circumstances, happened to be Harry Potter), Voldemort's killing curse rebounded off Harry's spell and hit Voldemort himself, killing him.

In other words, Voldemort destroyed himself with his own killing curse spell, by having it reflected back, because the Elder Wand refused to kill Harry (its master at that point).

This isn't conveyed at all in the movie.

One way this could have been depicted better and more clearly in the movie would be, for example, the following:

When Neville destroys Nagini, Voldemort (who isn't at that very moment casting anything) looks shocked and distraught for a few seconds, then his shock turns into anger and extreme rage, and he casts the killing curse at Harry, saying it out loud (for dramatic effect the movie could show this in slow motion or in some similar manner). Harry immediately responds with the disarming spell (also saying it out loud explicitly, to make it clear which spell he is casting).

Maybe after a second or two of the two spell effects colliding with each other, the movie clearly depicts Voldemort's spell rebounding and reflecting off Harry's spell, going back to Voldemort and very visibly hitting him. Voldemort looks at the Elder Wand in dismay, then at Harry, then his expression changes to shock when he realizes and understands, at least at some level, what just happened. He looks again at his wand and shows an expression of despair and rage, but now Harry's new disarming spell knocks it out of his hand, and he starts disintegrating.

Later, in the movie's epilogue, perhaps Harry himself could give a brief explanation of what happened: That the Elder Wand refused to kill its own master, Harry himself, and that Voldemort's killing curse rebounded, killing its caster.

Thursday, July 31, 2025

Matt Parker (inadvertently) proves why algorithmic optimization is important

Many programmers in various fields, oftentimes even quite experienced programmers, have this notion and attitude that optimization is not really all that crucial in most situations. So what if a program takes, say, 2 seconds to run when it could run in 1 second? In 99.99% of cases that doesn't matter. The important thing is that it works and does what it's supposed to do.

Many will often quote Donald Knuth, who in a 1974 article wrote that "premature optimization is the root of all evil", and (completely misunderstanding what he actually meant) interpret that as meaning that one should actively avoid program optimization like the plague, as if it were some kind of disastrously detrimental practice (not that most of them could ever explain why; it just is, because).

Some will also reference some (in)famous cases of absolutely horrendous code in very successful programs and games, probably the most famous being the video game Papers, Please by Lucas Pope, whose source code is apparently so horrendous that it would make any professional programmer puke. Yet the game is enormously popular, which goes to show (at least according to these people) that the actual quality of the code doesn't matter; what matters is that it works. Who cares if the source code looks like absolute garbage and the game takes 5% of resources when it could take 2%? If it works, don't fix it! The game is great regardless.

Well, I would like to present a counter-argument. And this counter-example comes from the YouTuber and maths popularizer Matt Parker, although inadvertently so.

For one of his videos, related to the game Wordle (where you have to guess a 5-letter word by entering guesses, with the game marking correct letters with colors), he wanted to find out whether there are any groups of five 5-letter words that use only unique letters, in other words 25 different letters in total.

To do this he wrote some absolutely horrendous Python code that read a comprehensive word list file and tried to find a combination of five such words.

It took his program an entire month to run!

Any experienced programmer, especially those with experience in such algorithms, should have alarm bells ringing loudly at this point, as this sounds like something that could quite trivially be done in a few seconds, probably even in under a second. After all, the problem's constraints are laughably small: only five-letter words matter (every other word can be discarded), and even the largest English dictionaries should have just a few thousand of them; of those, only the words with five unique letters need to be kept (every other 5-letter word can be discarded from the search); and what remains is finding combinations of five such words that share no common letters.

And, indeed, the actual programmers among his viewers immediately took the challenge and quite quickly wrote programs in various languages that solved the problem in a fraction of a second.
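
To give an idea of what such a program can look like, here is a minimal sketch of the kind of approach many of them used. This is not Matt Parker's code, and the word list filename is just a placeholder: represent each candidate word as a 26-bit mask of its letters, collapse anagrams together, and do a simple recursive search that only combines words whose masks don't overlap.

    # Each 5-letter word with 5 distinct letters becomes a 26-bit mask;
    # five words form a valid group exactly when their masks don't overlap.
    def load_masks(path="words_alpha.txt"):  # placeholder word list file
        masks = {}
        with open(path) as f:
            for word in (line.strip().lower() for line in f):
                if len(word) == 5 and word.isalpha() and len(set(word)) == 5:
                    mask = 0
                    for ch in word:
                        mask |= 1 << (ord(ch) - ord("a"))
                    masks.setdefault(mask, word)  # anagrams collapse into one entry
        return sorted(masks.items())

    def find_groups(entries, chosen=(), used=0, start=0):
        if len(chosen) == 5:
            yield chosen
            return
        for i in range(start, len(entries)):
            mask, word = entries[i]
            if mask & used:
                continue  # shares a letter with an already chosen word
            yield from find_groups(entries, chosen + (word,), used | mask, i + 1)

    if __name__ == "__main__":
        for group in find_groups(load_masks()):
            print(" ".join(group))

Even this plain, unoptimized recursion is far from the cleverest solutions viewers came up with, but it illustrates the point: filtering the word list first and pruning overlapping letter sets early cuts the search down by orders of magnitude compared to blindly testing combinations of raw strings.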

So yes, indeed: His "I don't really care about optimization as long as it does what it's supposed to do" solution took one entire month to calculate something that could be solved in a fraction of a second. Even with an extremely sloppily written program using a slow language it should only take a few seconds.

This is not a question of "who cares if the program takes 10 seconds to calculate something that could be calculated in 5 seconds?"

This is a question of "calculating in one month something that could be calculated in less than a second."

Would you be willing to wait for one entire month for something that could be calculated in less than a second? Would you be using the "as long as it works" attitude in this case?

It's the perfect example of why algorithmic optimization can actually matter, even in unimportant personal hobby projects. It's the perfect example of something where "wasting" even a couple of days thinking about and optimizing the code would have saved him a month of waiting. It would have been actually useful in practice.

(And this is, in fact, what Donald Knuth meant in his paper. His unfortunate wording is being constantly misconstrued, especially since it's constantly being quote-mined and taken out of its context.)

Thursday, July 24, 2025

"n% faster/slower" is misleading

Suppose you wanted to promote an upgrade from an RTX 3080 card to the RTX 5080. To do this you could say:

"According to the PassMark G3D score, the RTX 5080 is 44% faster than the RTX 3080."

However, suppose that instead you wanted to disincentivize the upgrade. In that case you could say:

"According to the PassMark G3D score, the RTX 3080 is only 30.6% slower than the RTX 5080."

Well, that can't be right, can it? At least one of those numbers must be incorrect, right?

Except that both sentences are correct and accurate!

And therein lies the ambiguity and confusion between "n% faster" and "n% slower". The problem is the direction in which we are comparing, in other words, the direction in which we calculate the ratio between the two scores.

The RTX 3080 has a G3D score of 25130.

The RTX 5080 has a G3D score of 36217.

If we are comparing how much faster the latter is to the former, in other words, how much larger the latter score is than the former score, we do it like:

36217 / 25130 = 1.44118  →  44.1 % more (than 1.0)

However, if we are comparing how much slower the former is than the latter, we would do it like:

25130 / 36217 = 0.693873  →  30.6 % less (than 1.0)

So both statements are actually correct, even though they show completely different percentages.

The fundamental problem is that this kind of comparison mixes ratios with subtraction, which leads to different results depending on the direction in which we make the comparison. When only one of the ratios is presented (as is most usual), this can skew the perception of how large the performance improvement actually is.

A more unambiguous and accurate comparison would be to simply give the factor. In other words:

"According to the G3D score, the speed factor between the two cards is 1.44."

However, this is a bit confusing and not very practical (and could also be incorrectly used in comparisons), so an even better comparison between the two would be to just use example frame rates. For example:

"A game that runs at 60 FPS on the RTX 3080 will run at about 86 FPS on the RTX 5080."

This doesn't suffer from the problem of which way you are doing the comparison because the numbers don't change if you do the comparison in the other direction:

"A game that runs at 86 FPS on the RTX 5080 will run at about 60 FPS on the RTX 3080." 

Sunday, July 20, 2025

What the 2008 game Spore should have been like

Like so many other examples, the 2008 video game "Spore" generated quite some hype prior to its launch, but ended up with a rather lukewarm reception in the end. Unsurprisingly, the end product was nowhere near as exciting and expansive as the prelaunch marketing hype made you believe.

It was supposed to be one of those "everything" games: It would simulate the development of life from unicellular organisms to galaxy-spanning civilizations, and everything in between, giving players complete freedom over how their species would evolve along the way, building entire planetary and galactic civilizations!

The hype was greatly enhanced by the creature editor from the game being published as a standalone application prior to launch, giving a picture of what would be possible within the game. And, indeed, by 2008 standards the creature editor was quite ahead of its time, and literally millions of players made their own creations, some of them absolutely astonishing and awesome, and likely better than even the game developers themselves could have imagined. (For example, people were creating full-sized Tyrannosaurus rex-like fully animated creatures, which seemed impossible when you first started using the editor, but players found tricks and ways around the limitations to create absolutely stunning results.)

Unsurprisingly, the game proper didn't live up to the hype at all.

Instead of an "everything" game, a "life simulator" encompassing all stages of life from unicellular organisms to galaxy-spanning civilizations, it was essentially just a collection of mini-games that you had to play completely linearly in sequence (with no other options!) until you got to the actual "meat" of the game, the most well-developed part of it, ie. the final "space stage", which is essentially a space civilization simulator.

The game kind of delivered on the "simulate life from the very beginning" aspect, but pretty much in name only. It turns out that the game consists of five "stages":

  1. Cell stage, where you control a unicellular creature trying to feed, survive and grow.
  2. Creature stage, where you jump to a multi-cellular creature and the creature-editing possibilities start kicking in.
  3. Tribal stage, at the beginning of which the final form of the creature is finalized and locked, and which is a very small-scale "tribal warfare" simulator of sorts.
  4. Civilization stage, which now has turned into a somewhat simplistic, well, "Civilization" clone, where you manage cities and their interactions with other cities (trading, wars, etc.)
  5. And finally the actual game proper: The space stage, where you'll be spending 90+% of your playthrough time. This is essentially a galactic civilization simulator, and by far the most developed and most feature-rich part of the game.

The major problem with all of this is that every single stage is completely independent of every previous stage. Indeed, it literally doesn't matter what you do in any of the previous stages: It has pretty much no effect on the next stage. The only major lasting effect is a purely cosmetic one: The creature design you created at the transition between the creature and tribal stages will be the design shown during the rest of the game. And that's it. That's pretty much the only thing where one stage affects the next ones. And it is indeed 100% cosmetic (i.e. your creature design doesn't affect, for example, how strong or aggressive the creatures are).

The other major problem is that the first four stages are relatively short, they have to be played in linear order, and they are pretty much completely inconsequential. While each stage is longer than the previous one, the first four are still quite short (you could probably play through all four of them in an hour or two at most, even if you aren't outright speedrunning the game).

In other words, Spore is essentially a galactic civilization simulator with some mandatory mini-games slapped at the start. In no way does it live up to the "everything" hype it was marketed as.

Ironically, the developers originally planned for the game to have even more stages than those five (including an "aquatic stage", which I assume would have been between the cell and creature stages, as well as a "city stage" which would have been between the tribal and civilization stages.)

How I think it should have been done instead:

Rather than have a mandatory and strictly linear progression between the stages (which is a horrible idea), start directly at the galactic simulator stage.

In this stage there could be hundreds or even thousands of planets with life forms at different stages (including those that were planned but not implemented). The player could "zoom in" into any of these planets and observe what's happening there and even start playing the current stage on that planet, in order to affect and speed up the advancement of that particular civilization, and create all kinds of different-looking creatures on different planets.

In fact, the player could "seed" a suitable planet by adding single-celled organisms there, which would start the "cell stage" on that planet (which the player could play or just allow to be automatically simulated on its own). If the planet isn't suitable, the player could have one of his existing civilizations terraform it.

The stages themselves should have been developed more, made longer and more engaging and fun, which would entice the player to play them again and again, on different planets.

Moreover, and more importantly: Every stage should have strong effects on the next stages on that particular planet: The choices that the player makes in one stage should have an effect on the next stages. For example certain choices could make the final civilization very intelligent and peaceful, and very good at trading. Other choices made during the different earlier stages could make the civilization very aggressive and prone to conquer other civilizations and go to war with them. And myriads of other choices. (These traits shouldn't be too random and unpredictable: The player should be allowed to make conscious choices about which direction the species goes towards. There could be, for example, trait percentages or progress bars, and every player choice will display how much it affects each trait.)

That would actually make the "mini-games" meaningful and impactful: You shape a particular civilization by how you play those mini-games! The choices you make in the earlier stages have an actual impact and strongly shape what the final civilization will be like, what it's good at, and how it behaves. 

Monday, July 14, 2025

Would The Truman Show have been better as a mystery thriller?

The 1998 film The Truman Show is considered one of the best movies ever made (or, if not in the top 100, then at least somewhere in the top quarter of all movies, depending on who you ask).

The film takes the approach where its setting is made clear to the viewers from the very beginning, from the very first shots. In fact, it's outright explained to the viewer in great detail, so there is absolutely no ambiguity or lack of clarity. The viewer knows exactly what the situation is and what's happening, and we are just witnessing Truman himself slowly realizing that something is not as it seems. There isn't much suspense or thrill to the movie because of this, and it's more of a comedy-drama.

The environment in the movie is also quite deliberately exaggerated (because there's no need to hide anything from the viewers, or try to mislead them in any way). The town doesn't even look like a real town; it looks more like an artificial amusement-park recreation of a "town". And that's, of course, completely on purpose: This isn't even supposed to look like an absolutely realistic place. (Of course Truman himself doesn't know that, which is the core point of the movie.) In other words, the movie deliberately veers more towards the amusing comedy side than towards the realistic side. The viewer's interest is drawn to how Truman himself reacts when he slowly starts suspecting and realizing that everything is not right or normal, and how the actors and show producers try to deal with it.

But one could ask: Would it have made the movie worse, the same, or perhaps even better, if it had been a mystery thriller instead? Or would that have been too cliché of a movie plot?

In other words:

  • The viewer is kept in the dark about the true reality of things, and knows exactly as much as Truman himself does. The true nature of what's happening is only revealed as Truman himself discovers it, with the big reveal only happening at the very end of the movie. Before that, it's very mysterious both to Truman and to the viewer.
  • The town is much more realistic and all the actors behave much more realistically, so as to not raise suspicion in the viewer. Nothing seems "off" or strange at first, and it just looks like your regular small American town with regular people, somewhere on some island. It could still be a very bright, sunny, happy town, but depicted much more realistically.
  • The hints that something may not be as it seems start much more slowly and are much subtler. (For example, no spotlight falling from the sky at the beginning of the movie. The first signs that something might be off come later and are significantly subtler than that.)
  • For the first two thirds of the movie the situation is kept very ambiguous: Is this a movie depicting a man, ie. Truman, losing his mind and becoming paranoid and crazy, or is something else going on? The movie could have been made so that it's very ambiguous if the things Truman is finding out are just the result of his paranoia, or something else. The other people in the movie are constantly "gaslighting" both Truman and the viewer in a very plausible and believable way that he's just imagining things.
  • The reveal in the end could be a complete plot twist. The movie could have been written and made in such a way that even if the viewer started suspecting that Truman isn't actually going crazy and the things he's noticing are actually a sign of something not being as it seems, it's still hard to deduce what the actual situation is, until the reveal at the very end.

Would the movie have been better this way, or would it just have been way too "cliché" of a plot? Who knows.