Thursday, August 14, 2025

The complex nature of video game ports/rereleases/remasters

Sometimes video game developers/publishers will take a very popular game of theirs that was released many years prior and re-release it, often for a next-gen platform (especially in the case of consoles).

Sometimes the game will be completely identical to the original, simply ported to the newer hardware. This can be particularly relevant if a newer version of a console isn't compatible with the previous versions, allowing people who own the newer console but have never owned the older one to experience the game. (On the PC side this can be the case with very old games that are difficult if not impossible to run on modern Windows without emulation, and thus probably not with a legally purchased copy.)

Other times the developers will also take the opportunity to enhance the game in some manner, improving the graphics and framerate, perhaps remaking the menus, and perhaps polishing some details (such as the controls).

Sometimes these re-releases can be absolutely awesome. Other times not so much, and they feel more like cheap cash grabs.

Ironically, there's at least one game that's actually an example of both: The 2013 game The Last of Us.

The game was originally released for the PlayStation 3 in June of 2013, and was an exclusive for that console. Only owners of that particular console could play it.

This was, perhaps, a bit poorly timed because it was just a few months before the release of the PlayStation 4 (which happened in November of that same year).

However, the developers announced that an enhanced PlayStation 4 version would be made as well, and it was published in July of 2014 under the name "The Last of Us Remastered".

Rather than just taking the lazy route of releasing the exact same game for both platforms, the developers genuinely remastered the PlayStation 4 version with a higher resolution, better graphics, and a higher framerate, and it arguably looked really good on that console.

From the players' point of view this was fantastic: Even people who never owned a PS3 but did buy the PS4 could experience the highly acclaimed game, rather than it being relegated to being an exclusive on an old console (which is way too common). This is, arguably, one of the best re-releases/remasters ever made, not just in terms of the improvements but, more importantly, in terms of letting gamers experience the game who wouldn't have been able to otherwise.

Well, quite ironically, the developers later turned the very same game into one of the worst examples of a useless or even predatory "re-release". From one of the most fantastic examples to one of the worst.

How? By re-releasing a somewhat "enhanced" version exclusively for the PlayStation 5 and Windows in 2022, with the name "The Last of Us Part I". The exact same game, again, with somewhat enhanced graphics for the next generation of consoles and PC.

Ok, but what makes that "one of the worst" examples of bad re-releases? The fact that it was sold at the full price of US$70, even to those who already owned the PS4 version.

Mind you: "The Last of Us Remastered" for the PS4 is still perfectly playable on the PS5. It's not as if PS5 owners who never owned a PS4 are unable to play and thus experience it.

It was not published as some kind of "upgrade pack" for $10, as is somewhat common. It was released as its own separate game, for full price, on a platform that's still completely capable of running the PS4 version. And this was, in fact, a common criticism among reviewers (both journalists and players).

Of course this is not even the worst example, just one of the worst. There are other games that could be argued to be even worse, such as "Until Dawn", originally released for the PS4 and later re-released for the PS5 with somewhat enhanced graphics, at full price, while, once again, the original is still completely playable on the PS5.

Wednesday, August 6, 2025

"Dollars" vs "cents" notation confusion in America

There's a rather infamous recorded phone call, from maybe 20 or so years ago, where a Verizon customer calls customer support to complain that their material advertised a certain cellphone internet connectivity plan as costing ".002 cents per kilobyte", but he was charged 0.002 dollars (ie. 0.2 cents) per kilobyte.

It's quite clear that the ad meant to say "$0.002 per kilobyte", but whoever had written the ad had instead written ".002c per kilobyte" (or ".002 cents per kilobyte", I'm not sure as I have not seen the ad). (It's also evident from the context that the caller knew this, but wanted to deliberately challenge Verizon on their mistake in the ad, as false advertising is potentially illegal.)

I was reminded of this when I recently watched a video by someone who, among other things, explained how much money one can get in ad revenue from YouTube videos. He explains that his best-earning long-form video has earned him "6.33 dollars per thousand views", while his best-earning shorts video has earned him "about 20 cents per thousand views". Crucially, while saying this he is writing these numbers, and what does he write? This:


In other words, he says "twenty cents", but rather than write "$0.20" or, alternatively, "20 c", he writes "0.20 c".

Obviously anybody who understands the basics of arithmetic knows that "0.20 c" is not "20 cents". After all, you can literally read what it says: "zero point two zero cents", which rather obviously is not the same thing as "twenty cents". It should be obvious to anybody that "0.20 c" is a fraction of a cent, not twenty entire cents (in particular, it's one fifth of a cent). The correct notation would be "$0.20", ie. a fraction of a dollar (one fifth).
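
To make that factor of 100 concrete, here is a tiny sketch (my own illustration, not anything from the ad or the video) that simply converts between the two units:

```python
# Hypothetical helper functions, just to illustrate the factor of 100
# between dollars and cents.

def dollars_to_cents(d):
    return d * 100

def cents_to_dollars(c):
    return c / 100

print(dollars_to_cents(0.20))   # 20.0   -> $0.20 really is 20 cents
print(cents_to_dollars(0.20))   # 0.002  -> "0.20 cents" is a fifth of a cent, ie. $0.002
print(cents_to_dollars(0.002))  # 2e-05  -> ".002 cents" would be $0.00002, not $0.002
```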

This confusion seems surprisingly common in the United States in particular, even among people who are otherwise quite smart and should know better. But what causes this?

Lack of education, sure, but what exactly makes them believe this? Why do they believe this rather peculiar thing?

I think that we can get a hint from that phone call to Verizon. During that call the customer support person, when asked directly, very clearly stated that ".002 cents" and ".002 dollars" mean the same thing. When the manager later took over the call, he said the exact same thing.

Part of this confusion seems to indeed be the belief that, for example, "20 cents", "0.20 cents" and "0.20 dollars" all mean the same thing. What I believe is happening is that these people, for some reason, think that these are some kind of alternative notations to express the same thing. They might not be able to explain why there are so many notations to express the same thing, but I would imagine that if asked they would guess that it's just a custom, a tradition, or something like that. After all, there are many other quantities that can be expressed in different ways, yet mean the same thing.

This hypothesis gains further credibility from the fact that, in that same phone call to Verizon, the customer support person repeatedly says that the plan costs "point zero zero two per kilobyte", without mentioning the unit. Every time she says that, the customer explicitly asks "point zero zero two what?", and she clearly hesitates before saying "cents". Which, of course, is the wrong answer, as it should be "dollars". But she doesn't seem to understand the difference.

What I believe happened there (and is happening with most Americans who have this same confusion) is that they indeed believe that something like "0.002", or ".002", in the context of money, is just a notation for "cents", all by itself. That if you want to write an amount of "cents", you use a dot and then the cents amount. Like, for example, if you wanted to write "20 cents", you would write a dot (perhaps preceded by a zero) and then the "20", with "0.20" all by itself meaning "20 cents". And if you wanted to clarify that it indeed is cents, you just add the "¢" at the end.

They seem to have a fundamental misunderstanding of what the decimal point notation means and signifies, and appear to believe that it's just a special notation to indicate cents (and, thus, that "20 cents" and "0.20 cents" are just two alternative ways to write the same thing.)

Of course the critics are right that this ultimately stems from a lack of education: The education system has not taught people the decimal system, and how to use it, well enough. Most Americans have learned it properly, but then there are those who have fallen through the cracks and never received a proper education in the decimal system and arithmetic in general.

Sunday, August 3, 2025

How the Voldemort vs. Harry final fight should have actually been depicted in the movie

The movie adaptation of the final book in the Harry Potter series, Deathly Hallows: Part 2, makes the final fight between Harry and Voldemort flashy but confusing, leaving the viewers completely unclear about what exactly is happening and why, and failing to convey the lore in the source material.

How the end to the final fight is depicted in the movie is as follows:

1) Voldemort and Harry cast some unspecified spells at each other, being pretty much a stalemate.


2) Meanwhile elsewhere Neville kills Nagini, which is the last of Voldemort's horcruxes.


3) Voldemort appears to be greatly weakened by this, so much so that his spell just fizzles out, at the same time as Harry's.

 

4) Voldemort is shown as greatly weakened, but he still casts another unspecified spell, and Harry responds with also an unspecified spell.


5) However, Voldemort's spell quickly fades out, and he looks completely powerless, looking at his Elder Wand with a puzzled or perhaps defeated look, maybe not understanding why it's not working, maybe realizing that it has abandoned him, or maybe just horrified at having just lost all of his powers. Harry's spell also fizzles out; it doesn't touch Voldemort.

6) Harry takes the opportunity to cast a new spell. He doesn't say anything, but from its effect it's clear that it's Expelliarmus, the disarming spell.

7) Voldemort gets disarmed and is left looking completely powerless. The Elder Wand flies to Harry.

8) Voldemort starts disintegrating.


So, as depicted in the movie, it looks like Neville destroying Nagini, Voldemort's last horcrux, completely sapped him of all power, and despite making a last but very feeble effort, he gets easily disarmed by Harry and then just disintegrates, all of his power and life force having been destroyed.

In other words, it was, in fact, Neville who killed Voldemort (even if a bit indirectly) by destroying his last source of power, and Harry did nothing but just disarm him right before he disintegrated.

However, that's not at all what happened in the books.

What actually happened in the books is that, while Neville did kill Nagini, making Voldemort completely mortal, that's not what destroyed him. What destroyed him was that he cast the killing curse at Harry, who in turn immediately cast the disarming spell, and because the Elder Wand refused to destroy its own master (who, via a contrived set of circumstances, happened to be Harry Potter), Voldemort's killing curse rebounded off Harry's spell and hit Voldemort himself, killing him.

In other words, Voldemort destroyed himself with his own killing curse spell, by having it reflected back, because the Elder Wand refused to kill Harry (its master at that point).

This isn't conveyed at all in the movie.

One way this could have been depicted better and more clearly in the movie would be, for example:

When Neville destroys Nagini, Voldemort (who isn't at that very moment casting anything) looks shocked and distraught for a few seconds, then his shock turns into anger and extreme rage, and he casts the killing curse at Harry, saying it out loud (for dramatic effect the movie could show this in slow motion or in another similar manner), and Harry immediately responds with the disarming spell (also spelling it out explicitly, to make it clear which spell he is casting.)

Maybe after a second or two of the two spell effects colliding with each other, the movie clearly depicts Voldemort's spell rebounding and reflecting off Harry's spell, going back to Voldemort and very visibly hitting him. Voldemort looks at the Elder Wand in dismay, then at Harry, then his expression changes to shock when he realizes and understands, at least at some level, what just happened. He looks again at his wand and shows an expression of despair and rage, but now Harry's new disarming spell knocks it out of his hand, and he starts disintegrating.

Later, in the movie's epilogue, perhaps Harry himself could give a brief explanation of what happened: That the Elder Wand refused to kill its own master, Harry himself, and that Voldemort's killing curse rebounded and killed its caster.

Thursday, July 31, 2025

Matt Parker (inadvertently) proves why algorithmic optimization is important

Many programmers in various fields, oftentimes even quite experienced programmers, have this notion and attitude that optimization is not really all that crucial in most situations. So what if a program takes, say, 2 seconds to run when it could run in 1 second? In 99.99% of cases that doesn't matter. The important thing is that it works and does what it's supposed to do.

Many will often quote Donald Knuth, who in a 1974 article wrote that "premature optimization is the root of all evil" (completely misunderstanding what he actually meant), and interpret that as meaning that one should actually avoid program optimization like the plague, as if it were some kind of disastrously detrimental practice (not that most of them could ever explain why. It just is, because.)

Some will also reference some (in)famous cases of absolutely horrendous code in very successful programs and games, the most famous of these probably being the video game Papers, Please by Lucas Pope, whose source code is apparently so horrendous that it would make any professional programmer puke. Yet the game is enormously popular, which goes to show (at least according to these people) that the actual quality of the code doesn't matter; what matters is that it works. Who cares if the source code looks like absolute garbage and the game uses 5% of system resources when it could use 2%? If it works, don't fix it! The game is great regardless.

Well, I would like to present a counter-example. And this counter-example comes from the youtuber and maths popularizer Matt Parker, although inadvertently so.

For one of his videos, related to the game Wordle (where you have to guess a 5-letter word by entering guesses, with the game marking correct letters with colors), he wanted to find out if there are any groups of five 5-letter words that use only unique letters, in other words 25 different letters in total.

To do this he wrote some absolutely horrendous Python code that read a comprehensive word list file and tried to find a combination of five such words.

It took his program an entire month to run!

Any experienced programmer, especially those who have experience with such algorithms, should have alarm bells ringing loudly at this point, as this sounds like something that could quite trivially be done in a few seconds, probably even in under a second. After all, the problem is extremely restricted in its constraints, which are laughably small: Only five-letter words matter (every other word can be discarded), and even in the largest English dictionaries there should be just a few thousand of them; words with repeated letters can also be discarded, which shrinks the list to a small fraction; and all that remains is finding combinations of five such words that share no common letters.

And, indeed, the actual programmers among his viewers immediately took up the challenge and quite quickly wrote programs in various languages that solved the problem in a fraction of a second.
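
For the curious, here is a minimal sketch of the usual approach (my own illustration, not Matt Parker's code nor any particular viewer's solution; it assumes a plain word list file called "words.txt" with one word per line): represent each candidate word as a 26-bit mask of its letters, throw away everything that isn't a 5-letter word with 5 distinct letters, and prune the search the moment two chosen words share a letter. Even an unoptimized Python version of this idea runs enormously faster than a month, and the further-optimized, compiled versions of the same idea are what brought the time down to a fraction of a second.

```python
# A minimal bitmask sketch of the "five words, 25 letters" search.
# Assumes a word list file "words.txt" (hypothetical name), one word per line.

def word_mask(word):
    """26-bit mask of the word's letters, or None if any letter repeats."""
    mask = 0
    for ch in word:
        bit = 1 << (ord(ch) - ord('a'))
        if mask & bit:
            return None      # repeated letter: the word can never be part of a solution
        mask |= bit
    return mask

def find_groups(path="words.txt"):
    # Keep only 5-letter words with 5 distinct letters; anagrams collapse
    # to the same mask, so only one representative of each is kept.
    masks = {}
    with open(path) as f:
        for line in f:
            w = line.strip().lower()
            if len(w) == 5 and w.isalpha():
                m = word_mask(w)
                if m is not None:
                    masks.setdefault(m, w)

    mask_list = sorted(masks)
    results = []

    def search(start, used, chosen):
        if len(chosen) == 5:
            results.append([masks[m] for m in chosen])
            return
        for i in range(start, len(mask_list)):
            m = mask_list[i]
            if not (used & m):              # no shared letters so far: keep going
                search(i + 1, used | m, chosen + [m])

    search(0, 0, [])
    return results

if __name__ == "__main__":
    for group in find_groups():
        print(" ".join(group))
```

The point isn't this particular sketch, but the fact that filtering the word list first and pruning the search early shrinks the work by many orders of magnitude compared to blindly trying combinations of raw words.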

So yes, indeed: His "I don't really care about optimization as long as it does what it's supposed to do" solution took one entire month to calculate something that could be solved in a fraction of a second. Even with an extremely sloppily written program using a slow language it should only take a few seconds.

This is not a question of "who cares if the program takes 10 seconds to calculate something that could be calculated in 5 seconds?"

This is a question of "calculating in one month something that could be calculated in less than a second."

Would you be willing to wait for one entire month for something that could be calculated in less than a second? Would you be using the "as long as it works" attitude in this case?

It's the perfect example of why algorithmic optimization can actually matter, even in unimportant personal hobby projects. It's the perfect example of something where "wasting" even a couple of days thinking about and optimizing the code would have saved him a month of waiting. It would have been actually useful in practice.

(And this is, in fact, what Donald Knuth meant in his paper. His unfortunate wording is being constantly misconstrued, especially since it's constantly being quote-mined and taken out of its context.)

Thursday, July 24, 2025

"n% faster/slower" is misleading

Suppose you wanted to promote an upgrade from an RTX 3080 card to the RTX 5080. To do this you could say:

"According to the PassMark G3D score, the RTX 5080 is 44% faster than the RTX 3080."

However, suppose that instead you wanted to disincentivize the upgrade. In that case you could say:

"According to the PassMark G3D score, the RTX 3080 is only 30.6% slower than the RTX 5080."

Well, that can't be right, can it? At least one of those numbers must be incorrect, mustn't it?

Except that both sentences are correct and accurate!

And that's the ambiguity and confusion between "n% faster" and "n% slower". The problem lies in the direction of the comparison, in other words, in which direction we calculate the ratio between the two scores.

The RTX 3080 has a G3D score of 25130.

The RTX 5080 has a G3D score of 36217.

If we are comparing how much faster the latter is to the former, in other words, how much larger the latter score is than the former score, we do it like:

36217 / 25130 = 1.44118  →  44.1 % more (than 1.0)

However, if we are comparing how much slower the former is than the latter, we would do it like:

25130 / 36217 = 0.693873  →  30.6 % less (than 1.0)

So both statements are actually correct, even though they show completely different percentages.

The fundamental problem is that this kind of comparison mixes ratios with subtractions, which leads to uneven results depending on which direction the comparison is made in. When only one of the ratios is presented (as is most usual), this can skew the perception of how large the performance improvement actually is.
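
As a small illustration (my own, using the two scores quoted above) of how the same ratio produces both percentages depending on which way around it is computed:

```python
# Hypothetical little script; the scores are the PassMark G3D numbers quoted above.

a = 25130   # RTX 3080
b = 36217   # RTX 5080

r = b / a                      # speed factor, about 1.441

faster = (r - 1) * 100         # "the 5080 is n% faster than the 3080"  -> ~44.1
slower = (1 - 1 / r) * 100     # "the 3080 is n% slower than the 5080"  -> ~30.6

print(f"factor {r:.3f}: {faster:.1f}% faster vs {slower:.1f}% slower")
```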

A more unambiguous and accurate comparison would be to simply give the factor. In other words:

"According to the G3D score, the speed factor between the two cards is 1.44."

However, this is a bit confusing and not very practical (and could also be incorrectly used in comparisons), so an even better comparison between the two would be to just use example frame rates. For example:

"A game that runs at 60 FPS on the RTX 3080 will run at about 86 FPS on the RTX 5080."

This doesn't suffer from the problem of which way you are doing the comparison because the numbers don't change if you do the comparison in the other direction:

"A game that runs at 86 FPS on the RTX 5080 will run at about 60 FPS on the RTX 3080." 

Sunday, July 20, 2025

What the 2008 game Spore should have been like

Like with so many other examples, the 2008 video game "Spore" generated quite a lot of hype prior to its launch, but ended up with a rather lukewarm reception in the end. Unsurprisingly, the end product was nowhere near as exciting and as expansive as the prelaunch marketing hype made you believe.

It was supposed to be one of those "everything" games: It would simulate the development of life from unicellular organisms to galaxy-spanning civilizations, and everything in between, giving players complete freedom over how their species would evolve and letting them build entire planetary and galactic civilizations!

The hype was greatly enhanced by the game's creature editor being published as a standalone application prior to launch, giving a picture of what would be possible within the game. And, indeed, by 2008 standards the creature editor was quite ahead of its time, and literally millions of players made their own creations, some of them absolutely astonishing and awesome, and likely better than even the game developers themselves could have imagined. (For example, people were creating full-sized, fully animated Tyrannosaurus rex-like creatures, which seemed impossible when you first started using the editor, but players found tricks and ways around the limitations to create absolutely stunning results.)

Unsurprisingly, the game proper didn't live up to the hype at all.

Instead of an "everything" game, a "life simulator" encompassing all stages of life from unicellular organisms to galaxy-spanning civilizations, it was essentially just a collection of mini-games that you had to play completely linearly in sequence (with no other options!) until you got to the actual "meat" of the game, the most well-developed part of it, ie. the final "space stage", which is essentially a space civilization simulator.

The game kind of delivered the "simulate life from the very beginning" aspect, but pretty much in name only. Turns out that the game consists of five "stages":

  1. Cell stage, where you control a unicellular creature trying to feed, survive and grow.
  2. Creature stage, where you jump to a multi-cellular creature and the creature-editing possibilities start kicking in.
  3. Tribal stage, at the beginning of which the final form of the creature is finalized and locked, and which is a very small-scale "tribal warfare" simulator of sorts.
  4. Civilization stage, which now has turned into a somewhat simplistic, well, "Civilization" clone, where you manage cities and their interactions with other cities (trading, wars, etc.)
  5. And finally the actual game proper: The space stage, where you'll be spending 90+% of your playthrough time. This is essentially a galactic civilization simulator, and by far the most developed and most feature-rich part of the game.

The major problem with all of this is that every single stage is completely independent of every previous stage. Indeed, it literally doesn't matter what you do in any of the previous stages: It has pretty much no effect on the next stage. The only major lasting effect is a purely cosmetic one: The creature design you created at the transition between the creature and tribal stages will be the design shown during the rest of the game. And that's it. That's pretty much the only thing where one stage affects the next ones. And it is indeed 100% cosmetic (ie. it's not like your creature design affects, for example, how strong or aggressive the creatures are.)

The other major problem is that the first four stages are relatively short, have to be played in linear order, and are pretty much completely inconsequential. While each stage is longer than the previous one, the first four are still quite short (you could probably play through all four of them in an hour or two at most, even if you aren't outright speedrunning the game.)

In other words, Spore is essentially a galactic civilization simulator with some mandatory mini-games slapped at the start. In no way does it live up to the "everything" hype it was marketed as.

Ironically, the developers originally planned for the game to have even more stages than those five (including an "aquatic stage", which I assume would have been between the cell and creature stages, as well as a "city stage" which would have been between the tribal and civilization stages.)

How I think it should have been done instead:

Rather than have a mandatory and strictly linear progression between the stages (which is a horrible idea), start directly at the galactic simulator stage.

In this stage there could be hundreds or even thousands of planets with life forms at different stages (including those that were planned but not implemented). The player could "zoom in" into any of these planets and observe what's happening there and even start playing the current stage on that planet, in order to affect and speed up the advancement of that particular civilization, and create all kinds of different-looking creatures on different planets.

In fact, the player could "seed" a suitable planet by adding single-celled organisms there, which would start the "cell stage" on that planet (which the player could play, or just allow to be automatically simulated on its own). If the planet isn't suitable, the player could have one of his existing civilizations terraform it.

The stages themselves should have been developed more, made longer and more engaging and fun, which would entice the player to play them again and again, on different planets.

Moreover, and more importantly: Every stage should have strong effects on the next stages on that particular planet: The choices that the player makes in one stage should have an effect on the next stages. For example certain choices could make the final civilization very intelligent and peaceful, and very good at trading. Other choices made during the different earlier stages could make the civilization very aggressive and prone to conquer other civilizations and go to war with them. And myriads of other choices. (These traits shouldn't be too random and unpredictable: The player should be allowed to make conscious choices about which direction the species goes towards. There could be, for example, trait percentages or progress bars, and every player choice will display how much it affects each trait.)

That would actually make the "mini-games" meaningful and impactful: You shape a particular civilization by how you play those mini-games! The choices you make in the earlier stages have an actual impact and strongly shape what the final civilization will be like, what it's good at, and how it behaves. 

Monday, July 14, 2025

Would The Truman Show have been better as a mystery thriller?

The 1998 film The Truman Show is considered one of the best movies ever made (or, if not in the top 100, then at least somewhere in the top quarter of all movies, depending on who you ask).

The film takes the approach where its setting is made clear to the viewers from the very beginning, from the very first shots. In fact, it's outright explained to the viewer in great detail, so there is absolutely no ambiguity or lack of clarity. The viewer knows exactly what the situation is and what's happening, and we are just witnessing Truman himself slowly realizing that something is not as it seems. Because of this there isn't much suspense or thrill to the movie, and it's more of a comedy-drama.

The environment in the movie is also quite deliberately exaggerated (because there's no need to hide anything from the viewers, or try to mislead them in any way). The town doesn't even look like a real town; it looks more like an artificial amusement-park recreation of a "town". And that's, of course, completely on purpose: This isn't even supposed to look like an absolutely realistic place. (Of course Truman himself doesn't know that, which is the core point of the movie.) In other words, the movie deliberately veers more towards the amusing comedy side than the realistic side. The viewer's interest is drawn to how Truman himself reacts when he slowly starts suspecting and realizing that everything is not right or normal, and how the actors and show producers try to deal with it.

But one could ask: Would it have made the movie worse, the same, or perhaps even better, if it had been a mystery thriller instead? Or would that have been too cliché of a movie plot?

In other words:

  • The viewer is kept in the dark about the true reality of things, and knows exactly as much as Truman himself does. The true nature of what's happening is only revealed as Truman himself discovers it, with the big reveal only happening at the very end of the movie. Before that, it's very mysterious both to Truman and to the viewer.
  • The town is much more realistic and all the actors behave much more realistically, so as to not raise suspicion in the viewer. Nothing seems "off" or strange at first, and it just looks like your regular small American town with regular people, somewhere on some island. It could still be a very bright and sunny happy town, but much more realistically depicted.
  • The hints that something may not be as it seems start much more slowly and are much subtler. (For example, no spotlight falling from the sky at the beginning of the movie. The first signs that something might be off come later and are significantly subtler than that.)
  • For the first two thirds of the movie the situation is kept very ambiguous: Is this a movie depicting a man, ie. Truman, losing his mind and becoming paranoid and crazy, or is something else going on? The movie could have been made so that it's very ambiguous whether the things Truman is finding out are just the result of his paranoia, or something else. The other people in the movie are constantly and very plausibly "gaslighting" both Truman and the viewer into believing that he's just imagining things.
  • The reveal in the end could be a complete plot twist. The movie could have been written and made in such a way that even if the viewer started suspecting that Truman isn't actually going crazy and the things he's noticing are actually a sign of something not being as it seems, it's still hard to deduce what the actual situation is, until the reveal at the very end.

Would the movie have been better this way, or would it just have been way too "cliché" of a plot? Who knows. 

Thursday, July 3, 2025

Why I don't watch many speedruns nowadays

About twenty years ago I was a huge fan of watching speedruns. At the time, speedruns of Quake, Doom, Half-Life and Half-Life 2 were some of my all-time favorites. And, in fact, they still are (particularly the Quake ones.) Back then I used to watch as many speedruns of as many games as I could, as most of them were really interesting. Before YouTube this was a bit more inconvenient, but especially after YouTube was created and speedrunners and speedrunning sites started uploading there, it was a treat.

Glitch abuse was significantly rarer back then (for the simple reason that speedrunners were yet to discover most of the ones that are known today), but even then I found it fascinating. I oftentimes read with great interest the technical descriptions of how particular glitches worked and how they were executed.

The one thing I loved most about speedruns, particularly those of certain games, was the sheer awesome skill involved. Quake and Half-Life 2 (back then) were particularly stellar examples, with the runner zooming through levels at breakneck speeds, doing maneuvers that felt almost impossible. It was like watching an extremely skilled craftsman perform some complicated task with incredible speed and precision. It was absolutely fascinating.

But then, over the years, slowly but surely things started changing.

The main thing that started changing was that speedrunners became enamored with finding and using glitches to make their runs even faster. In fact, many speedrunners became outright "glitch hunters": They would meticulously research, study and test their favorite speedrunning games in order to see if they could figure out and find glitches that would help in completing the games faster. It became a source of great pride and accomplishment when they could announce yet another glitch that saved time, or a setup to make an existing glitch much easier to perform.

Thus, over the years glitch abuse in speedruns started becoming more and more common.

And the thing is, the domain for glitch hunting started being expanded more and more. Not only were they trying to find glitches that could be exploited from within the gameplay itself, using the in-game mechanics themselves, but they started looking more towards the outside for ways to glitch the game: Rather than keep the glitch abuse restricted purely within the confines of the gameplay proper, they started hunting glitches outside of it: In the game's main menu, the game's save and load mechanics, in the options menu, sometimes even completely outside the game itself (which became particularly common in console game speedrunning, ie. trying to find ways to manipulate the console hardware itself in order to affect the game.)

It was precisely Half-Life 2 speedrunning where I started to grow a dislike for these glitches for the first time. You see, back then speedrunners had found that a particular skip could be performed by quick-saving and quick-loading repeatedly. And not just a few times, but literally hundreds of times! That's right: At one point it became so bad that a Half-Life 2 speedrun could literally spend something like 10 minutes doing nothing but quick-saving and quick-loading hundreds of times in quick succession. (I believe that other techniques have since been found that have obsoleted this particular mechanic. I haven't really checked. Doesn't make much of a difference to my point, though.)

I grew a great distaste for this particular glitch execution because it just stepped so far outside the realm of playing the game itself. It was no longer about showing great skill at playing the game within the confines of the gameplay proper. Instead, it was stepping outside of the gameplay proper and affecting it effectively from the outside (after all, quick-saving and quick-loading are not part of the gameplay proper, not part of playing the game and advancing towards the end goal. They are a meta-feature that is not part of the gameplay itself.)

I endured this for some years, but at some point I just outright stopped watching Half-Life 2 speedruns. They had become nothing but boring glitch-fests with very little of the original charm and awe left in them anymore. Sure, the runner would still fly through stages at impossible speeds, but this would be marred by boring out-of-game glitch abuse. I just lost interest.

While Half-Life 2 was one of the first games where extremely heavy glitch-hunting and glitch abuse, particularly of the "abusing non-gameplay meta-features" kind, happened, it rather obviously was not the only one. The habit spread among speedrunners of most games like wildfire.

One particularly bad example made me outright want to puke: In a speedrun of the game The Talos Principle, the runner at one point would go to the main menu, go to the game options, set the framerate limit to 30 frames per second, return to the game, perform a glitch (that could only be done with a low framerate), and afterwards go back to the options menu and set the framerate back to unlimited. This was so utterly far-removed from gameplay proper, and was just so utterly disgusting, that I just stopped watching the speedrun right then and there.

Of course there are myriads and myriads of other examples which, while not as disgusting as that, just make the speedruns boring. For example I was watching a speedrun of Dark Souls 3, and every few minutes, even several times a minute, the speedrunner would quit to the main menu and immediately load back in. Why? Because loading times did not count towards the total time of the speedrun, and doing that quit&resume would move the player to a particular location within the level.

The thing is, those particular locations were usually just a few seconds of running from where the speedrunner would quit&resume, and while quit&resuming took something like 10-15 seconds, that loading time wasn't counted towards the speedrun's time. Thus, while it made the speedrun overall longer in duration, it saved a couple of seconds each time by the speedrun's official clock. And the speedrunner would do this literally hundreds of times during the run. This was so utterly boring and outright annoying to watch that, once again, I just stopped watching mid-way through.

There are literally dozens and dozens of such examples I could write about, but let me add one more, a very recent one: Very recently, The Legend of Zelda: The Wind Waker speedruns have been completely overhauled by a new glitch that has become possible.

Not "discovered". Not "found a new setup that allows doing it more easily." But outright became possible. And what made it possible? The Switch 2, that's what. You see, the Switch 2 runs the game under a new internal emulator which has this curious feature that if the emulated game crashes, the user is allowed to just keep the emulated game running rather than resetting the emulator. And it so happens that one out-of-bounds glitch in Wind Waker causes the game to crash in the original GameCube, but not in the Switch 2, if you opt to allow the game to keep running after the crash. Turns out that after a minute or two the game somehow "recovers" and starts running again... with the playable character being in a completely different room, allowing for big skips.

That's right: This glitch now abuses emulation to make it possible, and it's only possible on the Switch 2, not on the original console. And this is allowed only because the emulator is an official one by Nintendo (such emulator-only glitch abuse is never allowed when using third-party emulators. But apparently it's somehow different if the emulator is an official one.)

I can't decide if this is less or more disgusting than the Talos Principle glitch abuse. They are closely matched.

Overall, this is just the tip of the iceberg: Glitch abuse has become so utterly prevalent in speedrunning, and such a huge portion of it abuses glitches that effectively affect the gameplay "from the outside", via non-gameplay means, that it has pretty much ruined speedrunning for me. With the exception of just a handful of games (such as Quake, thankfully), long gone are the days when speedruns would just run through the game via sheer skill, without resorting to disgusting outside-of-the-game glitch manipulation.

But what about speedruns that only use "within-the-game" glitches and at no point venture out of gameplay proper? In other words, they never use saving and loading, never go to the game's menu, never affect the gameplay proper in any way from "outside" of it? Are those A-ok in my book?

Well, for the longest time I didn't mind those glitches and those speedruns. After all, they were effectively "legit" in my book, by my own standards. What's there to complain?

Yet, in later years I have grown tired of those too. The more a speedrun glitches the game, even if it happens fully from within gameplay proper, the more boring I tend to find it. Out-of-bounds glitches in particular I find boring, particularly those that skip huge chunks of levels. Many of them just bypass what made speedruns such great entertainment in the first place: Seeing an extremely skillful player beat the game with astonishing precision and speed.

I like to use an analogy for this: Suppose you are going to watch a top-tier professional sports event, like a basketball match: You go there in order to witness the absolute best players in the world show their utter skill at playing the sport. You are expecting 2 hours of sheer excitement and wonder. However, suppose that one of the teams finds an obscure loophole in the rules of the game that allows them to effectively cheat a victory for themselves without even playing the game, the other team having no recourse: The first team just declares victory at the start of the game by abusing the loophole, the match ends, and that's it. It's over, everybody go home.

Well, that would be an utter disappointment, and utterly boring. You didn't go there to watch a team abuse a rulebook loophole in order to snatch a quick technical victory without even playing the game. You went there to watch a game! The spectators would be outraged! You would certainly demand your money back!

Well, for me most speedruns that skip major parts of the game are the same: I don't find them interesting in any way. They skip what I was wanting to watch the speedrun for in the first place! They skip the most entertaining part! I didn't "sign up" to watch a player skip the entire game: I did it so that I could see an extremely skilled player play the game, not skip it.

Unfortunately, skips, out-of-bounds glitches and other ways to bypass gameplay proper have become ubiquitous in speedrunning (especially when it comes to 3D games), and only few games have been spared.

That is why I don't really watch much speedrunning anymore. It's just boring. I'm not interested in glitchfests anymore. I would want to see someone play the game skillfully, I'm not interested in seeing someone break the game and skip the majority of it. 

Sunday, June 29, 2025

North Korea is the weirdest country in the world, part 2

Continuing my previous blog post, here I'll deal with the absolute worst dark side of North Korea: The concentration camps.

While the amount of information about the North Korean concentration camps is extremely limited, what we know is very likely at least close to the truth. This information comes from several sources, including satellite imagery, radio and other surveillance, and the testimony of the few defectors who succeeded in escaping these concentration camps and the country. While it is, of course, not 100% certain that all the information is completely accurate, the overall picture is nevertheless most probably at least close to correct (particularly because the eyewitness testimony of defectors can largely be corroborated by satellite imagery.)

The North Korean government is so utterly totalitarian, controlling and paranoid, that any dissent, no matter how minor, could land you in a concentration camp. And not only you, but your entire family with you, just as punishment. (This, of course, is designed to act as an even bigger deterrent: If you misbehave it will not only be you who will be sent to the gulag: It will be your wife, your parents, your children, and probably even your siblings.) 

What makes the North Korean concentration camps special is how utterly unique they are. While concentration camps have existed for almost as long as humanity itself, the North Korean ones stand out because of how unlike anything else they are. It's probable that never before, during the entire history of humanity, have there been concentration camps like that in the world. Some might have been somewhat close, but not the same.

There are several things that make these concentration camps unique in the history of humanity:

1) Their sheer size. These are not just some encampments a few city blocks in size, or the size of a small industrial area, like most of the camps from history. These concentration camps are absolutely enormous! The size of a big city! The fence surrounding these camps (which has been repeatedly confirmed with satellite imagery) encloses not only the living quarters and buildings, but large forested areas. They are usually the size of an entire town plus a good chunk of surrounding forest.

2) The infrastructure within these concentration camps. When one thinks of "concentration camp", the image of rows and rows of barracks immediately comes to mind, like the prison camps of World War II, with perhaps some factories and other buildings at one side.

However, that's not what these North Korean concentration camps contain (once again corroborated by satellite imagery). Instead, they are often built like enclosed small cities in themselves: They often have a central plaza (with, rather obviously, statues or paintings of the two Dear Leaders), a central promenade, and buildings that somewhat resemble a town or small city, with pretty normal-looking roads, surrounded by forested areas, agricultural land, and of course factories farther out. At a quick glance one would easily confuse them with normal North Korean towns. Only the fact that they are completely surrounded (at quite some distance) by a fence and dozens and dozens of guard towers, with a couple of very clearly guarded entrance gates, gives away that they aren't normal towns.

3) Many inhabitants were born inside the concentration camps, and have never left in their entire lives. Moreover, they know pretty much nothing of the outside world.

Where it becomes absolutely dystopian and the stuff of dark sci-fi is that the few defectors who have successfully escaped tell that not only are the inhabitants kept in a complete information blackout, but moreover they are told that there is no use in even trying to escape because the entirety of the outside world is a completely uninhabited wasteland, a toxic desert where they would die in a few hours, a few days at most. They are told that the fence surrounding the area is actually there to keep the outside dangers out, and that it's way too dangerous for them to venture out. That the town where they live is the only safe place in the world, and is the only place where people still exist.

4) And, rather obviously, these are forced labor camps, where all people from about age 5 up are forced to work for 10 to 12 hours every day. (The propaganda being, obviously, that they need to do that work to survive, that it's necessary for their entire population to be able to live, that everybody has to do their part, and anybody who is lazy and doesn't participate will be harshly punished because that's necessary for the survival of everybody.)

There have been myriads of concentration camps during the history of humanity, but nothing compares to this. Some might come close, but not quite. The sheer physical size, the infrastructure, the buildings, the multi-generational inhabitants, the absolutely insane propaganda fed to the inhabitants, it's just astonishing. It's literally like Shyamalan's The Village meets dark dystopian sci-fi. (There was an episode of a sci-fi TV series, I think it was Stargate, that depicted a concentration camp similar to this, in other words, the inhabitants were multi-generational and had been fed the lie that the entirety of the rest of the world was a dangerous polluted uninhabitable wasteland and thus it was too dangerous to venture outside. As far as I know, this plot was heavily inspired by North Korean concentration camps.)

Friday, June 20, 2025

Why are many modern triple-A games worse than they were 15 years ago?

It's a trope as old as humanity: Everything was better in the past, nowadays everything sucks. Music 25 years ago was awesome, modern music is trash. Movies 25 years ago were great, nowadays they are nothing but CGI slop. And, of course, video games in the past were better than today: They might look prettier (well, at least sometimes), but they are worse in most other ways.

However, at least when it comes to video games, particularly certain long-running franchises, this is not just the nostalgia filter speaking. There are many measurable ways in which many newer triple-A games are objectively worse than equivalent triple-A games of 15 or even just 10 years ago. There are, for example, myriads of compilation and commentary videos on YouTube making direct comparisons between such games.

Some examples include:

  • In a war game from the early 2010's you could shoot a building with a tank and its walls would crumble, and if you kept shooting at it, the entire building would crumble. In a modern game in the same franchise, if you shoot a building with a tank, nothing happens to it.
  • Likewise shooting at a wooden fence with a firearm would destroy it much more realistically in many war games 15 or so years ago. 
  • Water effects tended to be much more realistic in many triple-A games 15 or so years ago than in equivalent games (even within the same game franchise) today, such as when wading through the water, shooting at the water, how transparent the water is and how it distorts the ground, etc.
  • Many other effects, such as explosions, smoke effects, the effect that projectiles had on walls and so on and so forth, often (and perhaps a bit surprisingly) looked significantly better and more realistic in the older games than today, even within the same franchise.
  • Many games had put significantly more effort into making grand-scale physics look realistic, such as how it looks when an entire high-rise building collapses, or a big element (such as a huge antenna) falls off the top of a building.
  • Overall, and somewhat ironically, game physics tended to be more polished and look more realistic 15 years ago than they look in many similar games today.
  • Many visual effects, such as lighting and reflections, looked better 10 years ago than they look today in some games that have RTX support, when RTX is turned off (in other words, they have to rely on the same rendering techniques as the games from 10 years ago.)
  • Many modern games are much heavier to run than games 10-15 years ago even when the graphical and visual quality are set to be very similar (ie. they are on pretty even and comparable ground for comparison.) In fact, many modern triple-A games look objectively worse than games 10-15 years ago when their graphical settings are tuned so that they will run at about the same framerate at the same native resolution (ie. no upscaling) in the same PC.

And all this even though the budgets of these triple-A games are much larger today than they were 15 years ago, even within the same game franchise.

This is not to say that every single video game published in 2025 looks worse and has worse visual and physics effects than the best games published in 2010-2015. However, there is a clear trend that can be seen with many triple-A games, especially when it comes to long-running franchises.

What has caused this?

There are probably myriads of reasons for this, but here are some of the possible reasons:

1) There is less talent and passion today in big game studios

Many of the game developers in big game studios in the era between about 2000 and 2015 were "old-timey" demo coders and hackers of the 1990's. Computer nerds who taught themselves to code graphically impressive demos and games out of sheer passion, and who had great talent, knowledge and coding skills. Many of these people could code in one day something a thousand times more impressive than a modern university graduate could code in a month, and that's no exaggeration.

These "demo coders" and hackers grew up and many of them went to work in the gaming industry, for these big game studios such as EA, Ubisoft and so on. And they brought their talent and passion with them. They would, for example, spend a week implementing very detailed and accurate building crumbling mechanics and physics so that buildings could be destroyed with tank fire, just out of sheer passion and accomplishment.

This is, in fact, one of the reasons why game mechanics jumped in leaps during that era of about 2000-2015, and why many games, particularly towards the end of that era, are so impressive even by today's standards.

However, starting from about that 2010-2015 time period and onward, many of these big game studios started changing. They grew bigger and bigger, budgets grew bigger and bigger, and they became more and more what could be called "industrialized". Many of these big game studios stopped making games out of sheer passion; instead, games became just a means of making money. Deadlines became tighter, overtime and crunching became the norm, and management became less and less tolerant of time being "wasted" by these "demo coders" spending a week or two polishing some irrelevant detail in the physics engine of the game. On top of that, the politics of not just western society at large but also of the video game industry itself were changing, and these game studios started prioritizing things other than talent and expertise.

Many of these 90's "demo coders", who were in the industry out of passion and love for their craft, got fed up and left these studios. Indeed, there has been a quite massive exodus of "old-timey" coders from many of these big studios, such as EA, Ubisoft and several others. Some of them have created their own smaller studios, and others have just moved to something else entirely, being fed up with an industry that just doesn't suit them and their passions anymore.

Thus, these big game studios have been replacing the old "demo coders" with new recruits who have less talent, less knowledge, less skills, and significantly less passion for low-level game development. People who will not spend a week polishing some particular mechanic or effect because they love doing it and have the knowledge and passion to do it. And even the few old-timer hold-outs who still cling to their jobs in these game studios are often held back and hampered by company-internal politics and the new self-entitled recruits who are not there to make great games but to boss others around.

2) Scrum may be hindering polish and innovation

Software development companies, big and small, just love Scrum, and have been adopting it over the last 15 or so years. Scrum is an "agile development" framework that is supposed to make software development more efficient and effective by having the process go through a clear set of steps and plans: the project is divided into tasks and sub-tasks, which are clearly planned and defined, and which are put into a timeline and a sort of priority list; every programmer takes or is assigned tasks; and weekly and daily meetings are held in order to figure out where everybody is with their current tasks and to see what to do next.

Among the hundreds and hundreds of similar software development frameworks, Scrum has become a clear favorite and is almost universally used. It has become the de facto standard, and is often contrasted with its exact opposite, in other words a complete "wild west" form of "cowboy programming" where everybody does whatever they feel like with little to no supervision, planning, testing, or anything.

There are many good things about Scrum, and when well implemented (which isn't actually easy) it can improve software development. There are also bad things about Scrum which can hinder polish and innovation, particularly in large video games.

One of the major problems with it is that, when tightly implemented and used, it ties the hands of the developers and puts an extremely bright spotlight on everything they are doing: Developers are not free to do whatever they want, and have essentially no leeway on what to do (other than at some level choosing which tasks to do for the next Scrum sprint.) There are no "side projects", no "hobby projects", no "experimentation", no "let's try this to see if it works", no "let's polish this feature a bit, even though nobody asked for it." Every single task, to the most minute level, is clearly defined and assigned to every developer. "Person X does task Y, person A does task B. Period."

Sure, developers are free to suggest and even create new tasks that they come up with, like "research and implement a way for buildings to be destructible by tank fire." However, in a tight Scrum framework they usually are not free to just start doing those tasks: The tasks need to be approved in a planning meeting, and added to the next Scrum "sprint" before anybody can start doing them.

And what happens when the higher-ups see such a task? They start asking "do we really need this? Is this really necessary? There are more important and urgent things to finish first." Such "unnecessary" side tasks are never selected for the next sprint, and thus are shoved aside and forgotten. The developer who had the inspiration and passion to do that kind of task never gets to do it because his hands are tied and he is, essentially, not allowed to do it because he has to go through the process, reveal what he was planning to do, and have it approved. And, thus, "unnecessary" development often does not get approved.

And, thus, Scrum often kills polish and innovation in video games. It removes developers' freedom to pursue new ideas, to work on what they are passionate and talented about. Suddenly higher-ups start scrutinizing what they are doing, and denying them these "unnecessary" side projects because there are "more important" tasks to do first.

3) Technological innovation is making games worse

Many people have noticed and commented on the fact that technological innovation, particularly when it comes to graphics hardware, is actually, and very ironically, making games worse.

One of the most prominent examples of this is smart upscaling (such as DLSS or FSR): This is a technique where the game renders at a lower resolution and the smart upscaler then scales the result up to the display's native resolution in a way that looks better than a naive upscaler (in other words, the picture doesn't become blurry or pixelated, but retains small details as much as possible.)

The original intent of this feature was, of course, to allow somewhat weaker hardware to run games at higher resolutions with decent framerates. After all, weaker gaming hardware on the PC has always been the bane of every gamer who can't afford a top-of-the-line gaming PC: They always need to lower the graphical quality, the resolution, or both, of new games in order to be able to play at a reasonable framerate. Well, no longer! Now they can play at their native display resolution with pretty much the highest graphical quality, even on weaker hardware! This allows even weaker gaming PCs to produce visuals that are almost indistinguishable from top-of-the-line PCs. The trick is that behind the scenes the game is actually rendering at a significantly lower resolution, which is much faster, and then the smart upscaler makes it look almost like it had been rendered at native resolution in the first place.
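(To give a rough idea of why this is so tempting, here is a minimal Python sketch of the arithmetic involved. The preset names and scale factors below are just illustrative assumptions in the ballpark of what typical smart upscalers use, not any vendor's exact values.)

    # Illustrative per-axis render scales; actual values vary by upscaler and preset.
    PRESETS = {
        "native":      1.00,
        "quality":     0.67,
        "balanced":    0.58,
        "performance": 0.50,
    }

    def internal_resolution(width, height, scale):
        # The resolution the game actually renders at internally.
        return round(width * scale), round(height * scale)

    display_w, display_h = 3840, 2160  # a 4k display

    for name, scale in PRESETS.items():
        w, h = internal_resolution(display_w, display_h, scale)
        ratio = (w * h) / (display_w * display_h)
        print(f"{name:12s} {w}x{h}  ({ratio:.0%} of the pixels to render)")

Rendering only a quarter to a half of the pixels, and letting the upscaler fill in the rest, is a massive performance win.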

However, this technological innovation had a huge negative side effect: Many game developers started taking it as an excuse not to have to optimize their games like they had to do in the past. Why optimize the game to be able to hit that golden 60 frames per second at native 4k, even on high-end PCs, when you can just use the smart upscaler to make it look like you did? Why spend time optimizing the game when you have this wonderful tool that allows you to bypass all that?

The end result has been, of course, that new games still run like crap on weaker gaming PCs. The smart upscaling technology didn't help those one bit. With only a few exceptions, not much changed. Well, except for the fact that games now look worse than before, because the smart upscaler isn't perfect: It does a decent job at adding missing detail, but it can't beat the game actually being rendered at the display's native resolution in the first place. (Ok, to be fair, there are a few situations where the smart upscaler actually produces a better-looking result than rendering at native resolution. But this is a very hit-or-miss thing: Most things look ok, a few things actually look better, but many things look worse.)

RTX (ie. real-time ray tracing) is, of course, the other technological innovation that's causing games to look worse than they did 10 years ago when it is turned off (ie. when they have to rely on the same rendering techniques as in the past). The developers just can't be bothered to make the non-RTX graphics look as good as they did in the past.

In other words, technological innovation has made game developers lazy, and the end result is often worse than what it was 15 years ago.

Sunday, June 15, 2025

North Korea is the weirdest country in the world

There are many YouTube videos (as well as outright documentaries broadcast on TV and other publication platforms) made by people who have visited North Korea. And all of them paint a picture of it really being the weirdest country in the world.

Many have said that, at least from the point of view of outsiders visiting the country, North Korea is like a real life version of The Truman Show. This is probably actually quite an apt description.

When foreigners visit the country, not only are they very tightly watched and chaperoned everywhere they go, and kept on a very strict and tight tour schedule, but every single thing they encounter, no matter how tiny, is tightly scripted and acted, and the vast majority of the things they see are nothing but glorified facades and stages.

Visitors are, quite obviously, kept only in very tightly controlled (and extremely limited) parts of the country, where they can only see what the government wants them to see. They will be accommodated in one of the very few luxury hotels in the capital city, and their tour schedule will strictly take them to tightly controlled locations in order to see tightly controlled performances. And, of course, "tour guides" will be constantly chaperoning them everywhere, along with, quite likely, a bunch of unseen government agents (who are not only watching the foreigners, but likely also the "tour guides" themselves, to check that they perform their duties exactly as commanded and do not deviate in any way.)

These "tour guides" are always extremely happy, positive and wanting to give a good time to the visitors. So much so that it quickly starts feeling a bit uneasy. 

Needless to say, the visitors are absolutely and categorically forbidden from going anywhere on their own, without being chaperoned. (Even if in a few places they are seemingly allowed to wander around on their own, they will still be very closely monitored.) 

The country is famously and notoriously poor, always on the brink of complete economic collapse and famine, yet in these tightly guided "tours" everything will look as luxurious and grandiose as possible. Luxury hotels, luxury restaurants, grandiose monumental museums and event halls, streets that look like they were lifted straight from the richest parts of Japan or South Korea, with (seemingly) dozens of restaurants, karaoke bars and so on. Obviously the visitors will only be guided to a couple of them; they can't choose for themselves which ones they will go into. Quite likely the rest are just empty inside, with only the outside facade making it look like there are businesses inside, when in fact there aren't.

It's like one big Hollywood studio set, spread around the capital city and a couple of nearby towns.

Not every single person that the visitors will see is a tightly controlled actor (like in The Truman Show). Many of the "normal citizens" strolling around, particularly in the middle of the capital city, may be, well, "normal" people rather than governmental agents. They may well be normal citizens living their normal lives in the buildings around. However, even their behavior is tightly controlled.

This is because only the citizens who have shown the strongest loyalty to the ruling party and to the leaders, those who have distinguished themselves in this regard and shown that they are good Korean citizens, are allowed to live in the center of the capital city, particularly in the areas where foreigners are allowed to visit. These people have been inculcated and indoctrinated since childhood to be extremely loyal to the regime, and they know very well what will happen to them if they were to show any dissent whatsoever, no matter how minor. They know perfectly well that especially when there are foreigners nearby, they are extremely closely watched and monitored, and they know what will happen to them if they don't smile and show happiness to the foreigners, as if they were living the best lives possible.

So, in this sense, even the normal regular citizens that foreigners might see are "actors" of a sort: They are under strict orders to act happy, and, if they were to talk to the foreigners at all, to only say certain things. Deviating from this even slightly would be catastrophic not only to them personally but also to their entire families.

Foreigners will be toured through dozens of different museums, exhibits and demonstrations. There may be musical performances, dancing, martial arts demonstrations, all kinds of things. All tightly choreographed to give the foreigners as good a picture of the country as possible.

The actual reality of things still sometimes manages to seep through all the choreography and facades, though.

The visitors may be driven along a huge multi-lane highway to their next destination... but the highway will be virtually empty, with only the very occasional other car driving in the other direction. Something you pretty much never see in actual rich countries. Ironically, such highways will often look outright post-apocalyptic.

The visitors may be taken to huge luxury restaurants and offered luxurious five-star meals... but the restaurant will often be completely empty other than for them. Dozens and dozens of empty tables around them, with nobody else there. Perhaps one or two of the other tables with some people in them at the very most, but often not even that. It is amply clear that the meal was prepared only and solely for the foreigners, and there is no other activity in these fake restaurants.

What makes this whole thing bizarre and weird is the question of "why?"

Why is the North Korean government so insistent, so obsessed, with keeping up appearances and giving this picture to foreigners? Everybody knows that it's all fake, that it's all for show, that it's all staged, that it's nothing but theater with expensive props, just to try to give foreigners a false picture of what the country is like.

Everybody knows this. Doesn't the North Korean government know this? They must know that nobody believes them. Or are they really so utterly delusional that they honestly believe that their theatrics are actually fooling the visitors and the rest of the world?

The thing is, it's not just the theatrics, the acting, the choreography and the facades: It's how much it costs. The North Korean government has spent and is spending absolutely humongous amounts of money to keep up this facade. And for what? For a few hundred visitors per year? They are literally burning through enormous amounts of money and resources to try to convince a few hundred people per year that it's a great country. And this even though everybody knows it's all fake.

So why? Why all the theatrics? Why spend so much time, effort and money on foolishly trying to create an illusion that is fooling nobody? Are they really this delusional?

It really is the weirdest country in the world.

North Korea is not the only country that's this totalitarian, oppressive and tightly closed. Perhaps less known and less famous, but not in any way less totalitarian and closed, is Turkmenistan. Visiting that country is even harder than visiting North Korea, and the government there is not much better. But the thing is, Turkmenistan doesn't even bother trying to keep up a facade to the rest of the world. (Well, except for their international airport, which is absolutely humongous and mostly empty of any passenger traffic. I suppose they are trying to keep up some kind of facade for people who land there on connecting flights. Of which there are something like a hundred per day. At an airport that's designed to handle tens of millions of passengers per year.)

One theory that I have seen is that North Korea isn't actually engaging in the theatrics for the foreign visitors. That they know perfectly well that they aren't fooling anybody from the outside. Instead, they are doing it for propaganda purposes, for their own citizens. In other words, all these "guided tours" of foreign visitors give them video footage that they can then broadcast in their own country, through their propaganda TV channels, to the citizens. The message being: "Look how great, rich and prosperous our country is! Look at all these luxuries, look at all these huge monuments, these huge exhibitions, these huge luxury hotels and restaurants. Even foreigners are in awe at our luxuries and our culture!"

North Koreans living in the countryside in poverty and famine may well be convinced by all this propaganda that the country indeed is great, rich, grandiose and powerful, and that their own situation is just a regional, unfortunate anomaly, and that if they just work hard enough, their own life situation will also improve and become like what's shown on TV.

(The same governmental propaganda also paints a picture that the rest of the world, particularly South Korea, is living in utter poverty and misery, most of it being controlled by the United States, which is exploiting the rest of the world, and that North Korea is the only free and prosperous country that is not under the heel of America. That North Korea is essentially a paradise, one of the richest and most prosperous countries, thanks to them keeping America out. Most North Korean citizens believe this because they don't have any other information about the rest of the world than what is shown on their TV.)

Friday, May 2, 2025

Why do countries always use a local coordinate system rather than a global one?

Suppose you want to know the exact coordinates of a particular point in your property, to the centimeter. In many countries there are services (oftentimes governmental services) that you can commission to do this measurement and give you all the exact data of one or more such points in your land or property.

However, pretty much always what they give you are coordinates using a coordinate system that's local to your own country. Very rarely if ever do they give you the data in a global coordinate system, like a standard GCS (a geographic coordinate system such as WGS84, the latitude/longitude system used by GPS), or the ECEF (Earth-centered, Earth-fixed) system.

But why is that? Is it just tradition? Stubbornness? Hesitation to move to a global standard from a long-established local one?

No. The reason is a lot more practical than that. And that reason is: The continents move.

That might sound like a bit of a surprising answer at first, but it is indeed literally the reason. Giving the coordinates using a global coordinate system like GCS or ECEF is not practical because continents move. And they move surprisingly fast. This would cause the coordinates of that particular point to deviate more and more as time passes.

Indeed, if you measure the exact coordinates of a particular point on the ground today, using a global coordinate system, and you do the same measurement a year later, you'll find out that the coordinates will have drifted by several centimeters, typically somewhere between 2 and 10 centimeters depending on where you are on Earth. When you need the coordinates to be accurate to the centimeter (or sometimes even to the millimeter), them drifting by this much is just completely impractical.

A local coordinate system for each country, however, is not fixed to the global latitudinal and longitudinal coordinates of the Earth, but to the land itself. It is defined so that certain points on the ground do not move (or move as little as possible). Or to put it in other words, for all intents and purposes the local coordinate system moves with the continent. This makes sure that exact measurements of particular points on the ground will remain relatively accurate for decades to come (with any drifting being in the range of less than a millimeter per decade, or so.)
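(As a rough back-of-the-envelope illustration, here is a minimal Python sketch of how quickly such drift accumulates in a globally-fixed coordinate system. The plate velocity used is just an assumed example figure in the typical range; actual velocities vary from plate to plate.)

    # Assumed example: a plate moving at 5 cm per year, a typical order of
    # magnitude for continental plates.
    PLATE_VELOCITY_CM_PER_YEAR = 5.0

    def accumulated_drift_cm(years):
        # Drift of a fixed ground point relative to a global (non-plate-fixed) frame.
        return PLATE_VELOCITY_CM_PER_YEAR * years

    for years in (1, 5, 10, 25):
        print(f"After {years:2d} years: {accumulated_drift_cm(years):.0f} cm of drift")

After a couple of decades the globally-expressed coordinates of, say, a property corner would be off by around a meter, which is exactly why surveying uses a plate-fixed national coordinate system instead.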

These exact measurements are often necessary for all kinds of land surveys, city planning, construction work, road planning, and so on. They give you a quick way of knowing the exact distance, to centimeter accuracy, between two points within the country. And, more importantly, they give you a way of knowing the exact distance from a newly-measured point to another existing point. All kinds of construction projects use these measurements and this data all the time.

There is also another, more secondary reason to use local coordinates rather than eg. GCS:

The GCS coordinate system in particular models "sea level" as an exact mathematical oblate spheroid, with particular standardized dimensions. This is a close approximation to Earth's actual sea level, but it's not exact, because the actual mean sea level surface (the so-called geoid) is not a mathematically exact oblate spheroid, and deviates from it by tens of meters in places.

This can cause surprising results particularly in altitude measurements. There are many parts of the world where, if you were to use GCS coordinates to measure altitudes, it would seem like certain rivers are flowing uphill. Indeed, according to the GCS coordinates the end of a river may be at a "higher altitude" than the beginning.

Of course this is just a quirk of the GCS system itself, caused by its oblate spheroid being just an approximation. A very close approximation, but still just an approximation. (Also, how gravity works on an oblate spheroid plays a role in this. It's complicated.)

Many local coordinate systems also fix this problem by using a more accurate "sea level" surface (a geoid model) for the country, and in them all rivers flow downhill. (Although there are still some local coordinate systems that have not fixed this, and still have this same problem. Several countries have tried to move to a more accurate local coordinate system in recent decades to fix this problem, among others.)
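(The usual fix is to reference heights to a geoid model, ie. a model of the actual mean sea level surface, rather than to the mathematical spheroid. The relationship between the two kinds of height is simple; the numbers in the Python sketch below are just made-up example values, not real data for any particular place.)

    # Height above the spheroid (roughly what GPS gives you) vs. height above
    # the geoid ("sea level", what maps and construction plans use):
    #
    #   orthometric_height = ellipsoidal_height - geoid_undulation
    #
    # Example values are assumed; real geoid undulations come from a geoid model
    # and vary by tens of meters around the world.
    ellipsoidal_height_m = 72.4   # hypothetical GPS-style height
    geoid_undulation_m   = 27.1   # hypothetical geoid height at that location

    orthometric_height_m = ellipsoidal_height_m - geoid_undulation_m
    print(f"Height above sea level: {orthometric_height_m:.1f} m")

With heights referenced to the geoid, water always flows from a higher height to a lower one, so the "rivers flowing uphill" quirk disappears.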

Saturday, April 12, 2025

My prediction on the upcoming Switch 2 sales

Nintendo consoles have become somewhat famous for their intermittent generational popularity, starting with the Nintendo 64 console. In other words, the success of their consoles seems to alternate between each successive generation:

The Nintendo 64 was relatively successful (although people often overestimate how successful it actually was, it did sell quite well by raw numbers), the GameCube was less so, the Wii was an absolute monster, the Wii U was an utter disappointment in terms of sales, and the Nintendo Switch was another absolute monster of a console, in terms of units sold and overall success (surpassing even the Wii, and by quite a margin, becoming the third most-sold console in history.)

This kind of alternation in success is not really seen, at least not that prominently, with the consoles of the other two major competitors, ie. Microsoft and Sony.

One of the reasons for this success alternation with Nintendo consoles is probably what could be called "generational fatigue" of sorts (referring to console generations rather than human generations).

Since pretty much the very beginning Nintendo consoles have been seen primarily as "for kids", the console that parents are most likely to buy for their kids, while the Microsoft and Sony consoles are more seen as "for hard-core gamers". While the division is not this strict in reality, it still arguably exists.

Hard-core gamers generally want the latest-and-greatest and are usually eager to get the latest console. However, parents buying a console for their kids do not think like this. Instead, they tend to think more like "we already have a Nintendo at home, why do we need another one?"

Parents don't see the value in the "latest and greatest". A console is just a console. If you have one, doesn't that suffice? And, quite likely, this happened again and again: "We already have a N64 at home, why do we need this GameCube thingie?" "We already have a Wii at home, why do we need this Wii U thing? It's just the same."

This might have caused this sort of intermittent buying pattern by parents: The next console launch comes "too soon" after the last one, so parents don't feel any incentive to buy it because they "already have one at home". However, by the time of the release after that, enough years have passed, the kids have grown up, and a new generation of kids are without a console, so the new parents buy the newest one for them, especially since the 15-or-so-year-old console doesn't see much use anymore.

The Nintendo Switch is currently Nintendo's latest-and-greatest console, and one of the most successful consoles ever, not just among Nintendo's own. It's currently the third-most-sold console in history, only behind the PlayStation 2 and the Nintendo DS, both obsolete (and not even all that far behind them, in terms of units sold).

As of writing this blog post, the Nintendo Switch 2 is about to be launched.

If we look at history, all the signs are there: It's extremely similar to the current Switch, and most parents who buy such consoles for their kids are likely not going to see much value in spending a huge bunch of money for a console that looks and feels so similar to the one they already have at home. "We already have a Switch at home, why do we need a new one?"

This has all the same hallmarks as the Wii U fiasco: It looks and feels way too similar to the previous console: Name's the same, looks the same, feels the same, and the marketing isn't doing enough to make it clear that this is an entirely new console.

However, there are some differences compared to the Wii vs. Wii U situation:

For starters, the problem with the Wii U was that many people thought that it was some kind of add-on peripheral for the Wii (essentially a new controller with a screen on it). They didn't actually realize that it was an entirely new independent console with better specs. Nintendo's poor marketing didn't do enough to make that clear.

The Switch 2 is, however, quite clearly a new console, not just some kind of add-on peripheral for the original Switch. I don't think anybody's confused about that. (However, there's still the problem that it looks and feels so similar that many parents will not see any value in purchasing it if they already have the Switch. It just looks like a slightly upgraded Switch. Which, to be fair, it kind of is.)

Secondly, the Switch has been much more widely adopted by even the more "hard core" gamers. In other words, the ones that will buy the latest-and-greatest and not suffer from "generational fatigue" so much.

This second aspect in particular is likely to make the Switch 2 more of a success than the Wii U was.

However, I highly doubt that it will reach even close to the absolutely humongous sales numbers of the Switch. That's just not going to happen.

There's also another big problem with the Switch 2: Its price. It's significantly more expensive than the original Switch, and this can be quite a turn-off for many people (both parents and for gamers buying the console for themselves.)

So, how much will Switch 2 sell?

The Nintendo Switch, as of writing this, has sold about 150 million units.

I predict that the Nintendo Switch 2 will sell during its lifetime perhaps 100 million units. Maybe a bit less. Let's say 80-100 million units.

Saturday, March 15, 2025

A different approach to convincing someone why 0.9 repeating is equal to 1

For some reason some people have an extremely difficult time accepting that 0.9 repeating is equal to 1. Not that it merely "approaches" 1, but that it's exactly equal to 1. They are just two different ways to write down the same value.

Some people are so incredibly obsessed with trying to prove that they are not equal that they will go to incredible lengths to try to do so. They will start arguing semantics, they will try to muddle the definition of an infinitely repeating decimal, and some may even attempt to invent completely new mathematics in order to somehow make the two things not equal. They are so obsessed with this that there's absolutely nothing you can tell them that would convince them otherwise. Nothing. You can try, but you will fail.

Regardless, even if it's rather moot (and will never, ever convince these people), here are two slightly different approaches to showing the equality of the two things. Instead of trying to prove it yourself, try to make them do the work.

Approach 1: Repeating decimal patterns as a fraction

It's a well-known result that every real number whose decimal representation has an infinitely repeating pattern starting at some point after the decimal point is a rational number, and this is actually relatively easy to prove. And, in fact, this well-known relationship goes both ways: If the decimal representation of a number has an infinitely repeating pattern after the decimal point (not necessarily starting immediately after the decimal point, but from some point forward), it is a rational number, and if it doesn't have such a pattern, it's an irrational number.

This can be more succinctly (and mathematically) expressed as: A real number is rational if and only if its decimal expansion is eventually periodic.

And since such a value is a rational number, it can be written as a fraction, ie. as the ratio between two integers. In other words, every value whose decimal representation has an infinitely repeating pattern starting at some point after the decimal point can be written as a ratio of two integers.

As an example, 0.4 repeating is a rational number and can be written as 4/9.
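(For reference, here is the standard short derivation for this example; it needs nothing beyond basic algebra, and the same trick works for any purely repeating decimal:)

    x = 0.444...
    10x = 4.444...
    10x - x = 4.444... - 0.444... = 4
    9x = 4
    x = 4/9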

Since this is a proven mathematical fact (and it's actually relatively easy to prove yourself), that means that 0.9 repeating is also a rational number which can be written as a fraction, ie. the ratio between two integers.

So the question is: Given that proven mathematical fact, find out what those two integers are. In other words, what is the fraction that gives 0.9 repeating?

If you want to present the argument to someone succinctly, it could be something like this:

"It's a known result that a real number is rational if and only if its decimal expansion is eventually periodic. This is easy to prove. That means that 0.9 repeating is a rational number. This also means that, as a rational number, it can be written as the ratio of two integers. Calculate what those two integers are."

Approach 2: Calculate the difference

If two values are equal, then their difference, ie. one subtracted from the other, is 0, pretty much by definition.

If two values are not equal, then their difference will be non-zero, again pretty much by definition.

Thus, if the real number 1 is different from the real number 0.9 repeating, calculate their difference, ie. the result of their subtraction. If they are indeed not equal, then the result has to be a real number that's different from 0. What is that real number?

(Note: There is no such thing as "the smallest real number larger than zero". Such a number does not exist, and it's logically and mathematically impossible for it to exist in the set of real numbers. This is a famous and trivially provable fact of arithmetic: For any real number x larger than zero, x/2 is smaller than x and still larger than zero.)

This could be succinctly presented as:

"By axiomatic definition, if two real numbers are equal, their subtraction results in 0. Conversely, if two real numbers are not equal, their subtraction results in a non-zero real number. Calculate the subtraction of 1 and 0.9 repeating."