Monday, September 22, 2025

How the Microsoft Kinect should have been marketed

Microsoft spent, as far as I know, over a hundred million dollars merely on advertising the Kinect while developing it, and succeeded in creating a massive amount of hype around it. The device was pushed and showcased constantly at E3 and other events, celebrities endorsed it, big-name game developers and publishers endorsed it, and the hype grew so large that its launch became an outright party event at many stores in America, complete with huge countdown clocks (very reminiscent of New Year countdown clocks) and crowds cheering and celebrating outside.


Contrary to what many people might think, the Kinect (both versions) actually sold a surprising number of units: Microsoft reports having sold about 30 million units of the original Kinect (for the Xbox 360) alone. This is actually more than some famous consoles sold during their entire lifetimes (such as the original Xbox and the Nintendo GameCube.) In fact, shortly after its launch it broke the Guinness World Record for the fastest-selling consumer electronics device (by selling 8 million units in its first 60 days.)

(This doesn't mean that Microsoft made a huge profit from it, though. It is my understanding that they were selling the device at a loss, as is so common with these things, hoping to recoup the losses with game sales.) 

However, regardless of the enormous sales numbers, the device turned out to be quite a flop in the end.

Its main problem was that it greatly overpromised but underdelivered. It promised to completely revolutionize gaming, to be an absolutely marvelous form of control, and to eventually amass a massive library of extremely immersive and fun games.

Yet the device was significantly less accurate than the pre-release demos showed and promised (they claimed, for example, that it could track the movement of individual fingers, which it absolutely could not), it was annoying to use, it made most controls clunkier and needlessly difficult, and the vast majority of its games were significantly simpler and more boring than normal games.

While about a hundred Kinect-only games were published, both game developers and users quickly lost interest. The device just didn't live up to any of the hype and promises, and it was a slog to use and to develop for. People quickly moved back to the regular controller (even though, according to the Kinect marketing, that controller was a "barrier" that limits how you can play games. Oh, the irony.)

Microsoft tried really, really hard to keep the device alive by bundling an updated, better version of it with the Xbox One. It didn't help. An abysmal 21 games were ever developed for it, and that's it. Microsoft persisted for a few years, but finally relented, first making the Kinect optional and then dropping support completely. It was declared a complete failure.

One of the major problems (besides the technical limitations such as low tracking resolution) that I see in the marketing is that Microsoft approached it completely wrong: They based their marketing 100% on the idea of the Kinect replacing the regular game controller. In other words, Kinect games would be controlled with the Kinect and nothing else, that's it. Microsoft not only envisioned the Kinect to be better than the regular controller (an idea that turned out to be massively naive), but also that it would be the exclusive control system for its games.

Indeed, from what I have seen 100% of their marketing was based on that complete replacement notion. As it turns out, that didn't pan out at all.

I believe that Microsoft should have approached the Kinect's marketing and development in a different way: rather than being a complete replacement for the normal controller, it could have worked to enhance it. While there could still have been games that used the Kinect alone (such as dancing games), the main use case would have been using the Kinect in addition to, and at the same time as, the normal controller. In other words, the Kinect would add head and hand gestures to the repertoire of controls.

The most obvious such enhancement, something that would truly add to the gaming experience, would be head tracking: for example, in a flight simulator you control the plane with the controller as normal, but you can look around by turning your head. Head-tracking systems of this kind are available for PC; the Kinect could have brought them to the console.

Imagine such a demo at one of those E3 conferences or wherever: the person on stage is seated, controlling an airplane or spaceship with a controller, and then turns his head a bit to either side to have the in-game view turn likewise (just like head-trackers work on PC).

Hand gestures could also be added to the repertoire of controls. For example, the player could reach out with his hand and turn a lever, or press some buttons, while still controlling the vehicle with the controller. Imagine that kind of demo: the demonstrator is driving a car with the controller, looking around by turning his head, and then he moves his right hand to the side, makes a motion forward, and in the game the gear lever shifts forward to select the next gear.

Now that would have been immersive. I would imagine that the spectators would have been excited about the prospect.
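To make the idea a bit more concrete, here is a minimal Python sketch of how such a hybrid control scheme could work in principle. This is purely illustrative: the tracker, gamepad, vehicle and camera objects and all of their methods are invented for this example, they are not any real Kinect or console API.

```python
# Hypothetical hybrid input loop: the controller keeps doing what it always did,
# and the camera tracker merely adds head-look and a few broad gestures on top.

HEAD_LOOK_SCALE = 2.0  # amplify head rotation so a small turn moves the view a lot

def update_vehicle_and_camera(gamepad, tracker, vehicle, camera, dt):
    # Normal controller input still drives the vehicle, exactly as before.
    vehicle.steer(gamepad.left_stick_x())
    vehicle.throttle(gamepad.right_trigger())

    # Head tracking only adds an offset to the camera; it replaces nothing.
    yaw, pitch = tracker.head_yaw_pitch()   # radians relative to "facing forward"
    camera.set_look_offset(yaw * HEAD_LOOK_SCALE, pitch * HEAD_LOOK_SCALE)

    # A broad hand gesture (right hand pushed forward) triggers a discrete
    # action, such as shifting gear, without touching the controller.
    if tracker.right_hand_pushed_forward():
        vehicle.shift_up()

    vehicle.update(dt)
```

The point of this sketch is that head-look and a handful of coarse gestures are arguably much closer to what the hardware could actually deliver than the full-body precision tracking the marketing promised.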

(I'm not saying that the Kinect would have become a success if it had been developed and marketed like this, but I think that it would have at least had better chances.) 

Sunday, September 21, 2025

Microsoft's White Elephant: The Kinect


Those who never owned an Xbox 360, or who did but never really followed the hype that Microsoft created around the Kinect, might find it a bit surprising, given how little impact the device ultimately had on video gaming: the Kinect was absolutely massively advertised and pushed by Microsoft back in the day, with borderline outrageous promises and hype. And we are talking about massive promotional campaigns.

The original slogan for the Kinect was "You Are The Controller". The initial narrative, prior to the Kinect's launch (and for a while after it), was that the traditional controller was a "barrier", a restrictive form of control that severely limited what was possible. According to the marketing campaigns, "Kinect will change living room entertainment forever".

Microsoft's promotional demonstrations at E3 2009, and several subsequent ones, promised absolutely incredible real-time interactivity. (The fact that the actual shipped Kinect turned out to be enormously less accurate and powerful than advertised leads me to believe that those E3 demonstrations were fully scripted, running pre-recorded animations rather than live real-time captures of the performer's movements on stage.)

Among the things that were promised (with live demonstrations, allegedly in real-time, although as said, I have my doubts) were:

- Accurate full-motion capture of the entire body, with the in-game character following the position and movement of every limb and the head very accurately. In one demo this allowed full control of a character wielding a lightsaber, fighting against hordes of enemies, with accurate movements and all kinds of maneuvers (such as force pushes, etc.)

- Moreover, the detection would be so accurate as to allow very precise maneuvering, allowing very small, precise and subtle movements, such as hand gestures, to accurately control something. This included things like opening and closing one's hand, or even moving individual fingers, and manipulating in-game objects with great precision (to even the millimeter range).

- Using the traditional controller would become essentially obsolete, as everything would be usable with the Kinect alone, using gestures and voice. In fact, it was promised that many things would actually become easier with the Kinect than with the controller, especially thanks to the smart voice recognition system. (For example, not only could you make the Xbox 360 play music by saying "xbox, play some music", but you could moreover specify a particular song, an artist, or a music genre, for instance, and the system would quickly find songs matching the specified parameters.)

- Video chat with remote players would be possible, easy, and practical. (In fact, the Kinect could even follow the user's position so as to keep him or her always centered on the view.)

- The Kinect would have full facial and shape recognition, distinguishing between different users, tracking the position of each user, and even being able to scan objects of a certain shape, such as a skateboard or a piece of paper, in real time. In one demo, for instance, a player draws a picture on a piece of paper, shows it to the Kinect and "hands it over" to the in-game character; the Kinect sees the paper coming closer, the character reaches out and "grabs" it in real time, and the paper now shows the same picture in-game (which the Kinect, at least allegedly, scanned in real time from the paper using its camera.)

Microsoft got some really big name people to promote the Kinect at some of their E3 presentations, such as Steven Spielberg himself. Several big-name game companies also announced full Kinect support in many of their future games and game franchises, promising significant improvements in gameplay and immersion.

The launch of the original Kinect was made quite a massive spectacle in itself, with tons of money poured into it. Microsoft really, really pushed the Kinect as a complete revolution in video gaming: a completely new form of control, of playing games, that would make the old systems obsolete, antiquated and limited. (Does this sound familiar? *ahem* VR *ahem*)

Of course reality turned out to be quite a letdown, and the massive hype campaign completely out of proportion. The camera resolution as well as the framerate of the final retail version of the Kinect was but a fraction of what was promised (something that Microsoft directly admitted a bit prior to launch, citing cost and technical problems on both the Kinect side and the Xbox 360 side), affecting most of the promised features. Motion detection was much poorer than promised, facial recognition was almost non-existent and extremely flawed, and the promised ability to scan objects (such as pictures drawn on paper) was likewise pretty much non-existent. Accurately scanning the entire body of a user and replicating it on screen was likewise unrealistic.

I do not know if the Kinect would have worked as promised had it had the technical specifications originally planned for it (both in terms of camera resolution and capture framerate), but at the nerfed specs it finally shipped with, the system was almost unusable. Rather than replacing the regular controller, and being at least as fluid as it, if not more so, it was a nightmare to use. Just navigating the home screen, or the main menu of any game, using gestures was often a pain: very inaccurate and inconvenient. Most games were unable to accurately detect anything but the broadest of gestures (even though the E3 demos had promised the Kinect would detect even minor gestures, such as opening and closing one's hand, or even the position of individual fingers), and this made even the simple act of navigating a menu very inaccurate and inconvenient. (In fact, many games skipped even trying to detect hand gestures and implemented a simpler method: just broadly detect where the user's hand is, and if the user keeps their hand on top of a button for long enough, activate that button. Needless to say, this isn't the most convenient, efficient, fastest or most accurate way of navigating a menu.)
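For what it's worth, that "hover over a button long enough" scheme is about as simple as input handling gets, which is probably exactly why so many games fell back on it. Here is a rough sketch of the idea in Python; the button objects and the hand-position source are hypothetical placeholders, not an actual Kinect SDK API.

```python
# Rough sketch of dwell-based selection: a cursor follows the hand, and a
# button activates only after the hand has hovered over it long enough.

DWELL_TIME = 2.0  # seconds the hand must stay on one button

class DwellSelector:
    def __init__(self):
        self.target = None   # button currently hovered
        self.timer = 0.0     # how long it has been hovered

    def update(self, buttons, hand_xy, dt):
        hx, hy = hand_xy
        hovered = next((b for b in buttons if b.contains(hx, hy)), None)

        if hovered is not self.target:
            # Hand moved to another button (or off all buttons): reset the timer.
            self.target, self.timer = hovered, 0.0
        elif hovered is not None:
            self.timer += dt
            hovered.show_progress(min(self.timer / DWELL_TIME, 1.0))
            if self.timer >= DWELL_TIME:
                hovered.activate()   # finally "clicked"
                self.timer = 0.0
```

The scheme needs nothing more than a coarse hand position, which is exactly what the hardware could reliably provide, and that is also why it feels so slow compared to just pressing A on a controller.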

Needless to say, this was quite a big disappointment both for users and for game developers, neither of whom got the wondrous new form of control that was promised.

Even so, Microsoft still tried to push the Kinect as the next big thing, and induced many game companies to make games for it. Some developers did indeed make Kinect games, even Kinect-only games, especially during its first few years. However, regardless of how much Microsoft pushed the platform, the total number of Kinect games is quite low. Wikipedia lists the Xbox 360 as having (at least) 1233 games in total (although the real number is probably a bit higher, as Wikipedia doesn't necessarily list the most obscure games ever released for the console), and of those, only 108 are Kinect-only (with an additional 49 games having optional Kinect support).

108 games is not exactly an abysmally low number, but it's still pretty low considering the success of the Xbox 360 itself, and considering how much Microsoft promoted the system. (Also consider that a good portion of those Kinect-only games are dancing games, which isn't exactly a very popular genre.)

One would think that after the disappointment that the Kinect was, given that it delivered on almost none of its promises and neither users nor game developers were exactly thrilled about it, Microsoft would have, after a couple of years, just abandoned it and let it die a natural death. But no. For some reason Microsoft remained obsessed with the Kinect for many years to come. So much so that when they designed their next-generation console, the Xbox One (which launched almost exactly 3 years after the original Kinect), they made a new "improved" version of the Kinect for it. They wanted to push it into the market so hard that they actually made it a mandatory peripheral for the Xbox One: not only would every single console come with the new Kinect bundled with it, but the console wouldn't be usable at all without it. No Kinect, and the console would refuse to even work!

Due to the massive backlash caused by this announcement, Microsoft reversed that decision just prior to launch, and allowed the console to be used without the Kinect. However, the Kinect would still be bundled with every Xbox One; you couldn't buy one without the other. (It wasn't until almost a year later that Microsoft finally started selling the Xbox One without the Kinect, at about $100 cheaper thanks to that.)

I understand what Microsoft was trying to do: the problem with the Xbox 360 Kinect was that only a fraction of users had it, and thus it wasn't very enticing for game developers to make games for it. However, now that every single Xbox One user was guaranteed to have a Kinect, that would surely give game developers an incentive to support it. (After all, that's one of the core ideas of game consoles: every console owner has the exact same hardware, which makes game developers' lives much easier. If every console owner has a Kinect, there shouldn't be any problem in adding Kinect support to a game.)

It didn't help. Users still weren't interested in the Kinect, and in fact, the Kinect making the system about $100 more expensive hurt sales quite badly. Perhaps in a vacuum it would have been ok, but the Xbox One had one ginormous adversary at the exact same time: the PS4, which was selling like hotcakes while the Xbox One, with its $100 higher price tag, was suffering.

When Microsoft finally started selling the console without the Kinect, its sales figures started improving significantly. (They never reached those of the PS4, but were still significantly better, making the console actually viable.)

Three years after the launch of the Xbox One, Microsoft finally accepted the reality that the Kinect was a completely dead piece of hardware: neither users nor game developers were interested in it. (There's perhaps no better indication of this than the fact that even though the Xbox One has been on the market for four years, an abysmal 21 Kinect games exist for it.)

A nail in the coffin of the poor device came when Microsoft released the Xbox One S, a streamlined and slightly more efficient version of the console, with no Kinect port at all. (A Kinect can still be connected to it, but it requires a separate USB adapter. The Kinect itself isn't a USB device, instead using its own custom port.)

And, of course, the absolutely final nail in the coffin is the fact that the new Xbox One X has no Kinect support at all. Microsoft has finally effectively declared the system dead for good.

Microsoft really pushed the Kinect to be the next big thing, and probably spent countless millions of dollars on its development and marketing, well beyond what was reasonable. They should have accepted it as a failure within its first couple of years instead of dragging it into the Xbox One. The Kinect, however, became a kind of self-imposed White Elephant for Microsoft. (In common terminology a "white elephant" is an overly costly possession that cannot be disposed of. In this case, Microsoft imposed this status onto themselves, for at least six years, rather than just getting rid of it.)

Thursday, September 18, 2025

The strangest urban legend: Soda can tab collecting

Throughout the entire history of humanity there have always been so-called "urban legends". Modern times are no different; on the contrary, urban legends have only become more easily widespread thanks to the proliferation of newspapers, magazines, radio and TV, and with the internet their spread has skyrocketed: where centuries ago it could take decades for an urban legend to spread to any significant extent, and decades ago it could take years, in the era of the internet it can take mere days.

Some urban legends are completely harmless and innocuous, as they just make people believe a silly thing that nevertheless doesn't affect their lives or how they behave. Then there are the more harmful urban legends that actually do affect how people behave and what they do, sometimes even in dangerous or detrimental ways. There are also urban legends that have spawned entire cults and conspiracy theories.

Then there are the stranger urban legends that have had a huge amount of influence among the people who believe them (and who usually refuse to believe otherwise, no matter how much counter-evidence is presented to them.)

One of the strangest ones is the urban legend that collecting soda can tabs and sending them somewhere will help fund wheelchairs or medical procedures for elderly people. Not the cans themselves, but just the tabs.

This urban legend has existed for many decades, starting at least in the 1980's, probably even the 1970's. It might not be as widespread today anymore, but it was still going strong in the 1990's.

Incredibly, a very strange "quasi-cult" formed around this urban legend: not only would believers spend copious amounts of time detaching and collecting these soda can tabs (again, just the tabs, not the cans themselves) and sending them off to wherever they were being collected, but entire voluntary transport supply chains formed in many countries.

Indeed, there were people who volunteered their time to collect these thousands and thousands of aluminum tabs from some town and transport them somewhere else, to someone else who was supplied by several such people and who would then transport them further, to the next person up the chain. Entire supply networks formed, especially in the 80's and 90's, in many countries, all on a voluntary basis.

Many investigative journalists have tried to find out where exactly these soda can tabs end up, and it's always the same story: a dead end. They interview these people and ask who they are transporting the tabs to, and it's always someone else in the supply chain. The reporters follow this trail, often through a dozen different people, a dozen different "levels" in the network, and the result is always the same: eventually they hit a dead end, where they just can't find the next person or entity in the supply chain. And, of course, none of the people they interview knows where the tabs are ultimately going; it's always just the next person in the chain. They don't know, nor really care, where that person then transports them to.

There doesn't appear to be any kind of conspiracy or secret organization behind it. It appears that this kind of extremely large and complex endless supply chain, which always just ends in some kind of dead end where nobody knows the next link, has somehow arisen spontaneously, as a kind of emergent behavior: volunteers have simply shown up to become part of the huge supply network, each bringing the tabs to someone else, until the next link in the chain just can't be found and nobody really knows anything more about it.

Any authority or other such person who knows about this strange supply chain always says what should be rather obvious: there is no organization or entity that collects these soda can tabs and funds wheelchairs or anything else. Such a thing doesn't exist. Nobody has ever gotten a wheelchair or a medical procedure thanks to these tabs. There is no record anywhere of such an entity, or of where the tabs ultimately end up. For all anyone knows they end up buried in some warehouse somewhere ("waiting" to be transported somewhere), or in a landfill, or something.

Or perhaps someone along the line just sells them to a recycling center for a few dollars and keeps the money (which, in fact, is the most likely end point of the entire chain). Many suspect that the last identifiable person in the chain, the one who allegedly "gives them to someone further up" without really knowing who, simply brings them to a recycling center and pockets the money; or maybe it's that unknown, anonymous next person in the chain who does. Obviously those people are very hush-hush about it.

It's also pointed out that, rather obviously, the soda can tabs, no matter how many tons of them are collected, are not even nearly valuable enough to fund any wheelchairs or expensive medical procedures. Ironically, the soda cans themselves, from which these tabs are detached, would be more valuable, but even those could perhaps barely fund one wheelchair, if enough metric tons of them were collected. But, as mentioned, there is no organization that does this. Nobody has ever gotten any wheelchair or anything else thanks to these soda tabs.

Yet, the people who keep (or at least kept in the 90's and early 2000's) detaching and transporting these soda can tabs are True Believers. No amount of evidence will convince them otherwise. In fact, when they are presented the evidence that what they are doing is useless and nobody is getting any wheelchairs, they just say that "you don't know what you are talking about, so you should shut up."

This is despite the fact that they literally have no idea where the tabs are going, what this supposed organization or entity is, or who has ever gotten a single wheelchair or medical procedure thanks to it. They just firmly believe these exist, even though they don't know who or where.

It's an extremely strange "quasi-cult". 

Wednesday, September 10, 2025

Why ChatGPT can be so addictive

If a mere 10 years ago you had told me, or pretty much anybody even slightly knowledgeable about technology and computers, that in less than a decade we would get AIs that can hold perfect conversations in perfectly fluent English (and even multiple other languages), understanding everything you say and ask and responding accordingly, exactly like any human would, or even better; that they could write perfect essays on any subject whatsoever, no matter whether it's a highly technical scientific topic, psychology, pop culture, or anything else; and that they could even compose very good and reasonable poetry, song lyrics, short stories and other creative output, I would have laughed at you, and most other tech-savvy people would have laughed as well.

Who would have guessed a mere 10 years ago that almost perfectly fluent conversational AI, writing perfect English on any topic you could imagine, both in terms of grammar and the substance of the content, was less than a decade in the future?

But here we are, with exactly that.

And the thing is, these AIs, such as ChatGPT, can be incredibly addictive. But why?

The addictiveness does not come from the AI being helpful, eg. when you ask it for some information, how to do something, or where to find something. That just makes it a glorified googling tool, giving you the answer you are looking for much faster and much more easily.

That's very useful, but it's not what makes it addictive. The addictiveness comes from having casual conversations with the AI, without any particular goal or purpose. But again, why? 

There are several reasons:

1) You can have a completely fluent casual conversation with the AI like you could with any intelligent conversational friend. Whatever you want to talk about, the AI can respond to it intelligently and on topic.

2) The responses are usually very interesting. Sometimes it will tell you things you already knew, but probably in more detail than a normal person would. Oftentimes it may tell you things you didn't know, and which might be interesting tidbits of information.

3) The AI is like a really smart person who knows about every possible topic and is always happy to talk about it. Be it astrophysics, or mathematics, or computer science, or programming, or physics, or psychology, or sociology, or pop culture, or history, or a classic movie or song, or the culinary arts, or even just some random topic about some random subject, the AI pretty much always knows about the topic and can give you answers and information about it. There's no topic to which it will just answer "I don't really know much about that." (There are some inappropriate topics that AIs have been explicitly programmed to refuse to answer, or the running system might even outright stop them from answering, giving you an error message, but that's understandable.)

4) Moreover, it's able to adjust its level of conversation to your own knowledge and capabilities. It won't start throwing advanced mathematical formulas at you if your question is at the level of casual conversation, but will show you advanced technical stuff if you ask it to and show that you understand the topic.

5) And here's where it starts getting so addictive: The AI is always available, is always happy to talk with you, never gets tired, is never "not in the mood". It doesn't need to sleep, it doesn't need to take breaks, it doesn't get tired (physically or mentally), it never gets the feeling that it doesn't want to talk at that moment. It never tells you "not now."

6) Likewise it never gets exasperated, never gets tired of you asking stuff, and never considers your questions stupid. It doesn't matter whether your questions and commentary are at the PhD level or at the kindergarten level: it's always happy to have a conversation and will always do so in a positive tone, without getting exasperated, without using a patronizing tone.

7) And on a similar vein, the AI never gets offended, never feels insulted, never has its feelings hurt, no matter what you say, and never gets tired of speaking to you. You could continue on the same topic, repeating similar questions again and again, and it will never get tired of answering, always politely. You can use rude and derogatory language, and even insults, and its feelings will not get hurt. It will not start resenting you, nor start considering you an unlikeable person to avoid. You could even have an argument with it, where you stubbornly refuse to accept what it's saying, again and again, for hours on end, and the AI is literally incapable of becoming frustrated, tired or offended by your responses. It will never respond in kind, and will always retain a polite tone and will always be happy to keep answering your questions and objections, no matter what.

8) Most AIs, such as ChatGPT, are also programmed to be a bit of an agreeable "yes man": If you express a personal opinion on a subjective topic, it will tend to support your opinion and tell you what its strengths are. It will very rarely outright start arguing with you and tell you that you are wrong in your opinion (unless it's something that's clearly wrong, eg. very clearly and blatantly unscientific. But even in those cases it might show some understanding of where you are coming from, if you express your opinion reasonably enough.)

9) But, of course, if you explicitly ask it to present both sides of an issue, it will write a mini-essay on that, giving supporting arguments for each side.

But it's precisely that aspect, of being a tireless and agreeable "person" who is always willing, and knowledgeable enough, to have a conversation about pretty much any topic, that can make it so addictive. Friends are not always available, friends don't necessarily know every topic, friends are not always agreeable, friends can get offended or tired; ChatGPT is incapable of any of that. It's always there, ready to have a conversation.

That can be incredibly addictive. 

Sunday, August 31, 2025

HDR is a pain in the ass

Almost thirty years ago the PNG image format was developed as a significantly better and more modern alternative for lossless image storage than any of the existing formats (particularly GIF). It did many things right, and it combined into one format all the various image features that almost none of the other existing formats supported simultaneously (eg. GIF only supports 256 colors, which is horrendous, and only single-color transparency; TIFF and several other formats support almost all image features but have extremely poor compression; and so on.)

However, there was one thing that the PNG format tried to do "right" but which ended up causing a ton of problems and becoming a huge pain in the ass for years and years to come, particularly once support for the format became widespread, including in web browsers. That feature was support for a gamma correction setting.

Without going into the details of what gamma correction is (as this can be easily found online), it's an absolute swamp of complications, and with the standardization and widespread support for PNG it became a nightmare for at least a decade.

In the vast, vast majority of cases, particularly when using image files in web pages, people just want unmodified pixels: Black is black, white is white, 50% gray is 50% gray, and everything in between. Period. If a pixel has RGB values (10, 100, 200), then they want those values being used as-is (eg. in a web page), not modified in any way. Particularly, if you eg. specify a background or text color of RGB (10, 100, 200) in your web page, you definitely want that same value to visually match exactly when used in an image.

When PNG became widely supported and popular in web pages, its gamma specification caused a lot of problems. That's because when a gamma value is specified in the file, a conforming viewer (such as a web browser) will change the pixel values accordingly, thus making them look different. And the problem is that not only do different systems use different gamma values (most famously, Windows and macOS used, and maybe still use, different values), but support for gamma correction also varied among browsers, some of them supporting it, others not.

"What's the problem? PNG supports a 'no gamma' setting. Just leave the gamma setting out. Problem solved." Except that the PNG standard, at least back then, specifically said that if gamma wasn't specified in the file, the viewing software should assume a particular gamma (I think it was 2.2). This, too, caused a lot of problems because some web browsers were standard-conformant in this regard while others didn't apply any default gamma value at all. This meant that even if you left the gamma setting out of the PNG file, the image would still look different in different browsers.

This was a nightmare because many web pages assumed that images would be shown unmodified and thus that colors in the image would match those used elsewhere on the page. 
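To see concretely why this was such a mess, here is a toy illustration in Python. It deliberately simplifies the PNG gamma model down to a single power function and uses made-up combinations of file gamma and display exponent; the point is only that the same stored pixel values come out differently depending on what gamma handling the viewer applies.

```python
# Toy illustration (a big simplification of real PNG gamma handling): a viewer
# that honors gamma maps each sample roughly as
#   out = in ** (1 / (file_gamma * display_exponent)).
# With file_gamma = 1/2.2 on a 2.2 display the exponent is 1.0 and nothing
# changes, but any other combination shifts the pixel values.

def display_value(sample, file_gamma, display_exponent):
    """Map one 0-255 sample to what the viewer actually sends to the display."""
    normalized = sample / 255.0
    exponent = 1.0 / (file_gamma * display_exponent)
    return round(255.0 * normalized ** exponent)

pixel = (10, 100, 200)

for label, file_gamma, display_exp in [
    ("PC-authored PNG on a 2.2 display", 1 / 2.2, 2.2),   # exponent 1.0: unchanged
    ("same PNG on a 1.8 (old Mac) display", 1 / 2.2, 1.8),
    ("viewer assuming a different default gamma", 1 / 2.5, 2.2),
]:
    print(label, [display_value(v, file_gamma, display_exp) for v in pixel])

# Only the first case leaves (10, 100, 200) untouched, so in the other cases
# the image no longer matches a page color specified as RGB (10, 100, 200).
```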

I think that in later decades the situation has stabilized somewhat, but it can still raise its ugly head.

I feel that HDR currently is similar to the gamma issue in PNG files in that it, too, causes a lot of problems and is a real pain in the ass to deal with.

If you buy a "4k HDR" BluRay, most (if not all) of them assume that you have a HDR-capable BluRay player and television display. In other words, the BluRay will only contain an HDR version of the video. Most of them will not have a regular non-HDR version.

What happens if your TV does not support HDR (or the support is extremely poor), and your BluRay player does not support "un-HDR'ing" the video content? What happens is that the video will be much darker and with very wrong color tones, and look completely wrong.

This is the exact situation, at least currently, with the PlayStation 5: It can act as a BluRay player, and supports 4k HDR BluRay discs, but (at least as of writing this) it does not have any support for converting HDR video material to non-HDR video (eg. by clamping the ultra-bright pixels to the maximum non-HDR brightness) and will just send the HDR material to the TV as-is. If your TV does not support HDR (or it has been turned off because it makes the picture look like ass), the video will look horrendous, with completely wrong color tones, and much darker than it should be.

(It's a complete pain in the ass. If I want to watch such a BluRay disc, I need to go and turn HDR support on, after which the video will look acceptable, even though my TV has very poor-quality HDR support. But at least the color tones will be ok. Afterwards I need to go back and turn HDR off to make games look ok once again. This is such a nuisance that I have stopped buying 4k HDR BluRays completely, after the first three.)
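Conceptually, the "un-HDR'ing" conversion mentioned above doesn't have to be anything exotic: it's essentially squeezing brightness values that exceed the SDR maximum back into range, either by clamping them or by rolling them off smoothly. Here is a toy sketch of that idea; it is not how the PS5 or any real player actually processes video, and it ignores transfer functions and color-gamut conversion entirely.

```python
# Toy sketch of "un-HDR'ing" a frame: scene luminance values can go far above
# 1.0 (the SDR maximum), so we either clip them or compress them smoothly.

def clamp_to_sdr(luminance):
    # Brutal approach: anything brighter than SDR white is simply clipped.
    return min(luminance, 1.0)

def reinhard_to_sdr(luminance):
    # Gentler approach (Reinhard tone mapping): highlights are compressed
    # instead of clipped, preserving some detail in bright areas.
    return luminance / (1.0 + luminance)

for hdr_value in (0.2, 0.9, 2.0, 8.0):
    print(hdr_value, clamp_to_sdr(hdr_value), round(reinhard_to_sdr(hdr_value), 3))
```

Either of these would look far better on a non-HDR TV than sending it raw HDR values it cannot interpret, which is what produces the dark, wrong-looking picture described above.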

More recently, I got another instance of the pain-in-the-ass that's HDR, and it happened with the recently released Nintendo Switch 2.

Said console supports HDR. As it turns out, if your TV supports HDR, the console will turn HDR on, and there's no option to turn it off, anywhere. (There's one setting that claims to turn it off for games, but it does nothing.)

I didn't realize this and wondered for some weeks why the picture looked like ass. It's a bit too bright, everything is a bit too washed-out, with the brightest pixels just looking... I don't know... glitched somehow. The thing is that I had several points of direct comparison: the console's own display, as well as the original Switch. On those, the picture looks just fine. However, on my TV the picture from the Switch 2 looks too washed-out, too low-contrast, too bright. And this even though I had adjusted the HDR brightness in the console's settings.

One day this was bothering me so much that I started browsing the TV's own menus until, buried deep within layers of settings, I found one that turned HDR support completely off for that particular HDMI input.

And what do you know, the picture from the Switch 2 immediately looked perfect! Rich colors, rich saturation, good contrast, just like on the console's own screen.

The most annoying part of all of this is that, as mentioned, it's literally not possible to reach this state using the console's own system settings. If your TV tells it that it supports HDR, it will use HDR, and there's nothing you can do in the console itself to avoid that. You have to literally turn HDR support off in the settings of your TV to make the console stop using it.

The PlayStation 5 does have a system setting that turns HDR completely off from the console itself. The Switch 2 does not have such a setting. It's really annoying.

The entire HDR thing is a real pain in the ass. So far it has only caused problems without any benefits, at least to me. 

Thursday, August 14, 2025

The complex nature of video game ports/rereleases/remasters

Sometimes video game developers/publishers will take a very popular game of theirs that was released many years prior, and re-release it, often for a next-gen platform (especially if talking about consoles).

Sometimes the game will be completely identical to the original, simply ported to the newer hardware. This can be particularly relevant if a newer version of a console isn't compatible with the previous versions, as it allows people who own the newer console but never owned the older one to experience the game. (On the PC side this can be the case with very old games that are difficult if not impossible to run on modern Windows, at least without emulation, and thus probably not with a legally purchased copy.)

Other times the developers will also take the opportunity to enhance the game in some manner, improving the graphics and framerate, perhaps remaking the menus, and perhaps polishing some details (such as the controls).

Sometimes these re-releases can be absolutely awesome. Other times not so much, and they feel more like cheap cash grabs.

Ironically, there's at least one game that's actually an example of both: The 2013 game The Last of Us.

The game was originally released for the PlayStation 3 in June of 2013, and was an exclusive for that console. Only owners of that particular console could play it.

This was, perhaps, a bit poorly timed because it was just a few months before the release of the PlayStation 4 (which happened in November of that same year).

However, the developers announced that an enhanced PlayStation 4 version would be made as well, and it was released in July 2014 under the name "The Last of Us Remastered".

Rather than just going the lazy way of releasing the exact same game for both platforms, the PlayStation 4 version was indeed remastered with a higher resolution, better graphics, and higher framerate, and it arguably looked really good on that console.

From the players' point of view this was fantastic: even people who never owned a PS3 but did buy the PS4 could experience the highly acclaimed game, rather than it remaining an exclusive of an old console (which is way too common). This is, arguably, one of the best re-releases/remasters ever made, not just in terms of the improvements but, more importantly, in terms of letting gamers who wouldn't otherwise have played the game experience it.

Well, quite ironically, the developers later decided to make the same game also one of the worst examples of useless or even predatory "re-releases". From one of the most fantastic examples, to one of the worst.

How? By re-releasing a somewhat "enhanced" version exclusively for the PlayStation 5 and Windows in 2022, with the name "The Last of Us Part I". The exact same game, again, with somewhat enhanced graphics for the next generation of consoles and PC.

Ok, but what makes that "one of the worst" examples of bad re-releases? The fact that it was sold at the full price of US$70, even for those who already own the PS4 version.

Mind you: "The Last of Us Remastered" for the PS4 is still perfectly playable on the PS5. It's not like PS5 owners who don't own the PS4 cannot play and thus experience it.

It was not published as some kind of "upgrade pack" for $10, as is somewhat common. It was released as its own separate game at full price, on a platform that's still completely capable of running the PS4 version. And this was, in fact, a common criticism among reviewers (both journalists and players.)

Of course this is not even the worst example, just one of the worst. There are other games that could be argued to be even worse, such as the game "Until Dawn", originally for the PS4, later re-released for the PS5 with somewhat enhanced graphics, at full price. While, once again, the original is still completely playable on the PS5.

Wednesday, August 6, 2025

"Dollars" vs "cents" notation confusion in America

There's a rather infamous recorded phone call, from maybe 20 or so years ago, where a Verizon customer calls customer support to complain that their material advertised a certain cellphone internet connectivity plan as costing ".002 cents per kilobyte", but he was charged 0.002 dollars (ie. 0.2 cents) per kilobyte.

It's quite clear that the ad meant to say "$0.002 per kilobyte", but whoever wrote the ad had instead written ".002c per kilobyte" (or ".002 cents per kilobyte"; I'm not sure, as I have not seen the ad). (It's also evident from the context that the caller knew this but wanted to deliberately challenge Verizon over the mistake in the ad, as false advertising is potentially illegal.)

I got reminded of this when I recently watched a video by someone who, among other things, explained how much money one can get in ad revenue from YouTube videos. He explains that his best-earning long form video has earned him "6.33 dollars per thousand views", while his best-earning shorts video has earned him "about 20 cents per thousand views". Crucially, while saying this he is writing these numbers, and what does he write? This:


In other words, he says "twenty cents", but rather than write "$0.20" or, alternatively, "20 c", he writes "0.20 c".

Obviously anybody who understands the basics of arithmetic knows that "0.20 c" is not "20 cents". After all, you can literally read what it says: "zero point two zero cents", which rather obviously is not the same thing as "twenty cents". It should be obvious to anybody that "0.20 c" is a fraction of a cent, not twenty entire cents (in particular, it's one fifth of a cent). The correct notation would be "$0.20", ie. a fraction of a dollar (one fifth).
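To spell out just how large the error is: the two readings of the same figure differ by a factor of exactly 100. A quick sanity check in Python (the 10,000-kilobyte usage in the second example is a number I made up purely for illustration):

```python
# The two readings of the same figure differ by a factor of exactly 100.

# The YouTube example: "0.20 c" read literally vs. what he meant ($0.20).
literal_cents = 0.20            # 0.20 cents = $0.002
meant_dollars = 0.20            # $0.20 = 20 cents
print(meant_dollars / (literal_cents / 100))   # -> 100.0

# A Verizon-style example, with a made-up usage of 10,000 kilobytes:
kilobytes = 10_000
advertised = kilobytes * 0.002 / 100    # ".002 cents per KB" -> $0.20 total
charged    = kilobytes * 0.002          # $0.002 per KB       -> $20.00 total
print(advertised, charged)
```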

This confusion seems surprisingly common in the United States in particular, even among people who are otherwise quite smart and should know better. But what causes this?

Lack of education, sure, but what exactly makes them believe this? Why do they believe this rather peculiar thing?

I think that we can get a hint from that phone call to Verizon. During that call the customer support person, when explicitly asked, very clearly stated that ".002 cents" and ".002 dollars" mean the same thing. When a manager later took over the call, he said the exact same thing.

Part of this confusion seems to indeed be the belief that, for example, "20 cents", "0.20 cents" and "0.20 dollars" all mean the same thing. What I believe is happening is that these people, for some reason, think that these are some kind of alternative notations to express the same thing. They might not be able to explain why there are so many notations to express the same thing, but I would imagine that if asked they would guess that it's just a custom, a tradition, or something like that. After all, there are many other quantities that can be expressed in different ways, yet mean the same thing.

Lending credibility to this hypothesis is the fact that, in that same phone call, the customer support person repeatedly says that the plan costs "point zero zero two per kilobyte", without mentioning the unit. Every time she says that, the customer explicitly asks "point zero zero two what?", and she clearly hesitates and then says "cents". Which, of course, is the wrong answer, as it should be "dollars". But she doesn't seem to understand the difference.

What I believe happened there (and is happening with most Americans who have this same confusion) is that they indeed believe that something like "0.002", or ".002", in the context of money, is just a notation for "cents", all by itself. That if you want to write an amount of "cents", you use a dot and then the cents amount. Like, for example, if you wanted to write "20 cents", you would write a dot (perhaps preceded by a zero) and then the "20", thus "0.20" all in itself meaning "20 cents". And if you wanted to clarify that it indeed is cents, you just add the "¢" at the end.

They seem to have a fundamental misunderstanding of what the decimal point notation means and signifies, and appear to believe that it's just a special notation to indicate cents (and, thus, that "20 cents" and "0.20 cents" are just two alternative ways to write the same thing.)

Of course the critics are right that this ultimately stems from a lack of education: the education system has not taught people the decimal system well enough. Most Americans have learned it properly, but then there are those who have fallen through the cracks and never got a proper grounding in the decimal system and arithmetic in general.

Sunday, August 3, 2025

How the Voldemort vs. Harry final fight should have actually been depicted in the movie

The movie adaptation of the final book in the Harry Potter series, Deathly Hallows: Part 2, makes the final fight between Harry and Voldemort flashy but confusing, leaving the viewers completely unclear about what exactly is happening and why, and does not convey at all the lore in the source material.

How the end to the final fight is depicted in the movie is as follows:

1) Voldemort and Harry cast some unspecified spells at each other, being pretty much a stalemate.


2) Meanwhile elsewhere Neville kills Nagini, which is the last of Voldemort's horcruxes.


3) Voldemort appears to be greatly weakened by this, so much so that his spell just fizzles out, at the same time as Harry's.

 

4) Voldemort is shown as greatly weakened, but he still casts another unspecified spell, and Harry responds with also an unspecified spell.


5) However, Voldemort's spell quickly fades out, and he looks completely powerless, looking at his Elder Wand with a puzzled or perhaps defeated look, maybe not understanding why it's not working, maybe realizing that it has abandoned him, or maybe just horrified at having just lost all of his powers. Harry's spell also fizzles out; it doesn't touch Voldemort.

6) Harry takes the opportunity to cast a new spell. He doesn't say anything but from its effect it's clear it's an expelliarmus, the disarming spell. 

7) Voldemort gets disarmed and he looks completely powerless. The Elder Wand flies to Harry.

8) Voldemort starts disintegrating.


So, from what is depicted in the movie, it looks like Neville destroying Nagini, Voldemort's last horcrux, completely sapped him of all power, and despite making one last, very feeble effort, he gets easily disarmed by Harry and then just disintegrates, all of his power and life force having been destroyed.

In other words, it was, in fact, Neville who killed Voldemort (even if a bit indirectly) by destroying his last source of power, and Harry did nothing but just disarm him right before he disintegrated.

However, that's not at all what happened in the books.

What actually happened in the books is that, while Neville did kill Nagini, making Voldemort completely mortal, that's not what destroyed him. What destroyed him was that he cast the killing curse at Harry, who in turn immediately cast the disarming spell, and because the Elder Wand refused to destroy its own master (who via a contrived set of circumstances happened to be Harry Potter), Voldemort's killing curse rebounded back from Harry's spell and hit Voldemort himself, who died of it.

In other words, Voldemort destroyed himself with his own killing curse spell, by having it reflected back, because the Elder Wand refused to kill Harry (its master at that point).

This isn't conveyed at all in the movie.

One way this could have been depicted better and more clearly in the movie would be, for example:

When Neville destroys Nagini, Voldemort (who isn't at that very moment casting anything) looks shocked and distraught for a few seconds, then his shock turns into anger and extreme rage, and he casts the killing curse at Harry, saying it out loud (for dramatic effect the movie could show this in slow motion or in another similar manner), and Harry immediately responds with the disarming spell (also spelling it out explicitly, to make it clear which spell he is casting.)

Maybe after a second or two of the two spell effects colliding with each other, the movie clearly depicts Voldemort's spell rebounding and reflecting from Harry's spell, going back to Voldemort and very visibly hitting him. Voldemort looks at the Elder Wand in dismay, then at Harry, then his expression changes to shock when he realizes and understands, at least at some level, what just happened. He looks again at his wand and shows an expression of despair and rage, but now Harry's new disarming spell knocks it off his hand, and he starts disintegrating.

Later, in the movie's epilogue, perhaps Harry himself could give a brief explanation of what happened: that the Elder Wand refused to kill its own master, Harry himself, and that Voldemort's killing curse therefore rebounded and killed its caster. 

Thursday, July 31, 2025

Matt Parker (inadvertently) proves why algorithmic optimization is important

Many programmers in various fields, oftentimes even quite experienced programmers, have this notion and attitude that optimization is not really all that crucial in most situations. So what if a program takes, say, 2 seconds to run when it could run in 1 second? In 99.99% of cases that doesn't matter. The important thing is that it works and does what it's supposed to do.

Many will often quote Donald Knuth, who in a 1974 article wrote "premature optimization is the root of all evil" (completely misunderstanding what he actually meant), and interpret that as meaning that one should actually avoid program optimization like the plague, as if it were somehow some kind of disastrous detrimental practice (not that most of them could ever explain why. It just is, because.)

Some will also reference some (in)famous cases of absolutely horrendous code in very successful programs and games, the most famous of these probably being the video game Papers, Please by Lucas Pope, whose source code is apparently so horrendous that it would make any professional programmer puke. Yet the game is enormously popular, which goes to show (at least according to these people) that the actual quality of the code doesn't matter; what matters is that it works. Who cares if the source code looks like absolute garbage and the game takes 5% of resources when it could take 2%? If it works, don't fix it! The game is great regardless.

Well, I would like to present a counter-argument. And this counter-example comes from the youtuber and maths popularizer Matt Parker, although inadvertently so.

For one of his videos, related to the game Wordle (where you have to guess a 5-letter word by entering guesses, with colors indicating correct letters), he wanted to find out whether there are any groups of five 5-letter words that use only unique letters, in other words 25 different letters in total.

To do this he wrote some absolutely horrendous Python code that read a comprehensive word list file and tried to find a combination of five such words.

It took his program an entire month to run!

Any experienced programmer, especially one with experience in this kind of algorithm, should have alarm bells ringing loudly at this point, as this sounds like something that could quite trivially be done in a few seconds, probably even in under a second. After all, the problem's constraints are laughably small: only five-letter words matter (every other word can be discarded), and even the largest English dictionaries have just a few thousand of them; words with repeated letters can be discarded as well (which restricts the list to a small fraction); and what remains is finding combinations of five such words that share no common letters.

And, indeed, the actual programmers among his viewers immediately took the challenge and quite quickly wrote programs in various languages that solved the problem in a fraction of a second.
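To give an idea of what those faster solutions look like, here is a minimal sketch of the usual bitmask approach. This is a generic illustration of the technique, not Matt Parker's code nor any particular viewer's submission, and "words.txt" is just a placeholder for whatever word list is used.

```python
# Minimal sketch of the bitmask approach: each 5-letter word with five distinct
# letters becomes a 26-bit mask, and we search for five masks that don't overlap.

def load_masks(path):
    masks = {}                                   # bitmask -> one representative word
    with open(path) as f:
        for line in f:
            w = line.strip().lower()
            if len(w) != 5 or not w.isalpha():
                continue
            m = 0
            for ch in w:
                m |= 1 << (ord(ch) - ord('a'))
            if bin(m).count('1') == 5:           # all five letters are distinct
                masks.setdefault(m, w)           # anagrams collapse to one entry
    return masks

def find_groups(masks):
    items = sorted(masks.items())
    mask_of = [m for m, _ in items]
    word_of = [w for _, w in items]
    n = len(items)

    # Precompute, for each word, the later words that share no letters with it.
    compatible = [[j for j in range(i + 1, n) if not (mask_of[i] & mask_of[j])]
                  for i in range(n)]

    results = []

    def search(candidates, used, chosen):
        if len(chosen) == 5:
            results.append([word_of[i] for i in chosen])
            return
        for i in candidates:
            if mask_of[i] & used:                # shares a letter with the group so far
                continue
            chosen.append(i)
            search(compatible[i], used | mask_of[i], chosen)
            chosen.pop()

    search(list(range(n)), 0, [])
    return results

if __name__ == "__main__":
    groups = find_groups(load_masks("words.txt"))
    print(len(groups), "groups found")
```

Even plain, unoptimized Python along these lines finishes vastly faster than a month; the famous sub-second solutions build on the same idea with much more aggressive pruning and lower-level optimizations.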

So yes, indeed: His "I don't really care about optimization as long as it does what it's supposed to do" solution took one entire month to calculate something that could be solved in a fraction of a second. Even with an extremely sloppily written program using a slow language it should only take a few seconds.

This is not a question of "who cares if the program takes 10 seconds to calculate something that could be calculated in 5 seconds?"

This is a question of "calculating in one month something that could be calculated in less than a second."

Would you be willing to wait for one entire month for something that could be calculated in less than a second? Would you be using the "as long as it works" attitude in this case?

It's the perfect example of why algorithmic optimization can actually matter, even in unimportant personal hobby projects. It's the perfect example of something where "wasting" even a couple of days thinking about and optimizing the code would have saved him a month of waiting. It would have been actually useful in practice.

(And this is, in fact, what Donald Knuth meant in his paper. His unfortunate wording is being constantly misconstrued, especially since it's constantly being quote-mined and taken out of its context.)

Thursday, July 24, 2025

"n% faster/slower" is misleading

Suppose you wanted to promote an upgrade from an RTX 3080 card to the RTX 5080. To do this you could say:

"According to the PassMark G3D score, the RTX 5080 is 44% faster than the RTX 3080."

However, suppose that instead you wanted to disincentivize the upgrade. In that case you could say:

"According to the PassMark G3D score, the RTX 3080 is only 30.6% slower than the RTX 5080."

Well, that can't be right, can it? At least one of those numbers must be incorrect, mustn't it?

Except that both sentences are correct and accurate!

And that's the ambiguity and confusion between "n% faster" and "n% slower". The problem is in the direction we are comparing, in other words, in which direction we are calculating the ratio between the two scores.

The RTX 3080 has a G3D score of 25130.

The RTX 5080 has a G3D score of 36217.

If we are comparing how much faster the latter is to the former, in other words, how much larger the latter score is than the former score, we do it like:

36217 / 25130 = 1.44118  →  44.1 % more (than 1.0)

However, if we are comparing how much slower the former is than the latter, we would do it like:

25130 / 36217 = 0.693873  →  30.6 % less (than 1.0)

So both statements are actually correct, even though they show completely different percentages.

The fundamental problem is that this kind of comparison mixes ratios with subtraction, which leads to asymmetric results depending on which direction the comparison is made in. When only one of the two percentages is presented (as is most usual), this can skew the reader's perception of how large the performance improvement actually is.
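For what it's worth, the whole thing fits in a few lines of Python (using the two G3D scores quoted above, and including the frame-rate framing suggested below):

```python
# Both "44% faster" and "31% slower" come from the same pair of numbers;
# only the direction of the ratio changes.
rtx_3080 = 25130   # PassMark G3D scores quoted above
rtx_5080 = 36217

faster = rtx_5080 / rtx_3080 - 1   # ~0.441 -> "the 5080 is 44.1% faster"
slower = 1 - rtx_3080 / rtx_5080   # ~0.306 -> "the 3080 is 30.6% slower"
factor = rtx_5080 / rtx_3080       # ~1.44  -> the unambiguous speed factor

print(f"{faster:.1%}  {slower:.1%}  {factor:.2f}x")
print(f"60 FPS on the RTX 3080 -> about {60 * factor:.0f} FPS on the RTX 5080")
```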

A more unambiguous and accurate comparison would be to simply give the factor. In other words:

"According to the G3D score, the speed factor between the two cards is 1.44."

However, this is a bit confusing and not very practical (and could also be incorrectly used in comparisons), so an even better comparison between the two would be to just use example frame rates. For example:

"A game that runs at 60 FPS on the RTX 3080 will run at about 86 FPS on the RTX 5080."

This doesn't suffer from the problem of which way you are doing the comparison because the numbers don't change if you do the comparison in the other direction:

"A game that runs at 86 FPS on the RTX 5080 will run at about 60 FPS on the RTX 3080." 

Sunday, July 20, 2025

What the 2008 game Spore should have been like

Like so many other examples, the 2008 video game "Spore" generated quite some hype prior to its launch, but ended up with a rather lukewarm reception in the end. Unsurprisingly, the end product was nowhere near as exciting and expansive as the pre-launch marketing hype made you believe.

It was supposed to be one of those "everything" games: it would simulate the development of life all the way from unicellular organisms to galaxy-spanning civilizations, and everything in between, giving players complete freedom over how their species would evolve along the way, building entire planetary and galactic civilizations!

The hype was greatly enhanced by the game's creature editor being released as a standalone application prior to launch, giving a picture of what would be possible within the game. And, indeed, by 2008 standards the creature editor was quite ahead of its time, and literally millions of players made their own creations, some of them absolutely astonishing and likely better than even the game developers themselves could have imagined. (For example, people were creating full-sized, fully animated Tyrannosaurus rex-like creatures, which seemed impossible when you first started using the editor, but players found tricks and ways around the limitations to create absolutely stunning results.)

Unsurprisingly, the game proper didn't live up to the hype at all.

Instead of an "everything" game, a "life simulator" encompassing all stages of life from unicellular organisms to galaxy-spanning civilizations, it was essentially just a collection of mini-games that you had to play completely linearly in sequence (with no other options!) until you got to the actual "meat" of the game, the most well-developed part of it, ie. the final "space stage", which is essentially a space civilization simulator.

The game kind of delivered the "simulate life from the very beginning" aspect, but pretty much in name only. Turns out that the game consists of five "stages":

  1. Cell stage, where you control a unicellular creature trying to feed, survive and grow.
  2. Creature stage, where you jump to a multi-cellular creature, where the creature editing possibilities start kicking in.
  3. Tribal stage, at the beginning of which the creature's form is finalized and locked, and which is a very small-scale "tribal warfare" simulator of sorts.
  4. Civilization stage, which now has turned into a somewhat simplistic, well, "Civilization" clone, where you manage cities and their interactions with other cities (trading, wars, etc.)
  5. And finally the actual game proper: The space stage, where you'll be spending 90+ % of your playthrough time. This is essentially a galaxy civilization simulator, and by far the most developed and most feature-rich part of the game.

The major problem with all of this is that every single stage is completely independent of every previous stage. Indeed, it literally doesn't matter what you do in any of the previous stages: It has pretty much no effect on the next stage. The only major lasting effect is a purely cosmetic one: The creature design you created at the transition between the creature and tribal stages will be the design shown during the rest of the game. And that's it. That's pretty much the only thing where one stage affects the next ones. And it is indeed 100% cosmetic (ie. it's not like your creature design affects eg. how strong or aggressive the creatures are.)

The other major problem is that the first four stages have to be played in linear order and are pretty much completely inconsequential. While each stage is longer than the previous one, all four are still quite short (you could probably play through them in an hour or two at most, even if you aren't outright speedrunning the game.)

In other words, Spore is essentially a galactic civilization simulator with some mandatory mini-games slapped at the start. In no way does it live up to the "everything" hype it was marketed as.

Ironically, the developers originally planned for the game to have even more stages than those five (including an "aquatic stage", which I assume would have been between the cell and creature stages, as well as a "city stage" which would have been between the tribal and civilization stages.)

How I think it should have been done instead:

Rather than have a mandatory and strictly linear progression between the stages (which is a horrible idea), start directly at the galactic simulator stage.

In this stage there could be hundreds or even thousands of planets with life forms at different stages (including those that were planned but not implemented). The player could "zoom in" into any of these planets and observe what's happening there and even start playing the current stage on that planet, in order to affect and speed up the advancement of that particular civilization, and create all kinds of different-looking creatures on different planets.

In fact, the player could "seed" a suitable planet by adding single-celled organisms there, which would start the "cell stage" on that planet (which the player could play or just allow to be automatically simulated on its own). If a planet isn't suitable, the player could have one of their existing civilizations terraform it.

The stages themselves should have been developed more, made longer and more engaging and fun, which would entice the player to play them again and again, on different planets.

Moreover, and more importantly: Every stage should have strong effects on the later stages on that particular planet: The choices that the player makes in one stage should shape what comes after. For example, certain choices could make the final civilization very intelligent and peaceful, and very good at trading. Other choices made during the earlier stages could make the civilization very aggressive and prone to conquer other civilizations and go to war with them. And myriads of other choices. (These traits shouldn't be too random and unpredictable: The player should be allowed to make conscious choices about which direction the species develops in. There could be, for example, trait percentages or progress bars, and every player choice would display how much it affects each trait.)
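
As a purely hypothetical sketch of what I mean (nothing here is from the actual game; the trait and choice names are made up just for illustration), the bookkeeping for such a system could be as simple as accumulating trait percentages from each choice, something like this Python snippet:

    # Hypothetical sketch: each player choice nudges a set of civilization traits.
    # Traits are percentages (0-100) that carry over from one stage to the next.
    traits = {"intelligence": 50, "aggression": 50, "trading": 50}

    # Made-up example choices with their trait effects, shown to the player up front.
    choices = {
        "hunt in packs":      {"aggression": +10, "intelligence": +5},
        "domesticate plants": {"trading": +10, "aggression": -5},
    }

    def apply_choice(traits, choice_name):
        """Apply one choice's effects, clamping each trait to the 0-100 range."""
        for trait, delta in choices[choice_name].items():
            traits[trait] = max(0, min(100, traits[trait] + delta))

    # Choices made during the earlier stages shape the final civilization.
    for picked in ["domesticate plants", "hunt in packs"]:
        apply_choice(traits, picked)

    print(traits)   # {'intelligence': 55, 'aggression': 55, 'trading': 60}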

That would actually make the "mini-games" meaningful and impactful: You shape a particular civilization by how you play those mini-games! The choices you make in the earlier stages have an actual impact and strongly shape what the final civilization will be like, what it's good at, and how it behaves. 

Monday, July 14, 2025

Would The Truman Show have been better as a mystery thriller?

The 1998 film The Truman Show is considered one of the best movies ever made (or, if not in the top 100, then at least somewhere in the top quarter of all movies, depending on who you ask).

The film takes the approach where its setting is made clear to the viewers from the very beginning, from the very first shots. In fact, it's outright explained to the viewer in great detail, so there is absolutely no ambiguity or lack of clarity. The viewer knows exactly what the situation is and what's happening, and we are just witnessing Truman himself slowly realizing that something is not as it seems. There isn't much suspense or thrill to the movie because of this, and it's more of a comedy-drama.

The environment in the movie is also quite deliberately exaggerated (because there's no need to hide anything from the viewers, or try to mislead them in any way). The town doesn't even look like a real town, but more like an artificial amusement-park recreation of a "town". And that's, of course, completely on purpose: This isn't even supposed to look like an absolutely realistic place. (Of course Truman himself doesn't know that, which is the core point of the movie.) In other words, the movie deliberately veers more towards the amusing comedy side than towards the realistic side. The viewer's interest is drawn to how Truman himself reacts when he slowly starts suspecting and realizing that everything is not right or normal, and how the actors and show producers try to deal with it.

But one could ask: Would it have made the movie worse, the same, or perhaps even better, if it had been a mystery thriller instead? Or would that have been too cliché of a movie plot?

In other words:

  • The viewer is kept in the dark about the true reality of things, and knows exactly as much as Truman himself does. The true nature of what's happening is only revealed as Truman himself discovers it, with the big reveal only happening at the very end of the movie. Before that, it's very mysterious both to Truman and to the viewer.
  • The town is much more realistic and all the actors behave much more realistically, so as to not raise suspicion in the viewer. Nothing seems "off" or strange at first, and it just looks like your regular small American town with regular people somewhere on some island. It could still be a very bright and sunny happy town, but much more realistically depicted.
  • The hints that something may not be as it seems start much more slowly and are much subtler. (For example, no spotlight falling from the sky at the beginning of the movie. The first signs that something might be off come later and are significantly subtler than that.)
  • For the first two thirds of the movie the situation is kept very ambiguous: Is this a movie depicting a man, ie. Truman, losing his mind and becoming paranoid and crazy, or is something else going on? The movie could have been made so that it's very ambiguous whether the things Truman is finding out are just the result of his paranoia, or something else. The other people in the movie are constantly "gaslighting" both Truman and the viewer in a very plausible and believable way, insisting that he's just imagining things.
  • The reveal in the end could be a complete plot twist. The movie could have been written and made in such a way that even if the viewer started suspecting that Truman isn't actually going crazy and the things he's noticing are actually a sign of something not being as it seems, it's still hard to deduce what the actual situation is, until the reveal at the very end.

Would the movie have been better this way, or would it just have been way too "cliché" of a plot? Who knows. 

Thursday, July 3, 2025

Why I don't watch many speedruns nowadays

About twenty years ago I was a huge fan of watching speedruns. At the time, speedruns of Quake, Doom, Half-Life and Half-Life 2 were some of my all-time favorites. And, in fact, still are (particularly the Quake ones.) Back then I used to watch as many speedruns of as many games as I could, as most of them were really interesting. Before YouTube this was a bit more inconvenient, but especially after YouTube was created and speedrunners and speedrunning sites started uploading there, it was a treat.

Glitch abuse was significantly rarer back then (for the simple reason that speedrunners had yet to discover most of the glitches that are known today), but even then I found it fascinating. I oftentimes read with great interest the technical descriptions of how particular glitches worked and how they were executed.

The one thing I loved most about speedruns, particularly those of certain games, was the sheer awesome skill involved. Quake and Half-Life 2 (back then) were particularly stellar examples, with the runner zooming through levels at impossible speeds and pulling off maneuvers that seemed equally impossible. It was like watching an extremely skilled craftsman perform some complicated task with incredible speed and precision. It was absolutely fascinating.

But then, over the years, slowly but surely things started changing.

The main thing that started changing was that speedrunners became enamored with finding and using glitches to make their runs even faster. In fact, many speedrunners became outright "glitch hunters": They would meticulously research, study and test their favorite speedrunning games in order to see if they could figure out and find glitches that would help in completing the games faster. It became a source of great pride and accomplishment when they could announce yet another glitch that saved time, or a setup to make an existing glitch much easier to perform.

Thus, over the years glitch abuse in speedruns started becoming more and more common.

And the thing is, the domain for glitch hunting started being expanded more and more. Not only were they trying to find glitches that could be exploited from within the gameplay itself, using the in-game mechanics themselves, but they started looking more towards the outside for ways to glitch the game: Rather than keep the glitch abuse restricted purely within the confines of the gameplay proper, they started hunting glitches outside of it: In the game's main menu, the game's save and load mechanics, in the options menu, sometimes even completely outside the game itself (which became particularly common in console game speedrunning, ie. trying to find ways to manipulate the console hardware itself in order to affect the game.)

It was precisely Half-Life 2 speedrunning where I started to grow a dislike for these glitches for the first time. You see, back then speedrunners had found that a particular skip could be performed by quick-saving and quick-loading repeatedly. And not just a few times, but literally hundreds of times! That's right: At one point it became so bad that a Half-Life 2 speedrun could literally spend something like 10 minutes doing nothing but quick-saving and quick-loading hundreds of times in quick succession. (I believe that other techniques have since been found that have obsoleted this particular mechanic. I haven't really checked. Doesn't make much of a difference to my point, though.)

I grew a great distaste for this particular glitch execution because it just stepped so far outside the realm of playing the game itself. It was no longer showing great skill at playing the game, and playing within the confines of the gameplay proper. Instead, it was stepping outside of the gameplay proper and affecting it effectively from the outside (after all, quick-saving and quick-loading are not part of playing the game or advancing towards the end goal; they are a meta-feature that sits outside the gameplay itself.)

I endured this for some years, but at some point I just outright stopped watching Half-Life 2 speedruns. They had become nothing but boring glitch-fests with very little of the original charm and awe left in them anymore. Sure, the runner would still fly through stages at impossible speeds, but this would be marred by boring out-of-game glitch abuse. I just lost interest.

While Half-Life 2 was one of the first games where extremely heavy glitch-hunting and glitch-abuse, particularly of the "abusing non-gameplay meta-features" kind, happened, rather obviously it was not the only one. The habit spread like wildfire among speedrunners of most games.

One particularly bad example made me outright want to puke: In a speedrun of the game The Talos Principle, the runner at one point would go to the main menu, go to the game options, set the framerate limit to 30 frames per second, return to the game, perform a glitch (that could only be done with a low framerate), and afterwards go back to the options menu and set the framerate back to unlimited. This was so utterly far-removed from gameplay proper, and was just so utterly disgusting, that I just stopped watching the speedrun right then and there.

Of course there are myriads and myriads of other examples which, while not as disgusting as that, just make the speedruns boring. For example I was watching a speedrun of Dark Souls 3, and every few minutes, even several times a minute, the speedrunner would quit to the main menu and immediately load back in. Why? Because loading times did not count towards the total time of the speedrun, and doing that quit&resume would move the player to a particular location within the level.

The thing is, those particular locations were usually just a few seconds of running from where the speedrunner would quit&resume, and while quit&resuming took something like 10-15 seconds, that loading time wasn't counted towards the speedrun's time. Thus, while each quit&resume made the run longer in real time, it saved a couple of seconds on the speedrun's official clock. And the speedrunner would do this literally hundreds of times during the run. This was so utterly boring and outright annoying to watch that, once again, I just stopped watching mid-way through.
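
To illustrate the incentive with some made-up numbers (these are just for illustration, not measured from any actual run): if each quit&resume saves a couple of seconds of in-game movement but adds 10-15 seconds of uncounted loading, a run with a couple hundred of them gets shorter on the official clock while getting much longer for the viewer:

    # Hypothetical numbers, purely to illustrate the "loading doesn't count" incentive.
    quit_resumes  = 200    # how many times the runner quits and reloads
    saved_per_use = 2      # seconds of in-game movement saved each time (counted)
    load_per_use  = 12     # seconds of loading the viewer sits through each time (not counted)

    official_clock_saved = quit_resumes * saved_per_use                  # 400 s faster on the official clock
    real_time_added      = quit_resumes * (load_per_use - saved_per_use) # 2000 s longer to actually watch

    print(official_clock_saved, real_time_added)   # 400 2000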

There are literally dozens and dozens of such examples I could write about, but let me add one more, a very recent one: The Legend of Zelda: The Wind Waker speedruns have recently been completely overhauled by a new glitch that has become possible.

Not "discovered". Not "found a new setup that allows doing it more easily." But outright became possible. And what made it possible? The Switch 2, that's what. You see, the Switch 2 runs the game under a new internal emulator which has this curious feature that if the emulated game crashes, the user is allowed to just keep the emulated game running rather than resetting the emulator. And it so happens that one out-of-bounds glitch in Wind Waker causes the game to crash in the original GameCube, but not in the Switch 2, if you opt to allow the game to keep running after the crash. Turns out that after a minute or two the game somehow "recovers" and starts running again... with the playable character being in a completely different room, allowing for big skips.

That's right: This glitch now abuses emulation itself, and it's only possible on the Switch 2, not on the original console. And this is allowed only because the emulator is an official one by Nintendo (such emulator-only glitch abuse is never allowed if using third-party emulators. But apparently it's somehow different if the emulator is an official one.)

I can't decide if this is less or more disgusting than the Talos Principle glitch abuse. They are closely matched.

Overall, this is just the tip of the iceberg: Glitch abuse has become so utterly prevalent in speedrunning, and such a huge portion of it abuses glitches that effectively affect the gameplay "from the outside", via non-gameplay means, that it has pretty much ruined speedrunning for me. With the exception of just a handful of games (such as Quake, thankfully), long gone are the days when speedruns would just run through the game via sheer skill, without resorting to disgusting outside-of-the-game glitch manipulation.

But what about speedruns that only use "within-the-game" glitches and at no point venture out of gameplay proper? In other words, they never use saving and loading, never go to the game's menu, never affect the gameplay proper in any way from "outside" of it? Are those A-OK in my book?

Well, for the longest time I didn't mind those glitches and those speedruns. After all, they were effectively "legit" in my book, by my own standards. What's there to complain about?

Yet, in later years I have grown tired of those too. The more the speedrun glitches the game, even if it happens fully from within gameplay proper, the more boring I tend to find it. Out-of-bounds glitches in particular I find boring, particularly those that skip huge chunks of levels. Many of them just bypass what made speedruns such great entertainment in the first place: Seeing an extremely skillful player beat the game with astonishing precision and speed.

I like to use an analogy for this: Suppose you are going to watch a top-tier professional sports event, like a basketball match: You go there in order to witness the absolute best players in the world show their utter skill at playing the sport. You are expecting 2 hours of sheer excitement and wonder. However, suppose that one of the teams finds an obscure loophole in the rules of the game that allows them to effectively cheat a victory for themselves without even playing the game, the other team having no recourse: The first team just declares victory at the start of the match by abusing the loophole, the match ends, and that's it. It's over, everybody go home.

Well, that would be an utter disappointment, and utterly boring. You didn't go there to watch a team abuse a rulebook loophole in order to snatch a quick technical victory without even playing the game. You went there to watch a game! The spectators would be outraged! You would certainly demand your money back!

Well, for me most speedruns that skip major parts of the game are the same: I don't find them interesting in any way. They skip what I wanted to watch the speedrun for in the first place! They skip the most entertaining part! I didn't "sign up" to watch a player skip the entire game: I did it so that I could see an extremely skilled player play the game, not skip it.

Unfortunately, skips, out-of-bounds glitches and other ways to bypass gameplay proper have become ubiquitous in speedrunning (especially when it comes to 3D games), and only a few games have been spared.

That is why I don't really watch much speedrunning anymore. It's just boring. I'm not interested in glitchfests anymore. I want to see someone play the game skillfully; I'm not interested in seeing someone break the game and skip the majority of it.

Sunday, June 29, 2025

North Korea is the weirdest country in the world, part 2

Continuing my previous blog post, here I'll deal with the absolutely worst dark side of North Korea: The concentration camps.

While the amount of information about the North Korean concentration camps is extremely limited, what we do know is very likely at least close to the truth. This information comes from several sources, including satellite imagery, radio and other surveillance, and the testimony of the few defectors who succeeded in escaping these camps and the country. It is, of course, not 100% certain that all of it is completely accurate, but the eyewitness testimony of defectors can largely be corroborated by satellite imagery, so the overall picture is most probably close to correct.

The North Korean government is so utterly totalitarian, controlling and paranoid, that any dissent, no matter how minor, could land you in a concentration camp. And not only you, but your entire family with you, just as punishment. (This, of course, is designed to act as an even bigger deterrent: If you misbehave it will not only be you who will be sent to the gulag: It will be your wife, your parents, your children, and probably even your siblings.) 

What makes the North Korean concentration camps special is how utterly unique they are: While concentration camps have existed for almost as long as humanity itself, the North Korean ones are unlike anything else. It's probable that never before in the entire history of humanity have there been concentration camps quite like them; some might have come somewhat close, but not the same.

There are several things that make these concentration camps unique in the history of humanity:

1) Their sheer size. These are not just some encampments a few city blocks in size, or the size of a small industrial area, like most of the camps from history. These concentration camps are absolutely enormous! The size of a big city! The fence surrounding these camps (which has been repeatedly confirmed with satellite imagery) not only encloses the living quarters and buildings, but also large forested areas. They are usually the size of an entire town plus a good chunk of surrounding forest.

2) The infrastructure within these concentration camps. When one thinks of "concentration camp", the image of rows and rows of barracks immediately comes to mind, like the prison camps of World War II, with perhaps some factories and other buildings at one side.

However, that's not what these North Korean concentration camps contain (once again corroborated by satellite imagery). Instead, they are often built like enclosed small cities in themselves: They often have a central plaza (with, rather obviously, statues or paintings of the two Dear Leaders), a central promenade, and buildings that somewhat resemble a town or small city, with pretty normal-looking roads, surrounded by forested areas, agricultural land, and of course factories farther out. At a quick glance one would easily confuse them with normal North Korean towns. Only the fact that they are completely surrounded (at quite some distance) by a fence and dozens and dozens of guard towers, with a couple of very clearly guarded entrance gates, gives away that they aren't normal towns.

3) Many inhabitants were born inside the concentration camps, and have never left in their entire lives. Moreover, they know pretty much nothing of the outside world.

Where it becomes absolutely dystopian, and the stuff of dark sci-fi, is that the few defectors who have successfully escaped report that not only are the inhabitants kept in a complete information blackout, but moreover they are told that there is no use in even trying to escape because the entirety of the outside world is a completely uninhabited wasteland, a toxic desert where they would die in a few hours, a few days at most. They are told that the fence surrounding the area is actually there to keep the outside dangers out, and that it's way too dangerous for them to venture out. That the town where they live is the only safe place in the world, and the only place where people still exist.

4) And, rather obviously, these are forced labor camps, where everyone from about age 5 up is forced to work for 10 to 12 hours every day. (The propaganda being, obviously, that the work is necessary for everybody's survival, that everybody has to do their part, and that anybody who is lazy and doesn't participate will be harshly punished because the survival of everybody depends on it.)

There have been myriads of concentration camps during the history of humanity, but nothing compares to this. Some might come close, but not quite. The sheer physical size, the infrastructure, the buildings, the multi-generational inhabitants, the absolutely insane propaganda fed to the inhabitants, it's just astonishing. It's literally like Shyamalan's The Village meets dark dystopian sci-fi. (There was an episode of a sci-fi TV series, I think it was Stargate, that depicted a concentration camp similar to this, in other words, the inhabitants were multi-generational and had been fed the lie that the entirety of the rest of the world was a dangerous polluted uninhabitable wasteland and thus it was too dangerous to venture outside. As far as I know, this plot was heavily inspired by North Korean concentration camps.)