Wednesday, September 10, 2025

Why ChatGPT can be so addictive

If a mere 10 years ago you had told me, or pretty much anybody even slightly knowledgeable about technology and computers, that in less than 10 years we would get AIs that can have perfect conversations in perfectly fluent English (and even multiple other languages), understanding everything you say and ask and responding accordingly, in the exact same way as any human would, and even better, I would have laughed at you. If you had added that they could write perfect essays on any topic whatsoever, no matter if it's a highly technical scientific subject, or psychology, or pop culture, or anything else, and could even compose very good and reasonable poetry, song lyrics, short stories, and other similar creative output, most other tech-savvy people would have laughed as well.

Who would have guessed a mere 10 years ago that almost perfectly fluent conversational AI, writing perfect fluent English on any topic you could imagine, both in terms of grammar and the substance of the content, was less than a decade in the future?

But here we are, with exactly that.

And the thing is, these AIs, such as ChatGPT, can be incredibly addictive. But why?

The addictiveness does not come from the AI being helpful, eg. when asking it for some information or how to do something, or where to find something. That just makes it a glorified googling tool, telling you much faster and much more easily the answer to the thing you are looking for.

That's very useful, but it's not what makes it addictive. The addictiveness comes from just having casual conversations with the AI, without any actual goal or purpose. But again, why?

There are several reasons:

1) You can have a completely fluent casual conversation with the AI like you could with any intelligent conversational friend. Whatever you want to talk about, the AI can respond to it intelligently and on topic.

2) The responses are usually very interesting. Sometimes it will tell you things you already knew, but probably in more detail than a normal person would. Oftentimes it may tell you things you didn't know, and which might be interesting tidbits of information.

3) The AI is like a really smart person who knows about every possible topic and is always happy to talk about it. Be it astrophysics, or mathematics, or computer science, or programming, or physics, or psychology, or sociology, or pop culture, or history, or a classic movie or song, or the culinary arts, or even just some random topic about some random subject, the AI pretty much always knows about the topic and can give you answers and information about it. There's no topic to which it will just answer "I don't really know much about that." (There are some inappropriate topics that AIs have been explicitly programmed to refuse to answer, or the running system might even outright stop them from answering, giving you an error message, but that's understandable.)

4) Moreover, it's able to adjust its level of conversation to your own knowledge and capabilities. It won't start throwing advanced mathematical formulas at you if your question is at the level of casual conversation, but will show you advanced technical stuff if you ask it to and show that you understand the topic.

5) And here's where it starts getting so addictive: The AI is always available, is always happy to talk with you, never gets tired, is never "not in the mood". It doesn't need to sleep, it doesn't need to take breaks, it doesn't get tired (physically or mentally), it never gets the feeling that it doesn't want to talk at that moment. It never tells you "not now."

6) Likewise it never gets exasperated, it never gets tired of you asking stuff, it never considers your questions stupid. It doesn't matter if your questions and commentary are at the PhD level or at the kindergarten level, it's always happy to have a conversation and will always do so in a positive tone, without ever becoming patronizing.

7) And in a similar vein, the AI never gets offended, never considers any question "too stupid", and never gets tired of speaking to you. You could continue on the same topic, repeating similar questions again and again, and it will never get tired of answering, always politely. You could even have an argument with it, where you stubbornly refuse to accept what it's saying, again and again, for hours on end, and the AI is literally incapable of becoming frustrated, tired or offended by your responses. It will never respond in kind, will always retain a polite tone, and will always be happy to keep answering your questions and objections, no matter what.

8) Most AIs, such as ChatGPT, are also programmed to be a bit of an agreeable "yes man": If you express a personal opinion on a subjective topic, it will tend to support your opinion and tell you what its strengths are. It will very rarely outright start arguing with you and tell you that your opinion is wrong (unless it's something that's clearly wrong, eg. very blatantly unscientific; but even in those cases it might show some understanding of where you are coming from, if you express your opinion reasonably enough).

9) But, of course, if you explicitly ask it to present both sides of an issue, it will write a mini-essay doing exactly that, giving supporting arguments for each side.

But it's precisely that aspect of it being a tireless and agreeable "person", always willing, and knowledgeable enough, to have a conversation about pretty much any topic, that can make it so addictive. Friends are not always available, friends don't necessarily know every topic, friends are not always agreeable, friends can get offended or tired, but ChatGPT is incapable of any of that. It's always there, ready to have a conversation.

That can be incredibly addictive. 

Sunday, August 31, 2025

HDR is a pain in the ass

Almost thirty years ago the PNG image format was developed as a significantly better and more modern alternative for lossless image storage than any of the existing ones (particularly GIF). It did many things right, and it combined into one format all the various image features that almost none of the other existing formats supported simultaneously (eg. GIF only supports 256 colors, which is horrendous, and only single-color transparency. TIFF and several other formats support almost all image features but have extremely poor compression. And so on.)

However, there was one thing that the PNG format tried to do "right" but which ended up causing a ton of problems and became a huge pain in the ass for years and years to come, particularly when support for the format became widespread, including by web browsers. And that feature was support for a gamma correction setting.

Without going into the details of what gamma correction is (as this can be easily found online), it's an absolute swamp of complications, and with the standardization and widespread support for PNG it became a nightmare for at least a decade.

In the vast, vast majority of cases, particularly when using image files in web pages, people just want unmodified pixels: Black is black, white is white, 50% gray is 50% gray, and everything in between. Period. If a pixel has RGB values (10, 100, 200), then they want those values used as-is (eg. in a web page), not modified in any way. In particular, if you specify a background or text color of RGB (10, 100, 200) in your web page, you definitely want an image containing that same value to visually match it exactly.

When PNG became widely supported and popular in web pages, its gamma specification caused a lot of problems. That's because when a gamma value is specified in the file format, a conforming viewer (such as a web browser) will change those pixel values accordingly, thus making them look different. And the problem is that not only do different systems use different gamma values (most famously Windows and macOS used, maybe even still use, different values for gamma), but support for gamma correction also varied among browsers, some of them supporting it, others not.

"What's the problem? PNG supports a 'no gamma' setting. Just leave the gamma setting out. Problem solved." Except that the PNG standard, at least back then, specifically said that if gamma wasn't specified in the file, for the viewing software to assume a particular gamma (I think it was 2.2). This, too, caused a lot of problems because some web browser were standard-conformant in this regard, while others didn't apply any default gamma value at all. This meant that even if you left the gamma setting out of the PNG file, the image would still look different in different browsers.

This was a nightmare because many web pages assumed that images would be shown unmodified and thus that colors in the image would match those used elsewhere in the page.

I think that in the years since, the situation has stabilized somewhat, but it can still rear its ugly head.

I feel that HDR currently is similar to the gamma issue in PNG files in that it, too, causes a lot of problems and is a real pain in the ass to deal with.

If you buy a "4k HDR" BluRay, most (if not all) of them assume that you have an HDR-capable BluRay player and television display. In other words, the BluRay will only contain an HDR version of the video. Most of them will not have a regular non-HDR version.

What happens if your TV does not support HDR (or the support is extremely poor), and your BluRay player does not support "un-HDR'ing" the video content? What happens is that the video will be much darker, with very wrong color tones, and will look completely wrong.

This is the exact situation, at least currently, with the PlayStation 5: It can act as a BluRay player, and supports 4k HDR BluRay discs, but (at least as of writing this) it does not have any support for converting HDR video material to non-HDR video (eg. by clamping the ultra-bright pixels to the maximum non-HDR brightness) and will just send the HDR material to the TV as-is. If your TV does not support HDR (or it has been turned off because it makes the picture look like ass), the video will look horrendous, with completely wrong color tones, and much darker than it should be.

(It's a complete pain in the ass. If I want to watch such a BluRay disc, I need to go and turn HDR support on, after which the video will look acceptable, even though my TV has very poor-quality HDR support. But at least the color tones will be ok. Afterwards I need to go back and turn HDR off to make games look ok once again. This is such a nuisance that I have stopped buying 4k HDR BluRays completely, after the first three.)
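Just to illustrate the kind of "clamping" conversion mentioned above (this is only a crude sketch; real HDR-to-SDR tone mapping is considerably more involved, and the nit values here are common reference points rather than anything the PS5 actually uses):

    # Crude illustration of clamping HDR brightness down to the SDR range.
    SDR_PEAK_NITS = 100.0    # typical SDR reference white

    def clamp_to_sdr(luminance_nits):
        """Clamp an HDR luminance value to the SDR range, scaled to 0..1."""
        return min(luminance_nits, SDR_PEAK_NITS) / SDR_PEAK_NITS

    # Anything at or below 100 nits passes through on its original scale,
    # while 400-nit and 1000-nit highlights both get squashed to SDR white.
    for nits in (50, 100, 400, 1000):
        print(nits, "->", round(clamp_to_sdr(nits), 2))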

More recently, I got another instance of the pain-in-the-ass that's HDR, and it happened with the recently released Nintendo Switch 2.

Said console supports HDR. As it turns out, if your TV supports HDR, the console will turn HDR on, and there's no option to turn it off, anywhere. (There's one setting that claims to turn it off for games, but it does nothing.)

I didn't realize this and wondered for some weeks why the picture looked like ass. It's a bit too bright, everything is a bit too washed-out, with the brightest pixels just looking... I don't know... glitched somehow. The thing is that I had several points of direct comparison: the console's own display, as well as the original Switch. On those, the picture looks just fine. However, on my TV the picture of the Switch 2 looks too washed-out, too low-contrast, too bright. And this even though I had adjusted the HDR brightness in the console's settings.

One day this was bothering me so much that I started browsing the TV's own menus, until I found, buried deep within layers of settings, one that turned HDR support completely off for that particular HDMI input.

And what do you know, the picture from the Switch 2 immediately looked perfect! Rich colors, rich saturation, good contrast, just like on the console's own screen.

The most annoying part of all of this is that, as mentioned, it's literally not possible to reach this state using the console's own system settings. If your TV tells it that it supports HDR, it will use HDR, and there's nothing you can do in the console itself to avoid that. You have to literally turn HDR support off in the settings of your TV to make the console stop using it.

The PlayStation 5 does have a system setting that turns HDR completely off from the console itself. The Switch 2 does not have such a setting. It's really annoying.

The entire HDR thing is a real pain in the ass. So far it has only caused problems without any benefits, at least to me. 

Thursday, August 14, 2025

The complex nature of video game ports/rereleases/remasters

Sometimes video game developers/publishers will take a very popular game of theirs that was released many years prior, and re-release it, often for a next-gen platform (especially if talking about consoles).

Sometimes the game will be completely identical to the original, simply ported to the newer hardware. This can be particularly relevant if a newer version of a console isn't compatible with the previous versions, allowing people who own the newer console but have never owned the older one to experience the game. (On the PC side this can be the case with very old games that are difficult if not impossible to run in modern Windows, at least without emulation, and thus probably not using a copy that has been legally purchased.)

Other times the developers will also take the opportunity to enhance the game in some manner, improving the graphics and framerate, perhaps remaking the menus, and perhaps polishing some details (such as the controls).

Sometimes these re-releases can be absolutely awesome. Other times not so much, and they feel more like cheap cash grabs.

Ironically, there's at least one game that's actually an example of both: The 2013 game The Last of Us.

The game was originally released for the PlayStation 3 in June of 2013, and was an exclusive for that console. Only owners of that particular console could play it.

This was, perhaps, a bit poorly timed because it was just a few months before the release of the PlayStation 4 (which happened in November of that same year).

However, the developers announced that an enhanced PlayStation 4 version would be made as well, and it was published in July of 2014, under the name "The Last of Us Remastered".

Rather than just going the lazy way of releasing the exact same game for both platforms, the PlayStation 4 version was indeed remastered with a higher resolution, better graphics, and higher framerate, and it arguably looked really good on that console.

From the players' point of view this was fantastic: Even people who never owned a PS3 but did buy the PS4 could experience the highly acclaimed game, rather than it being relegated to being an exclusive on an old console (which is way too common). This is, arguably, one of the best re-releases/remasters ever made, not just in terms of the improvements but more importantly in terms of allowing gamers who otherwise wouldn't have experienced the game to do so.

Well, quite ironically, the developers later decided to make the same game also one of the worst examples of useless or even predatory "re-releases". From one of the most fantastic examples, to one of the worst.

How? By re-releasing a somewhat "enhanced" version exclusively for the PlayStation 5 and Windows in 2022, with the name "The Last of Us Part I". The exact same game, again, with somewhat enhanced graphics for the next generation of consoles and PC.

Ok, but what makes that "one of the worst" examples of bad re-releases? The fact that it was sold at the full price of US$70, even for those who already own the PS4 version.

Mind you: "The Last of Us Remastered" for the PS4 is still perfectly playable on the PS5. It's not like PS5 owners who don't own the PS4 cannot play and thus experience it.

It was not published as some kind of "upgrade pack" for $10, as is somewhat common. It was released as its own separate game at full price, on a platform that's still completely capable of running the PS4 version. And this was, in fact, a common criticism among reviewers (both journalists and players).

Of course this is not even the worst example, just one of the worst. There are other games that could be argued to be even worse, such as "Until Dawn", originally for the PS4, later re-released for the PS5 with somewhat enhanced graphics, at full price, while, once again, the original is still completely playable on the PS5.

Wednesday, August 6, 2025

"Dollars" vs "cents" notation confusion in America

There's a rather infamous recorded phone call, from maybe 20 years or so ago, where a Verizon customer calls customer support to complain that their marketing material advertised a certain cellphone internet connectivity plan as costing ".002 cents per kilobyte", but he was charged 0.002 dollars (ie. 0.2 cents) per kilobyte.

It's quite clear that the ad meant to say "$0.002 per kilobyte", but whoever had written the ad had instead written ".002c per kilobyte" (or ".002 cents per kilobyte", I'm not sure as I have not seen the ad). (It's also evident from the context that the caller knew this but wanted to deliberately challenge Verizon for their mistake in the ad, as false advertising is potentially illegal.)

I got reminded of this when I recently watched a video by someone who, among other things, explained how much money one can get in ad revenue from YouTube videos. He explains that his best-earning long-form video has earned him "6.33 dollars per thousand views", while his best-earning shorts video has earned him "about 20 cents per thousand views". Crucially, while saying this he is writing these numbers, and what does he write? This:


In other words, he says "twenty cents", but rather than write "$0.20" or, alternatively, "20 c", he writes "0.20 c".

Obviously anybody who understands the basics of arithmetic knows that "0.20 c" is not "20 cents". After all, you can literally read what it says: "zero point two zero cents", which rather obviously is not the same thing as "twenty cents". It should be obvious to anybody that "0.20 c" is a fraction of a cent, not twenty entire cents (in particular, it's one fifth of a cent). The correct notation would be "$0.20", ie. a fraction of a dollar (one fifth).
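As a quick sanity check of the arithmetic (just unit conversion, with everything expressed in dollars; the variable names are mine):

    # Everything expressed in dollars so the amounts can be compared directly.
    CENT = 0.01  # one cent, in dollars

    twenty_cents    = 20 * CENT      # "$0.20"             -> 0.20 dollars
    point_two_cents = 0.20 * CENT    # "0.20 c"            -> 0.002 dollars (1/5 of a cent)
    advertised_rate = 0.002 * CENT   # ".002 cents per KB" -> 0.00002 dollars per KB
    charged_rate    = 0.002          # "$0.002 per KB"     -> 0.2 cents per KB

    print(round(twenty_cents / point_two_cents))  # 100 -- off by a factor of one hundred
    print(round(charged_rate / advertised_rate))  # 100 -- the same factor-of-100 error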

This confusion seems surprisingly common in the United States in particular, even among people who are otherwise quite smart and should know better. But what causes this?

Lack of education, sure, but what exactly makes them believe this? Why do they believe this rather peculiar thing?

I think that we can get a hint from that phone call to Verizon. During that phone call the customer support person, when explicitly asked, very explicitly and clearly stated that ".002 cents" and ".002 dollars" mean the same thing. When a manager later took over the call, he said the exact same thing.

Part of this confusion seems to indeed be the belief that, for example, "20 cents", "0.20 cents" and "0.20 dollars" all mean the same thing. What I believe is happening is that these people, for some reason, think that these are some kind of alternative notations to express the same thing. They might not be able to explain why there are so many notations to express the same thing, but I would imagine that if asked they would guess that it's just a custom, a tradition, or something like that. After all, there are many other quantities that can be expressed in different ways, yet mean the same thing.

It lends credibility to this hypothesis that, in that same phone call to Verizon, the customer support person repeatedly says that the plan costs "point zero zero two per kilobyte", without mentioning the unit. Every time she says that, the customer explicitly asks "point zero zero two what?" and she clearly hesitates, and then says "cents". Which, of course, is the wrong answer, as it should be "dollars". But she doesn't seem to understand the difference.

What I believe happened there (and is happening with most Americans who have this same confusion) is that they indeed believe that something like "0.002", or ".002", in the context of money, is just a notation for "cents", all by itself. That if you want to write an amount of "cents", you use a dot and then the cents amount. Like, for example, if you wanted to write "20 cents", you would write a dot (perhaps preceded by a zero) and then the "20", thus "0.20" all in itself meaning "20 cents". And if you wanted to clarify that it indeed is cents, you just add the "¢" at the end.

They seem to have a fundamental misunderstanding of what the decimal point notation means and signifies, and appear to believe that it's just a special notation to indicate cents (and, thus, that "20 cents" and "0.20 cents" are just two alternative ways to write the same thing.)

Of course the critics are right that this ultimately stems from a lack of education: The education system has not taught people the decimal system, and how to use it, well enough. Most Americans have learned it properly, but then there are those who have fallen through the cracks and never got a proper grounding in the decimal system and arithmetic in general.

Sunday, August 3, 2025

How the Voldemort vs. Harry final fight should have actually been depicted in the movie

The movie adaptation of the final book in the Harry Potter series, Deathly Hallows: Part 2, makes the final fight between Harry and Voldemort flashy but confusing, leaving the viewers completely unclear about what exactly is happening and why, and does not convey at all the lore in the source material.

How the end to the final fight is depicted in the movie is as follows:

1) Voldemort and Harry cast some unspecified spells at each other, ending up pretty much in a stalemate.


2) Meanwhile elsewhere Neville kills Nagini, which is the last of Voldemort's horcruxes.


3) Voldemort appears to be greatly weakened by this, so much so that his spell just fizzles out, at the same time as Harry's.

 

4) Voldemort is shown as greatly weakened, but he still casts another unspecified spell, and Harry responds with also an unspecified spell.


5) However, Voldemort's spell quickly fades out, and he looks completely powerless, looking at his Elder Wand with a puzzled or perhaps defeated look, maybe not understanding why it's not working, maybe realizing that it has abandoned him, or maybe just horrified at having just lost all of his powers. Harry's spell also fizzles out; it doesn't touch Voldemort.

6) Harry takes the opportunity to cast a new spell. He doesn't say anything but from its effect it's clear it's an expelliarmus, the disarming spell. 

 7) Voldemort gets disarmed and he looks completely powerless. The Elder Wand flies to Harry.

8) Voldemort starts disintegrating.


So, from what is depicted in the movie, it looks like Neville destroying Nagini, Voldemort's last horcrux, completely sapped him of all power, and despite making a last but very feeble effort, he gets easily disarmed by Harry and then just disintegrates, all of his power and life force having been destroyed.

In other words, it was, in fact, Neville who killed Voldemort (even if a bit indirectly) by destroying his last source of power, and Harry did nothing but just disarm him right before he disintegrated.

However, that's not at all what happened in the books.

What actually happened in the books is that, while Neville did kill Nagini, making Voldemort completely mortal, that's not what destroyed him. What destroyed him was that he cast the killing curse at Harry, who in turn immediately cast the disarming spell, and because the Elder Wand refused to destroy its own master (who via a contrived set of circumstances happened to be Harry Potter), Voldemort's killing curse rebounded off Harry's spell and hit Voldemort himself, killing him.

In other words, Voldemort destroyed himself with his own killing curse spell, by having it reflected back, because the Elder Wand refused to kill Harry (its master at that point).

This isn't conveyed at all in the movie.

One way this could have been depicted better and more clearly in the movie would be, for example:

When Neville destroys Nagini, Voldemort (who isn't at that very moment casting anything) looks shocked and distraught for a few seconds, then his shock turns into anger and extreme rage, and he casts the killing curse at Harry, saying it out loud (for dramatic effect the movie could show this in slow motion or in another similar manner), and Harry immediately responds with the disarming spell (also spelling it out explicitly, to make it clear which spell he is casting.)

Maybe after a second or two of the two spell effects colliding with each other, the movie clearly depicts Voldemort's spell rebounding and reflecting from Harry's spell, going back to Voldemort and very visibly hitting him. Voldemort looks at the Elder Wand in dismay, then at Harry, then his expression changes to shock when he realizes and understands, at least at some level, what just happened. He looks again at his wand and shows an expression of despair and rage, but now Harry's new disarming spell knocks it out of his hand, and he starts disintegrating.

Later, in the movie's epilogue, perhaps Harry himself could give a brief explanation of what happened: That the Elder Wand refused to kill its own master, Harry himself, and that Voldemort's killing curse rebounded, killing its caster.

Thursday, July 31, 2025

Matt Parker (inadvertently) proves why algorithmic optimization is important

Many programmers in various fields, oftentimes even quite experienced programmers, have this notion and attitude that optimization is not really all that crucial in most situations. So what if a program takes, say, 2 seconds to run when it could run in 1 second? In 99.99% of cases that doesn't matter. The important thing is that it works and does what it's supposed to do.

Many will often quote Donald Knuth, who in a 1974 article wrote "premature optimization is the root of all evil" (completely misunderstanding what he actually meant), and interpret that as meaning that one should actually avoid program optimization like the plague, as if it were somehow some kind of disastrous detrimental practice (not that most of them could ever explain why. It just is, because.)

Some will also reference some (in)famous cases of absolutely horrendous code in very successful programs and games, the most famous of these cases probably being the video game Papers, Please by Lucas Pope, whose source code is apparently so horrendous that it would make any professional programmer puke. Yet, the game is enormously popular and goes to show (at least according to these people) that the actual quality of the code doesn't matter, what matters is that it works. Who cares if the source code looks like absolute garbage and the game might take 5% of resources when it could take 2%? If it works, don't fix it! The game is great regardless.

Well, I would like to present a counter-argument, and the counter-example comes from the youtuber and maths popularizer Matt Parker, although inadvertently so.

For one of his videos, related to the game Wordle (where you have to guess a 5-letter word by entering guesses, with the game marking correct letters with colors), he wanted to find out whether there are any groups of five 5-letter words that use all unique letters, in other words 25 different letters in total.

To do this he wrote some absolutely horrendous Python code that read a comprehensive word list file and tried to find a combination of five such words.

It took his program an entire month to run!

Any experienced programmer, especially those who have experience with such algorithms, should have alarm bells ringing loudly at this point, as this sounds like something that could be quite trivially done in a few seconds, probably even in under a second. After all, the problem is extremely restricted in its constraints, which are laughably small: just five-letter words (every other word can be discarded), of which even the largest English dictionaries contain only a few thousand; only words consisting of five unique letters (which restricts the list to a small fraction, as all the other 5-letter words can be discarded from the search); and then finding combinations of five such words that share no common letters.

And, indeed, the actual programmers among his viewers immediately took the challenge and quite quickly wrote programs in various languages that solved the problem in a fraction of a second.
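For the curious, here's a minimal sketch of the general idea those faster solutions tend to build on (this is not Matt Parker's code, and "words.txt" is just a placeholder path for a word list with one word per line): represent each word as a 26-bit mask of its letters, collapse anagrams into one entry, and only extend partial groups whose masks don't overlap.

    # Minimal sketch of the bitmask idea; not tuned for speed.

    def load_masks(path):
        """Map each 26-bit letter mask to one representative 5-letter word."""
        masks = {}
        with open(path) as f:
            for word in (line.strip().lower() for line in f):
                if len(word) != 5 or not word.isalpha():
                    continue
                mask = 0
                for ch in word:
                    mask |= 1 << (ord(ch) - ord("a"))
                if bin(mask).count("1") == 5:      # all five letters distinct
                    masks.setdefault(mask, word)   # anagrams share one mask
        return masks

    def find_groups(masks):
        """Yield 5-tuples of words covering 25 distinct letters in total."""
        items = sorted(masks.items())

        def extend(start, used, chosen):
            if len(chosen) == 5:
                yield tuple(chosen)
                return
            for i in range(start, len(items)):
                mask, word = items[i]
                if mask & used:
                    continue                       # shares a letter; prune
                yield from extend(i + 1, used | mask, chosen + [word])

        yield from extend(0, 0, [])

    if __name__ == "__main__":
        for group in find_groups(load_masks("words.txt")):
            print(group)

This sketch only does the obvious pruning; the much faster community solutions add further tricks (smarter pruning, precomputation, compiled languages) to get the runtime down to well under a second.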

So yes, indeed: His "I don't really care about optimization as long as it does what it's supposed to do" solution took one entire month to calculate something that could be solved in a fraction of a second. Even with an extremely sloppily written program using a slow language it should only take a few seconds.

This is not a question of "who cares if the program takes 10 seconds to calculate something that could be calculated in 5 seconds?"

This is a question of "calculating in one month something that could be calculated in less than a second."

Would you be willing to wait for one entire month for something that could be calculated in less than a second? Would you be using the "as long as it works" attitude in this case?

It's the perfect example of why algorithmic optimization can actually matter, even in unimportant personal hobby projects. It's the perfect example of something where "wasting" even a couple of days thinking about and optimizing the code would have saved him a month of waiting. It would have been actually useful in practice.

(And this is, in fact, what Donald Knuth meant in his paper. His unfortunate wording is being constantly misconstrued, especially since it's constantly being quote-mined and taken out of its context.)

Thursday, July 24, 2025

"n% faster/slower" is misleading

Suppose you wanted to promote an upgrade from an RTX 3080 card to the RTX 5080. To do this you could say:

"According to the PassMark G3D score, the RTX 5080 is 44% faster than the RTX 3080."

However, suppose that instead you wanted to disincentivize the upgrade. In that case you could say:

"According to the PassMark G3D score, the RTX 3080 is only 30.6% slower than the RTX 5080."

Well, that can't be right, can it? At least one of those numbers must be incorrect, right?

Except that both sentences are correct and accurate!

And therein lies the ambiguity and confusion between "n% faster" and "n% slower": the problem is in the direction of the comparison, in other words, in which direction we are calculating the ratio between the two scores.

The RTX 3080 has a G3D score of 25130.

The RTX 5080 has a G3D score of 36217.

If we are comparing how much faster the latter is to the former, in other words, how much larger the latter score is than the former score, we do it like:

36217 / 25130 = 1.44118  →  44.1 % more (than 1.0)

However, if we are comparing how much slower the former is than the latter, we would do it like:

25130 / 36217 = 0.693873  →  30.6 % less (than 1.0)

So both statements are actually correct, even though they show completely different percentages.
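Both percentages fall straight out of those two ratios; a quick check using the scores quoted above:

    score_3080 = 25130
    score_5080 = 36217

    faster = (score_5080 / score_3080 - 1) * 100   # "the 5080 is n% faster"
    slower = (1 - score_3080 / score_5080) * 100   # "the 3080 is n% slower"

    print(f"{faster:.1f}% faster")   # 44.1% faster
    print(f"{slower:.1f}% slower")   # 30.6% slower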

The fundamental problem is that this kind of comparison mixes ratios with subtractions, which leads to asymmetric results depending on which direction the comparison is made in. When only one of the two percentages is presented (as is most usual), this can skew the perception of how large the performance improvement actually is.

A more unambiguous and accurate comparison would be to simply give the factor. In other words:

"According to the G3D score, the speed factor between the two cards is 1.44."

However, this is a bit confusing and not very practical (and could also be incorrectly used in comparisons), so an even better comparison between the two would be to just use example frame rates. For example:

"A game that runs at 60 FPS on the RTX 3080 will run at about 86 FPS on the RTX 5080."

This doesn't suffer from the problem of which way you are doing the comparison because the numbers don't change if you do the comparison in the other direction:

"A game that runs at 86 FPS on the RTX 5080 will run at about 60 FPS on the RTX 3080."