Sunday, December 14, 2025

Programmers are strangely dogmatic about cryptic variable names

As a very long-time computer programmer, I have noticed an interesting psychological phenomenon: For some reason beginner programmers absolutely love it when they can express a lot of functionality with as little code as possible. They also tend to love mainstream programming languages that allow them to do so.

As an example, if they are learning C, and they encounter the fact that you can implement the functionality of strcpy() with a very short one-liner (directly with C code, without using any standard library function), a one-liner that's quite cryptic compared to how it's done in an average programming language, they just love that stuff and get enamored with it.
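For reference, the kind of one-liner in question is presumably something along these lines (the function name here is just for illustration):

void my_strcpy(char *destination, const char *source)
{
    while ((*destination++ = *source++)) ;  /* copies every character, including the terminating '\0' */
}

It's short and clever, but to anyone not already familiar with the idiom it's far from obvious what it actually does.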

This love of brevity in programming quite quickly becomes a habit for the vast majority of programmers, and most of them never "unlearn" said habit. One of the most common and ubiquitous places where programmers apply this love of brevity is variable names. (Oftentimes they will do the same with other names in the program as well, such as function and class names, but variable names are the most common target for this, even when they keep those other names relatively clear.)

It becomes an instinct that's actually hard to get rid of (and I'm speaking from personal experience): That strong instinct of using single-letter variables, abbreviations and acronyms, no matter how cryptic and unreadable they may be to the casual reader. (Indeed, the most common defense of those names is "I can understand them perfectly well", without acknowledging that they may not be so clear to others reading the code. Or even to the author themselves five years down the line.)

Thus, they will use variable names like, for example, i instead of index, pn instead of player_name, ret instead of return_value, col instead of column (or even better, column_index), col instead of color, rc instead of reference_count, and so on and so forth. After all, why go through the trouble of writing "return_value" when you can just write "ret"? It's so much shorter and more convenient!

But the thing is, the more the code is littered with cryptic short variable names, the harder it becomes for someone reading (and trying to understand) the code to do so. I have gained an enormous amount of experience with this, as I have in the past had to write an absolutely humongous number of unit tests for a huge existing library. The thing about writing unit tests for existing code is that you really, really need to understand what the code is doing in order to write meaningful unit tests for it (especially when you are aiming for 100% code coverage).

And, thus, I have seen a huge amount of code that I have had to fully understand (in order to write unit tests for), and I have seen in intricate detail the vast difference in readability and understandability between code that uses cryptic variable and function names vs. code that uses clear readable names. Unsurprisingly, the latter helps quite a lot.

In fact, this is not some kind of niche and novel concept. The coding guidelines of many huge companies, like Google, Microsoft, Facebook and so on, have sections delineating precisely this. In other words, they strongly recommend using full English words in variable and function names rather than abbreviations and cryptic acronyms. One common principle relating to the latter is: "If the acronym does not have an article on Wikipedia, just write it out in full."

One particular situation where I have noticed how much clear variable naming helps is loop variables. Loop variables are the one thing that programmers abbreviate the most, and the most often. Sometimes they go to outright unhealthy lengths to use loop variables that are as short as possible, preferably single-letter, even if that means using meaningless cryptic names like i, j and k.

I have, myself, noticed the importance of, and gotten into the habit of, naming loop variables after their use, ie. what they represent and are being used for. For example, let's say you are iterating through an array of names using a loop, with the loop variable indexing said array. Thus, I will name said variable eg. name_index rather than just i (which is way too common.) If the loop variable is counting something, I will usually name it something_count, or similar, rather than just i or n.

The longer the body of the loop is, and especially if there are several nested loops, the more important it becomes to name the loop variables clearly. It helps immensely in keeping track of and understanding the code when the loop variables directly name what they represent, especially alongside naming everything else clearly. For example, suppose you see this line of code:

pd[i].n = n[i];

That doesn't really tell us anything. Imagine, however, if we changed it to this:

player_data[player_index].name = names[player_index];

Is it longer? Sure. But is it also significantly clearer? Absolutely! Even without seeing any of the surrounding code we already get a very good idea of what's happening here, much unlike with the original version.
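As a minimal sketch (the surrounding declarations are hypothetical, just for illustration), the full loop around that line might look something like this:

for (int player_index = 0; player_index < player_count; ++player_index)
{
    /* each player's name comes from the corresponding entry in the names array */
    player_data[player_index].name = names[player_index];
}

Even the loop header itself now tells you what is being iterated over, without having to look anywhere else in the code.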

Yet, try to convince the average experienced programmer, who is used to littering his code with short cryptic variable names, of this. You will invariably fail. In fact, for some reason the majority of computer programmers are strangely dogmatic about it. They are, in fact, so dogmatic about it that if you were to make this argument in an online programming forum or discussion board, you would likely be starting a full-on flamewar. They will treat you as if you were a stupid, arrogant person who has personally insulted them to the core. I'm not exaggerating.

The instinct to write short cryptic code, very much including the use of short cryptic variable names, sits very deep in the mind of the average programmer. It's a strange psychological phenomenon, really.

I have concocted a name for this kind of cryptic programming style: Brevity over clarity. 

Monday, December 8, 2025

I'm tired of "viral math problems" involving PEMDAS

In recent years (or perhaps the last decade or so) there has been a rather cyclic phenomenon of a "viral math problem" that somehow stumps people and reveals that they don't know how to calculate it. It seems that every few months the exact same problem (with just perhaps the numbers involved changed) makes the rounds. And it always makes me think: "Sigh, not this again. It's so tiresome."

And the "viral math problem" is a simple arithmetic expression which, however, has been made confusing and obfuscated not only by using unclear operator precedence but, moreover, by abusing the division symbol ÷ instead of using fractional notation. Pretty much invariably the "problem" involves having a division followed by a multiplication, which is what introduces the confusion. A typical version is something like:

12 ÷ 3(2+2) = ?

This "problem" is so tiresome because it deliberately uses the ÷ symbol to keep it all in one line instead of using the actual fractional notation (ie. a horizontal line with the numerator above it and the denominator below it) which would completely disambiguate the expression. And, of course, it deliberately has the division first and the multiplication after that, causing the confusion.

This is deliberately deceptive because, as mentioned, the normal fractional notation would completely disambiguate the expression: If the division of 12 by 3 is supposed to be calculated first and the result then multiplied by (2+2), then the fraction would have 12 at the top, 3 at the bottom, and the (2+2) would follow the fraction (ie. be at the same level as the horizontal line of the fraction).

If, however, the 12 is supposed to be divided by the result of 3(2+2) then that entire latter expression would be in the denominator, ie. below the fraction line.
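Written in LaTeX-style fraction notation, those two readings would be:

\[ \frac{12}{3}(2+2) \qquad \text{versus} \qquad \frac{12}{3(2+2)} \]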

That clearly and uniquely disambiguates the notation. Something that "12 ÷ 3(2+2)" quite deliberately does not.

Many people would think: "What's the problem? It's quite simple: Follow so-called PEMDAS, where multiplication and division have the same precedence, and operators of the same precedence are evaluated from left to right. In other words, divide 12 by 3 first, then multiply the result by (2+2)."

Except that it's not that simple. It so happens that "PEMDAS" does not really deal with the "implied multiplication", ie. the symbolless product notation, such as when you write "2x + 3y", which has two implied products.

The fact is that there is no universal consensus on whether the implied product should have a higher precedence than explicit multiplication and division. And the reason for this is that in normal mathematical notation the distinction is unnecessary because you don't get these ambiguous situations, and that's because the ÷ symbol is not usually used to denote division alongside implied multiplication.

In other words, there is no universal consensus on whether "1 ÷ 2x" should be interpreted as "(1÷2)x" or "1 ÷ (2x)". People have found published physics and math papers that actually use the latter interpretation, so it's not completely unheard of.

The main problem is that this is deliberately mixing two different notations: Usually the mathematical notation that uses implied multiplication does not use ÷ for division, instead using the fraction notation. And usually the notation that does use ÷ does not use implied multiplication. These are two distinct notations (although not really "standardized" per se, which only adds to the confusion.)

Thus, the only correct answer to "how much is 12 ÷ 3(2+2)?" is: "It depends on your operator precedence agreement when it comes to the ÷ symbol and the implied multiplication." In other words, "tell me the precedence rules you want to use, and then I'll tell you the answer, because it depends on that."

(And, as mentioned, "PEMDAS" is not a valid answer to the question because, ironically, that's ambiguous too. Unless you take it literally and consider ÷ and implied multiplication to be at the same precedence level, and thus to be evaluated from left to right. But you would still want to clarify that that's what's meant.)

Also somewhat ironically, even if instead of implied multiplication we used the explicit multiplication symbol from the same arithmetic notation the ÷ symbol comes from, in other words, if the expression were:

12 ÷ 3×(2+2)

that would still be ambiguous because even here there is no 100% consensus.

The entire problem is just disingenuous and specifically designed to confuse and trick people, which is why I really dislike it and am tired of it.

An honest version of the problem would use parentheses to disambiguate. In other words, either:

(12÷3)×(2+2)

or

12 ÷ (3×(2+2))
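To make the difference concrete, here's a small sketch (in C, purely for illustration) evaluating both honest versions:

#include <stdio.h>

int main(void)
{
    double left_to_right = (12.0 / 3.0) * (2 + 2);   /* = 16 */
    double implied_first = 12.0 / (3.0 * (2 + 2));   /* = 1 */
    printf("%g versus %g\n", left_to_right, implied_first);
    return 0;
}

Two perfectly defensible readings, two completely different answers, which is the whole "trick".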

Wednesday, November 12, 2025

Signs that a bodybuilder is not a "nattie" (ie. uses PEDs)

Most bodybuilders are very open about the fact that they use PEDs (ie. "performance enhancing drugs", like steroids, human growth hormone, and a multitude of others.)

However, there are likewise many who are not so open about it, particularly the ones who are social media "influencers" and, especially, those who are trying to sell you something. But even if they aren't trying to sell you products, many of them claim to be "natties" just for the clout and fame, just to get the admiration of people by showing what you can supposedly achieve if you work hard enough for it.

Sometimes it can actually be quite difficult to just outwardly see if someone is a "natural" or is taking PEDs. For those bodybuilders who are extreme mass monsters, with biceps larger than a normal person's head, it's quite clear, as nobody can naturally get muscles that large.

For less large bodybuilders, however, it can be harder to discern. However, there are some signs to look for.

Note that none of these are 100% certain proof, but they add to the evidence. The more of these that can be spotted, the less likely it is that the guy is a "natural", and the more likely it is that he is using PEDs.

1) The rather obvious one: Acne, particularly on the upper back, sometimes even the upper chest, shoulders and face. While some people naturally have acne in those areas even though they take nothing, it's a very telling sign.

2) Another quite obvious one: Extreme "mass monsters" (Schwarzenegger-sized and bigger) are almost certainly not natural. For someone to reach those sizes completely naturally it requires extraordinary genetics (essentially, the body itself naturally produces the steroids that one would usually take externally.) While not impossible, it's extremely unlikely.

Those are the most commonly known ones. However, there are also lesser known things to look for:

3) Neck and shoulders, especially the neck. Steroids have a particularly strong effect on those two (there are physiological and biological reasons for this), and they don't grow or thicken nearly as much without them. Most steroid users will have really thick necks, even unnaturally so. Shoulders are also notoriously difficult to grow naturally. People not taking steroids will usually have more normal necks and shoulders. (However, this doesn't rule out other PEDs.)

4) Unnaturally thick veins. One side-effect of PEDs is that they enlarge veins, particularly those close to the skin. If you see a ripped bodybuilder with really thick veins visible, it's an almost certain sign of PEDs. Normally people don't have veins that thick. They tend to be quite narrow.

5) Being very muscular and very ripped. Being "ripped" is that look where your body fat is extremely low, so everything under the skin, particularly the muscles, can be seen in great detail. Bodybuilders do this to really show off muscle definition, particularly prior to competitions. But the thing is: It's extremely hard to gain, and particularly to maintain, that much muscle naturally while having such a low body fat percentage. The only way to reach a body fat percentage that low is to be in a calorie deficit, which inevitably eats away at the muscles no matter how much you train, unless you are on PEDs. This is a particularly clear sign if the guy is always extremely ripped, not just temporarily for a competition.

6) Gynecomastia: In other words, growth and "drooping" of the nipple area, which is usually a direct consequence of steroids. If the nipples are bulging or are pointing almost directly down, it's an almost certain sign.

7) Distended belly, ie. "palumboism": If the bodybuilder is "ripped" and has a noticeably large belly, it's an almost certain sign of using human growth hormone and, probably, other substances. Extremely low fat percentage (which is what makes you look "ripped") and a large belly don't usually occur together naturally. Natties who are ripped will almost always have a very prominent hourglass shape with an extremely lean, even inwards belly area (because of the lack of fat), and will be able to produce a huge "vacuum" in that area.

Natties can also have big bellies, but in that case they are pretty much never "ripped". They will look like they have a high fat percentage everywhere, because they are exactly that, ie. fat. Abdominal muscles will not be visible, and the belly is usually just a big round ball. Muscles elsewhere will also not be clearly delineated and will just be big bulges covered in fat tissue. (That being said, this look is in no way a guarantee of a nattie. Many fat bodybuilders use PEDs.)

Thursday, October 16, 2025

Is the Equation Group a "white hat", "black hat" or "grey hat" team?

The "Equation Group" is an unofficial name (invented by Kaspersky Lab) for one of the most notable and advanced team of hackers and software (and possibly hardware) developers in the world. While never officially recognized (for rather obvious reasons), there's very strong and credible evidence that the members of this team are employees of (or at a minimum closely working for) the NSA, most likely as part of their Tailored Access Operations department (which actually is officially recognized.)

From what has been discovered of the work of this group, and also inferred from the Snowden leaks, it's extremely likely that the main purpose of this hacker group is to research and discover zero-day exploits in operating systems and all kinds of other software and hardware, and to develop programs and tools that use those exploits to hack into computers (and who knows what other tasks they perform using that ability to hack into the computers of foreign governments and other organizations and people.)

As Kaspersky Lab and other researchers have found, these are not just some script kiddies doing this for fun and fame. Code that has been attributed to them tends to be extremely advanced, uses very sophisticated techniques, and often contains zero-day exploits most likely found by the team itself. From all that's known about them and their code, they are highly skilled and advanced hackers and software developers.

It is known that the NSA stockpiles these "zero-day exploits" that this team (and probably others) find, for their own uses, rather than disclose them to the software and hardware companies (such as Microsoft.)

There have been known cases of such zero-day exploits having been kept secret by the NSA for many years before they were found independently and patched, or discovered via one of their malware having been examined (most famously Stuxnet). Or, at least in a few cases, by having been themselves hacked!

Indeed, one would think that given the top-level competency, skill and professionalism of this team and the NSA in general, they would have some of the highest digital security in the world, making them pretty much impervious to being hacked themselves. Yet, that has turned out several times to not be the case.

Quite famously Edward Snowden leaked a ton of top-secret NSA documents to the public. Curiously, Snowden was not some kind of NSA employee with a very high security clearance who had been working for the agency for decades when he decided to go rogue. No, he was just an external contractor who had been working in that capacity for quite a short amount of time, and with no other affiliation with the NSA. Essentially, he was just an outsider, not a governmental worker, who had been given temporary access as an external contractor for some minor work. Yet, he had full access to top secret documents of the NSA that he could freely copy for himself without any restrictions, and leak to the public.

That was because, at least back then in 2013, the security and safety measures at the NSA were astonishingly lax and poor. Even many private companies, even in 2013, had significantly stricter and stronger security measures than the NSA did. Indeed, Snowden had access to all those top-secret documents just because the sysadmins in charge of the NSA's computers were lazy and granted everybody access to everything out of convenience. As incredible as it might sound, even if you were just a recently-hired external contractor doing a minor job for the NSA, you were granted access to almost everything pretty much without limits. And that's exactly how one of those temporary external contractors, Edward Snowden, got hold of those documents. It is exactly as incredible and crazy as it sounds.

Whether the NSA started implementing better security measures after the Snowden leaks is unknown, but apparently even if they did, it wasn't enough, because in 2016 another hacker group, who call themselves The Shadow Brokers, was able to hack the NSA's computers and steal much of the exploit software developed by the Equation Group. The Equation Group might consist of some of the top hackers and developers in the world, but apparently even they were not immune to being hacked themselves. Or, at a minimum, the servers where their software was stored were not (which might actually not be their fault, depending on who within the NSA was tasked with developing and maintaining those servers. If it was the same admins that allowed Snowden to just access and copy the top-secret documents, who knows.)

Perhaps the most famous exploit software that they stole and leaked was one codenamed EternalBlue, which was an implementation of a zero-day exploit of Windows that allowed running code on a Windows computer remotely (by exploiting a bug that existed at the time in Windows' implementation of the SMB protocol.) It became famous because that code was used to create the infamous WannaCry ransomware, and later the (perhaps somewhat less famous) NotPetya, which caused even more damage.

There's evidence that the NSA had sat on (and probably used) the EternalBlue exploit for at least five years before it was stolen and leaked, which is what allowed Microsoft to become aware of the bug and patch it. If it hadn't been stolen, it would probably have gone unpatched for several more years.

Unsurprisingly, Microsoft issued severe criticism of this "stockpiling of zero-day exploits" by the NSA, as it keeps regular citizens vulnerable to exploits that have been found but are deliberately left undisclosed. The amount of damage caused by the several malware strains that used EternalBlue is estimated to be at least 1 billion dollars.

Anyway, given all of this, an interesting question arises: Can the Equation Group be classified as "white hat", "black hat" or "grey hat" hackers?

The term "white hat hacker" is used to describe a hacker who tries to find hacks, exploits and vulnerabilities in software and hardware with the full intent of disclosing them to the manufacturers as quickly as possible, and with zero intent of abusing those exploits himself. Usually he will inform the manufacturers well in advance before disclosing the vulnerability to the public, to give the manufacturers time to patch their systems. These hackers try to always remain within legal limits. Many "white hat" hackers are actually outright employed by companies to find vulnerabilities in their own systems, and thus are doing it with full permission (and even paid for it.)

The term "black hat hacker" is, rather obviously, used to describe the opposite: In other words, a hacker who tries to find these vulnerabilities in order to either exploit them himself, or to sell them in the hacker black market to others (useful zero-day exploits, especially those that allow full access to any computer system, are incredibly valuable in the black market, and could fetch a price of tens of thousands of dollars, or even more.)

The term "grey hat hacker" is a bit fuzzier, and the definition depends a bit on who you ask. One common definition is a hacker who has no intent to abuse the exploits he finds (nor sell them to anybody), but has no qualms about breaking the law in order to find them (for example illegally breaking into the computer system of a company, or even and individual person, in order to gain access to more information that could help find even more vulnerabilities.) Some "grey hat hackers" might have primarily good intentions and think of breaking the law (eg. by illegally breaking into computers) as justified for the greater good (ie. discovering and disclosing vulnerabilities.) Other such hackers might just do it for the thrill, even if they don't have any intention of actually abusing the vulnerabilities they find any further (other than eg. rummaging around in the servers of a company) but with no intent to disclose those vulnerabilities either. Maybe they do a bit like the NSA does, ie. "stockpiling" knowledge and vulnerabilities that they might discover.

"Grey hat" might also be used to describe a hacker who illegally exploits computer systems in order to achieve something that's deemed a good thing, even if doing so is illegal. For example, not to disclose the exploits itself, but to disclose some incriminating information about the company or person, such as evidence of a crime they have committed. A bit like modern-day "Robin Hoods" who go against the law in order to fight evil.

So, in light of all of this, is the Equation Group a "white hat", "black hat" or "grey hat" hacker team? I think arguments could be made for all three:

1) They are "white hat" hackers because what they are doing is not illegal, and they are doing it on behest of the government for national security, to combat foreign threats. By the very fact that it's not illegal, it's not against the law, they are "white hats". It's in essence no different from a company hiring a hacker to find vulnerabilities in their systems. Not disclosing the vulnerabilities is in essence no different from for example not disclosing the locations and activities of spies performing activities in foreign countries, which is acceptable for national security reasons.

2) They are "black hat" hackers because they illegally exploit systems and not disclosing vulnerabilities is unethical, irresponsible and puts people in danger (which might even be considered criminal negligence.) Their research of vulnerabilities is not done to help people, but quite explicitly to exploit those vulnerabilities. Just because they might not be prosecuted by the government doesn't mean they aren't actually breaking the law, it's merely the government looking the other way and excusing it as being "for national security" (just like their spies murdering enemies in foreign countries.) Being endorsed by the government doesn't make them any less of "black hat" hackers, it simply makes them "government black hat" hackers.

3) They are "grey hat" hackers because even if what they are doing might be technically illegal, or at least ethically questionable, they are doing it for a good goal: That of protecting their country from foreign threats. There is no evidence that they are using these exploits for abuse their own citizens and compatriots. They are using these exploits to protect their compatriots. Even if the government sometimes might use these hacks to abuse their own citizens, that's most likely not the fault of the hackers themselves who discovered these vulnerabilities. It's very possible that they don't even know what their software is being used for in great detail. They may well have good intentions behind their work, ie. help protect their own country. 

Monday, September 22, 2025

How the Microsoft Kinect should have been marketed

Microsoft spent, as far as I know, over a hundred million dollars on merely advertising the Kinect when they were developing it, and succeeded in creating a massive amount of hype around it. It was being pushed and showcased constantly at E3 and other events, celebrities endorsed it, big famous game development companies and publishers endorsed it, and such a huge hype formed around it that its launch became an outright massive party event at many stores in America, with huge countdown clocks (very reminiscent of New Year countdown clocks) and massive crowds cheering and celebrating outside.


Contrary to what many people might think, the Kinect (both versions) actually sold surprisingly many units: Microsoft reports having sold about 30 million units of the original Kinect (for the Xbox 360) alone. This is more than some famous consoles sold during their entire lifetimes (such as the original Xbox, and the Nintendo GameCube.) In fact, at the time of its launch it broke the Guinness World Record for the fastest-selling consumer electronics device (by selling 8 million units in its first 60 days.)

(This doesn't mean that Microsoft made a huge profit from it, though. It is my understanding that they were selling the device at a loss, as is so common with these things, hoping that they would recoup the losses through game sales.)

However, regardless of the enormous sales numbers, the device turned out to be quite a flop in the end.

Its main problem was that it greatly overpromised but underdelivered. It promised to completely revolutionize gaming and to be this absolutely marvelous form of control, with eventually a massive library of games that would be extremely immersive and fun to play.

Yet, the device was significantly less accurate than shown and promised in the pre-release demos (which promised, for example, that it could track the movement of individual fingers, which it absolutely could not), it was annoying to use, it only made most controls clunkier and needlessly difficult, and the vast majority of games were significantly simpler and more boring than normal games.

While about a hundred Kinect-only games were ever published, both game developers and users quickly lost interest. It just didn't live up to any of the hype and promises, and it was a slog to use and to develop for. People quickly moved back to the regular controller (even though, according to the Kinect marketing, that controller was a "barrier" that limits how you can play games. Oh, the irony.)

Microsoft tried really, really hard to keep the device alive by bundling an updated, better version of it with the Xbox One. It didn't help. An abysmal 21 games were ever developed for it, and that's it. While Microsoft tried to keep it alive for a few years, they finally relented and first made it optional and then dropped support completely. It was declared a complete failure.

One of the major problems (besides the technical limitations such as low tracking resolution) that I see in the marketing is that Microsoft approached it completely wrong: They based their marketing 100% on the idea of the Kinect replacing the regular game controller. In other words, Kinect games would be controlled with the Kinect and nothing else, that's it. Microsoft not only envisioned the Kinect to be better than the regular controller (an idea that turned out to be massively naive), but also that it would be the exclusive control system for its games.

Indeed, from what I have seen 100% of their marketing was based on that complete replacement notion. As it turns out, that didn't pan out at all.

I believe that Microsoft should have approached the Kinect marketing and development in a different way: Rather than being a complete replacement for the normal controller, it could have worked to enhance it. In other words, while there could still have been games that used the Kinect only (such as dancing games), the main use case would have been to use the Kinect in addition to, and at the same time as, the normal controller. In other words, the Kinect would have added head and hand gestures to the repertoire of controls.

The most obvious such enhancement, something that would truly have added to the gaming experience, would have been head tracking: For example, in a flight simulator you control the plane with the controller as normal, but you can look around by turning your head. This kind of head tracking system is available for the PC. The Kinect could have brought it to the console.

Imagine such a demo in one of those E3 conferences or wherever: The person on stage is controlling an airplane or spaceship while seated using a controller, and then turns his head a bit to different sides to have the in-game view likewise turn (just like head-trackers work on PC).

Also hand gestures could be added to the repertoire of controls. For example the player could reach with his hand and turn a lever, or press some buttons, while still controlling the vehicle with the controller. Imagine that kind of demo: The demonstrator is driving a car with the controller, looking around by turning his head, and then he moves his right hand to the side, makes a motion forward, and in the game the gear lever shifts forward to select the next gear.

Now that would have been immersive. I would imagine that the spectators would have been excited about the prospect.
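As a purely hypothetical sketch of the idea (none of these names correspond to any real Kinect or Xbox API), the control scheme would simply layer the tracker data on top of the normal controller input:

typedef struct { float yaw, pitch; } HeadPose;

/* The controller drives the vehicle and the base camera as usual; the
   head-tracking data only adds an offset to where the player is looking. */
void update_camera(float controller_yaw, float controller_pitch,
                   HeadPose head, float *camera_yaw, float *camera_pitch)
{
    *camera_yaw   = controller_yaw   + head.yaw;
    *camera_pitch = controller_pitch + head.pitch;
}

If the tracking data is noisy or missing, the offset is simply zero and the game plays exactly as it would with the controller alone, which is precisely why this kind of additive design would have been less risky than replacing the controller outright.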

(I'm not saying that the Kinect would have become a success if it had been developed and marketed like this, but I think that it would have at least had better chances.) 

Sunday, September 21, 2025

Microsoft's White Elephant: The Kinect


Those who never owned an Xbox 360, or those who did but were never really interested in, nor followed, all the hype that Microsoft created around the Kinect, might find it a bit surprising, given how little impact the Kinect had on video gaming, but this device was absolutely massively advertised and pushed by Microsoft back in the day, with borderline outrageous promises and hype. And we are talking about massive promotional campaigns.

The original slogan for the Kinect was "You Are The Controller". The initial narrative, prior to the Kinect's launch (and a bit after that), was that the traditional controller was a "barrier", a very limited form of control that severely limited possibilities. According to the marketing campaigns, "Kinect will change living room entertainment forever".

Microsoft's promotional demonstrations at E3 2009, and several subsequent ones, promised absolutely incredible real-time interactivity. (The fact that the actual shipped Kinect turned out to be enormously less accurate and powerful than advertised leads me to believe that those E3 demonstrations were fully scripted, running pre-recorded animations, rather than being real-time live captures of the movements of the performer on stage.)

Among the things that were promised (with live demonstrations, allegedly in real-time, although as said, I have my doubts) were:

- Accurate full-motion capture of the entire body, with the in-game character following the position and movement of every limb and head very accurately. In one demo this allowed full control of a character wielding a lightsaber, to fight against hordes of enemies, with accurate movements and all kinds of maneuvers (such as force pushes, etc.)

- Moreover, the detection would be so accurate as to allow very precise maneuvering, allowing very small, precise and subtle movements, such as hand gestures, to accurately control something. This included things like opening and closing one's hand, or even moving individual fingers, and manipulating in-game objects with great precision (to even the millimeter range).

- Using the traditional controller would become essentially obsolete, as everything would be usable with the Kinect alone, using gestures and voice. In fact, it was promised that many things would actually become easier with the Kinect than with the controller, especially thanks to the smart voice recognition system. (For example, not only could you make the Xbox 360 play music by saying "xbox, play some music", but you could moreover specify a particular song, an artist, or a music genre, for instance, and the system would quickly find songs matching the specified parameters.)

- Video chat with remote players would be possible, easy, and practical. (In fact, the Kinect could even follow the user's position so as to keep him or her always centered on the view.)

- The Kinect would have full facial and shape recognition, distinguishing between different users, being able to track the position of each user, and even being able to scan objects of a certain shape, such as a skateboard or a piece of paper, in real time. In one demo, for instance, a player draws a picture on a paper, shows it to the Kinect and "hands it over" to the in-game character, and this character reaches out and grabs the paper, which now has the same picture in-game (which the Kinect, at least allegedly, scanned in real-time from the paper using its camera.) The Kinect is able to see that the paper is coming closer, and thus the game character can react to it in real-time, reaching out his hand and "grabbing" the paper.

Microsoft got some really big name people to promote the Kinect at some of their E3 presentations, such as Steven Spielberg himself. Several big-name game companies also announced full Kinect support in many of their future games and game franchises, promising significant improvements in gameplay and immersion.

The launch of the first Kinect was made into quite a massive spectacle in itself, with tons of money poured into it. Microsoft really, really pushed the Kinect as a complete revolution in video gaming. A completely new form of control, of playing games, that would make the old systems obsolete, antiquated and limited. (Does this sound familiar? *ahem* VR *ahem*)

Of course, reality turned out to be quite a letdown, and the massive hype campaign completely out of proportion. The camera image resolution, as well as the framerate, of the final retail version of the Kinect was but a fraction of what was promised (something that Microsoft directly admitted a bit prior to launch, citing cost and technical problems on both the Kinect side and the Xbox 360 side), affecting most of the promised features. Motion detection was much poorer than promised, facial recognition was almost non-existent and extremely flawed, and the promised ability to scan objects (such as pictures drawn on paper) was likewise pretty much non-existent. Accurately scanning the entire body of a user and replicating it on screen was likewise unrealistic.

I do not know if the Kinect would have worked as promised if it had had the technical specifications originally planned for it (both in terms of camera resolution and capture framerate), but at the nerfed specs it finally shipped with, the system was almost unusable. Rather than replacing the regular controller, and being at least as fluid as, if not more fluid than it, it was a nightmare to use. Just navigating the home screen, or the main menu of any game, using gestures was often a pain. Very inaccurate and inconvenient. Most games were unable to accurately detect anything but the broadest of gestures (even though the E3 demos had promised the Kinect to be able to detect even minor gestures, such as opening and closing one's hand, or even the position of individual fingers), and this made even the simple act of navigating a menu very inaccurate and inconvenient. (In fact, many games opted to skip even trying to detect hand gestures, and implemented the simpler method of just broadly detecting where the user's hand is, and if the user keeps their hand on top of a button for long enough, the game would then activate that button. Needless to say, this isn't the most convenient, efficient or accurate way of navigating a menu.)

Needless to say, this was quite a big disappointment, both for users and for game developers. Neither of which got the wondrous new form of control that was promised.

Even so, Microsoft still tried to push the Kinect as the next big thing, and induced many game companies to make games for it. Some developers did indeed make Kinect games, even Kinect-only games, especially during its first few years. However, regardless of how much Microsoft pushed the platform, the total number of Kinect games is quite low. Wikipedia lists the Xbox 360 as having (at least) 1233 games in total (although the real number is probably a bit higher, as Wikipedia doesn't necessarily list the most obscure games ever released for the console), and of those, only 108 are Kinect-only (with an additional 49 games having optional Kinect support).

108 games is not exactly an abysmally low number, but it's still pretty low, considering the success of the Xbox 360 console itself and how much Microsoft promoted the system. (Also consider that a good portion of those Kinect-only games are dancing games, which isn't exactly a very popular genre.)

One would think that after the disappointment that the Kinect was, as it delivered on almost none of its promises, and neither users nor game developers were exactly thrilled about it, Microsoft would have, after a couple of years, just abandoned it and let it die a natural death. But no. For some reason Microsoft remained obsessed with the Kinect for many years to come. So much so that when they designed their next-generation console, the Xbox One (which launched almost exactly 3 years after the original Kinect), they made a new "improved" version of the Kinect for it. They wanted to push it so hard into the market that they actually made it a mandatory peripheral for the Xbox One. Not only would every single console come with the new Kinect bundled with it, but moreover the console wouldn't be usable at all without the Kinect! The Kinect was a mandatory peripheral just to use the console. No Kinect, and the console would refuse to even work!

Due to the massive backlash caused by this announcement, Microsoft reversed that decision just prior to launch, and allowed the console to be used without the Kinect. However, the Kinect would still be bundled with every Xbox One. You couldn't buy one without the other. (It wasn't until almost a year later that Microsoft finally started selling Xbox Ones without the Kinect, at about $100 cheaper thanks to that.)

I understand what Microsoft was trying to do: The problem with the Xbox 360 Kinect was that only a fraction of users had it, and thus it wasn't very enticing for game developers to make games for it. However, now that every single user of the Xbox One had a Kinect for sure, that would certainly give incentives to game developers to support it. (After all, that's one of the core ideas of game consoles: Every console owner has the exact same hardware, and this makes the life of game developers much easier. If every console owner has a Kinect, there shouldn't be any problem in adding Kinect support to a game.)

It didn't help. Users still weren't interested in the Kinect, and in fact, the Kinect making the system about $100 more expensive hurt sales of the system quite badly. Perhaps in a vacuum it would have been ok, but the Xbox One had one ginormous adversary at the exact same time: The PS4. Which was selling like hotcakes, while the Xbox One, with its $100 higher price tag, was suffering.

When Microsoft finally started selling the console without the Kinect, its sales figures started improving significantly. (They never reached those of the PS4, but were still significantly better, making the console actually viable.)

Three years after the launch of the Xbox One, Microsoft finally accepted the reality that the Kinect was a completely dead piece of hardware that nobody was interested in. The users weren't interested in it, and game developers weren't interested in it. (There's perhaps no better indication of this than the fact that even though the Xbox One had been on the market for four years, only an abysmal 21 Kinect games existed for it.)

A nail in the coffin of the poor device was when Microsoft released the Xbox One S, a streamlined and slightly more efficient version, which had no Kinect port at all. (A Kinect can still be connected to it, but it requires a separate USB adapter. The Kinect itself isn't a USB device, instead using its own custom port.)

And, of course, the absolutely final nail in the coffin is the fact that the new Xbox One X has no Kinect support at all. Microsoft has finally effectively declared the system dead for good.

Microsoft really pushed the Kinect to be the next big thing, and probably spent countless millions of dollars on its development and marketing, well beyond what was reasonable. They should have accepted it as a failure within its first couple of years, and not even tried to drag it into the Xbox One. The Kinect, however, became a kind of "self-imposed" White Elephant for Microsoft. (In common terminology a "white elephant" is an overly costly possession that cannot be disposed of. In this case, Microsoft imposed this status onto themselves, for at least six years, rather than just getting rid of it.)

Thursday, September 18, 2025

The strangest urban legend: Soda can tab collecting

Throughout the entire history of humanity there have always been so-called "urban legends". Modern times are no different; on the contrary, urban legends have only become more easily widespread thanks to the proliferation of newspapers, magazines, radio and TV, and quite obviously with the internet it has skyrocketed: Where centuries ago it could take decades for an urban legend to spread to any significant extent, and decades ago it could take years, in the era of the internet it can take mere days.

Some urban legends are completely harmless and innocuous, as they just make people believe a silly thing that nevertheless doesn't affect their lives or how they behave. Then there are the more harmful urban legends that actually do affect how people behave and what they do, sometimes even in dangerous or detrimental ways. There are also urban legends that have spawned entire cults and conspiracy theories.

Then there are the stranger urban legends that have had a huge amount of influence among the people who believe them (and who usually refuse to believe otherwise, no matter how much counter-evidence is presented to them.)

One of the strangest ones is the urban legend that collecting soda can tabs and sending them somewhere will help fund wheelchairs or medical procedures for elderly people. Not the cans themselves, but just the tabs.

This urban legend has existed for many decades, starting at least in the 1980's, probably even the 1970's. It might not be as widespread anymore today, but it was still going strong in the 1990's.

Incredibly, a very strange "quasi-cult" formed around this urban legend: Not only would people who believed it spend copious amounts of time detaching and collecting these soda can tabs (again, just the tabs, not the cans themselves) and sending them to somewhere where they collect them, but entire voluntary transport supply chains formed in many countries.

Indeed, there were people who volunteered their time to collect these thousands and thousands of aluminum tabs from some town, and transport them somewhere else, to someone else who got supplied by several such people, and who would then themselves further transport them to the next person up the chain. Entire supply networks formed, especially in the 80's and 90's, in many countries, all on a voluntary basis.

Many investigative journalists have tried to find out where exactly these soda can tabs end up, and it's always the same story: It's always a dead end. They interview these people and ask who they are transporting the tabs to, and it's always someone else in the supply chain. The reporters follow this trail, often consisting of a dozen different people, a dozen different "levels" in the network, and the result is always the same: Eventually they hit a dead end, where they just can't find the next person or entity in the supply chain. And, of course, none of the people they interview know where the tabs are ultimately going; it's always just the next person in the chain. They don't know, nor really care, where that person then transports them to.

There doesn't appear to be any kind of conspiracy or secret organization behind it. It appears that this kind of extremely large and complex endless supply chain, which always just ends in some kind of dead end where nobody knows who the next link is, has somehow arisen spontaneously, as a kind of emergent behavior: Volunteers have simply shown up to become part of the huge supply network, each bringing the tabs to someone else, until the next person in the chain just can't be found and nobody really knows anything about it.

Any authority or other such person who knows about this strange supply chain always says what should be rather obvious: There is no organization or entity that collects these soda can tabs and funds wheelchairs or anything else. Such a thing doesn't exist. Nobody has ever gotten any wheelchair or medical procedure thanks to these tabs. There is no record anywhere of such an entity, or of where the tabs ultimately end up. For all anyone knows, they end up buried in some warehouse somewhere ("waiting" to be transported somewhere), or they end up in a landfill, or something.

Or perhaps someone along the line just sells them to a recycling station for a few dollars and keeps that money for himself (which, in fact, is the most likely end point of the entire chain.) Indeed, many suspect that the last person in the chain, the one who allegedly "gives them to someone else further up the chain" but doesn't really know who, is likely to just bring them to a recycling center and pocket the money. Obviously those people are very hush-hush about it. Or it may be that unknown, anonymous next person in the chain who does so.

It's also pointed out that, rather obviously, the soda can tabs, no matter how many tons of them are collected, are not even nearly valuable enough to fund any wheelchairs or expensive medical procedures. Ironically, the soda cans themselves, from which these tabs are detached, would be more valuable, but even those could perhaps barely fund one wheelchair, if enough metric tons of them were collected. But, as mentioned, there is no organization that does this. Nobody has ever gotten any wheelchair or anything else thanks to these soda tabs.

Yet, the people who keep (or at least kept in the 90's and early 2000's) detaching and transporting these soda can tabs are True Believers. No amount of evidence will convince them otherwise. In fact, when they are presented the evidence that what they are doing is useless and nobody is getting any wheelchairs, they just say that "you don't know what you are talking about, so you should shut up."

This even though they literally have no idea where the tabs are going, what this supposed organization or entity is, or who has ever got a single wheelchair or medical procedure thanks to it. They just firmly believe they exist, even though they don't know who or where.

It's an extremely strange "quasi-cult". 

Wednesday, September 10, 2025

Why ChatGPT can be so addictive

If a mere 10 years ago you had told me, or pretty much anybody who's even slightly knowledgeable about technology and computers, that in less than 10 years we would get AIs that can have perfect conversations in perfect fluent English (and even multiple other languages), understanding everything you say and ask and responding accordingly, in the exact same way as any human would, or even better; that they could write perfect essays on any topic whatsoever if you wanted, no matter if it's a highly technical scientific topic, or psychology, or pop culture, or anything; and that they could even compose very good and reasonable poetry, song lyrics, short stories, and other similar creative output, I would have laughed at you, and most other tech-savvy people would have laughed as well.

Who would have guessed a mere 10 years ago that almost perfectly fluent conversational AI, writing perfect fluent English on any topic you could imagine, both in terms of grammar and the substance of the content, was less than a decade away?

But here we are, with exactly that.

And the thing is, these AIs, such as ChatGPT, can be incredibly addictive. But why?

The addictiveness does not come from the AI being helpful, eg. when asking it for some information or how to do something, or where to find something. That just makes it a glorified googling tool, telling you much faster and much more easily the answer to the thing you are looking for.

That's very useful, but it's not what makes it addictive. The addictiveness comes from just having casual conversations with the AI, rather than it having an actual goal or purpose. But again, why? 

There are several reasons:

1) You can have a completely fluent casual conversation with the AI like you could with any intelligent conversational friend. Whatever you want to talk about, the AI can respond to it intelligently and on topic.

2) The responses are usually very interesting. Sometimes it will tell you things you already knew, but probably in more detail than a normal person would. Oftentimes it may tell you things you didn't know, and which might be interesting tidbits of information.

3) The AI is like a really smart person who knows about every possible topic and is always happy to talk about it. Be it astrophysics, or mathematics, or computer science, or programming, or physics, or psychology, or sociology, or pop culture, or history, or a classic movie or song, or the culinary arts, or even just some random topic about some random subject, the AI pretty much always knows about the topic and can give you answers and information about it. There's no topic to which it will just answer "I don't really know much about that." (There are some inappropriate topics that AIs have been explicitly programmed to refuse to answer, or the running system might even outright stop them from answering, giving you an error message, but that's understandable.)

4) Moreover, it's able to adjust its level of conversation to your own knowledge and capabilities. It won't start throwing advanced mathematical formulas at you if your question is at the level of casual conversation, but will show you advanced technical stuff if you ask it to and show that you understand the topic.

5) And here's where it starts getting so addictive: The AI is always available, is always happy to talk with you, never gets tired, is never "not in the mood". It doesn't need to sleep, it doesn't need to take breaks, it doesn't get tired (physically or mentally), it never gets the feeling that it doesn't want to talk at that moment. It never tells you "not now."

6) Likewise, it never gets exasperated, it never gets tired of you asking stuff, and it never considers your questions stupid. It doesn't matter if your questions and commentary are at the PhD level or at the kindergarten level, it's always happy to have a conversation and will always do so in a positive tone, without getting exasperated, and without using a patronizing tone.

7) And on a similar vein, the AI never gets offended, never feels insulted, never has its feelings hurt, no matter what you say, and never gets tired of speaking to you. You could continue on the same topic, repeating similar questions again and again, and it will never get tired of answering, always politely. You can use rude and derogatory language, and even insults, and its feelings will not get hurt. It will not start resenting you, nor start considering you an unlikeable person to avoid. You could even have an argument with it, where you stubbornly refuse to accept what it's saying, again and again, for hours on end, and the AI is literally incapable of becoming frustrated, tired or offended by your responses. It will never respond in kind, and will always retain a polite tone and will always be happy to keep answering your questions and objections, no matter what.

8) Most AIs, such as ChatGPT, are also programmed to be a bit of an agreeable "yes man": If you express a personal opinion on a subjective topic, it will tend to support your opinion and tell you what its strengths are. It will very rarely outright start arguing with you and tell you that you are wrong in your opinion (unless it's something that's clearly wrong, eg. very clearly and blatantly unscientific. But even in those cases it might show some understanding of where you are coming from, if you express your opinion reasonably enough.)

9) But, of course, if you explicitly ask it to present both sides of a position, it will write a mini-essay on that, giving supporting arguments for each.

But it's precisely that aspect of it being a tireless and agreeable "person", always willing and knowledgeable enough to have a conversation about pretty much any topic, that can make it so addictive. Friends are not always available, friends don't necessarily know every topic, friends are not always agreeable, friends can get offended or tired, but ChatGPT is incapable of any of that. It's always there, ready to have a conversation.

That can be incredibly addictive. 

Sunday, August 31, 2025

HDR is a pain in the ass

Almost thirty years ago the PNG image format was developed as a significantly better and more modern alternative for lossless image storage than any of the existing formats (particularly GIF). It did many things right, and it combined into one format all the various image features that almost none of the other existing formats supported simultaneously (eg. GIF only supports 256 colors, which is horrendous, and only single-color transparency. TIFF and several other formats support almost all image format features but have extremely poor compression. And so on.)

However, there was one thing that the PNG format tried to do "right" but which ended up causing a ton of problems and became a huge pain in the ass for years and years to come, particularly when support for the format became widespread, including in web browsers. And that feature was support for a gamma correction setting.

Without going into the details of what gamma correction is (as this can be easily found online), it's an absolute swamp of complications, and with the standardization and widespread support for PNG it became a nightmare for at least a decade.

In the vast, vast majority of cases, particularly when using image files in web pages, people just want unmodified pixels: Black is black, white is white, 50% gray is 50% gray, and everything in between. Period. If a pixel has RGB values (10, 100, 200), then they want those values to be used as-is (eg. in a web page), not modified in any way. In particular, if you specify eg. a background or text color of RGB (10, 100, 200) in your web page, you definitely want that same value to visually match exactly when used in an image.

When PNG became widely supported and popular in web pages, its gamma specification caused a lot of problems. That's because when a gamma value is specified in the file, a conforming viewer (such as a web browser) will change the pixel values accordingly, thus making them look different. And the problem is that not only do different systems use different gamma values (most famously, Windows and macOS used, and maybe still use, different values), but support for gamma correction also varied among browsers, some of them supporting it and others not.
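
To illustrate the general idea (this is just a rough sketch in Python, not the exact math the PNG specification mandates, and the exponent 0.88 is an arbitrary assumed value): a viewer that applies some gamma exponent to each color channel will display different values than the ones stored in the file, which is exactly why an image's colors could stop matching the colors specified elsewhere in the page.

    # Rough illustration only; the exponent 0.88 is an assumed example value,
    # not anything taken from the PNG spec.
    def apply_gamma(rgb, exponent):
        return tuple(round(255 * (channel / 255) ** exponent) for channel in rgb)

    stored = (10, 100, 200)
    print(apply_gamma(stored, 1.0))    # (10, 100, 200): shown as-is, what web authors expect
    print(apply_gamma(stored, 0.88))   # visibly lighter values, no longer matching the CSS color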

"What's the problem? PNG supports a 'no gamma' setting. Just leave the gamma setting out. Problem solved." Except that the PNG standard, at least back then, specifically said that if gamma wasn't specified in the file, for the viewing software to assume a particular gamma (I think it was 2.2). This, too, caused a lot of problems because some web browser were standard-conformant in this regard, while others didn't apply any default gamma value at all. This meant that even if you left the gamma setting out of the PNG file, the image would still look different in different browsers.

This was a nightmare because many web pages assumed that images would be shown unmodified and thus colors in the image would match those used elsewhere in the page.

I think that in later decades the situation has stabilized somewhat, but it can still raise its ugly head.

I feel that HDR currently is similar to the gamma issue in PNG files in that it, too, causes a lot of problems and is a real pain in the ass to deal with.

If you buy a "4k HDR" BluRay, most (if not all) of them assume that you have a HDR-capable BluRay player and television display. In other words, the BluRay will only contain an HDR version of the video. Most of them will not have a regular non-HDR version.

What happens if your TV does not support HDR (or the support is extremely poor), and your BluRay player does not support "un-HDR'ing" the video content? What happens is that the video will be much darker, with very wrong color tones, and will look completely wrong.

This is the exact situation, at least currently, with the PlayStation 5: It can act as a BluRay player, and supports 4k HDR BluRay discs, but (at least as of writing this) it does not have any support for converting HDR video material to non-HDR video (eg. by clamping the ultra-bright pixels to the maximum non-HDR brightness) and will just send the HDR material to the TV as-is. If your TV does not support HDR (or it has been turned off because it makes the picture look like ass), the video will look horrendous, with completely wrong color tones, and much darker than it should be.
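
Just to make concrete what I mean by "un-HDR'ing" by clamping, here's a deliberately naive toy sketch in Python (this is not what any real BluRay player or TV actually does, and the 100-nit SDR peak is an assumed reference value):

    SDR_PEAK_NITS = 100.0   # assumed SDR reference white level

    def naive_hdr_to_sdr(luminance_nits):
        # Clamp anything brighter than the SDR peak, then normalize to a 0..1 SDR signal.
        clamped = min(luminance_nits, SDR_PEAK_NITS)
        return clamped / SDR_PEAK_NITS

    print(naive_hdr_to_sdr(50))      # a mid-bright pixel -> 0.5
    print(naive_hdr_to_sdr(1000))    # an HDR highlight -> clipped to 1.0

Real tone mapping is of course far more involved than this; the point is just to show what "clamping to the maximum non-HDR brightness" would mean in practice.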

(It's a complete pain in the ass. If I want to watch such a BluRay disc, I need to go and turn HDR support on, after which the video will look acceptable, even though my TV has very poor-quality HDR support. But at least the color tones will be ok. Afterwards I need to go back and turn HDR off to make games look ok once again. This is such a nuisance that I have stopped buying 4k HDR BluRays completely, after the first three.)

More recently, I got another instance of the pain-in-the-ass that's HDR, and it happened with the recently released Nintendo Switch 2.

Said console supports HDR. As it turns out, if your TV supports HDR, the console will turn HDR on, and there's no option to turn it off, anywhere. (There's one setting that claims to turn it off for games, but it does nothing.)

I didn't realize this and wondered for some weeks why the picture looked like ass. It's a bit too bright, everything is a bit too washed-out, with the brightest pixels just looking... I don't know... glitched somehow. The thing is that I had several points of direct comparison: the console's own display, as well as the original Switch. On those, the picture looks just fine. However, on my TV the picture of the Switch 2 looks too washed-out, too low-contrast, too bright. And this even though I had adjusted the HDR brightness in the console's settings.

One day this was bothering me so much that I started browsing the TV's own menus, until I found, buried deep within layers of settings, one that turned HDR support completely off for that particular HDMI input.

And what do you know, the picture from the Switch 2 immediately looked perfect! Rich colors, rich saturation, good contrast, just like on the console's own screen.

The most annoying part of all of this is that, as mentioned, it's literally not possible to reach this state using the console's own system settings. If your TV tells it that it supports HDR, it will use HDR, and there's nothing you can do in the console itself to avoid that. You have to literally turn HDR support off in the settings of your TV to make the console stop using it.

The PlayStation 5 does have a system setting that turns HDR completely off from the console itself. The Switch 2 does not have such a setting. It's really annoying.

The entire HDR thing is a real pain in the ass. So far it has only caused problems without any benefits, at least to me. 

Thursday, August 14, 2025

The complex nature of video game ports/rereleases/remasters

Sometimes video game developers/publishers will take a very popular game of theirs that was released many years prior, and re-release it, often for a next-gen platform (especially if talking about consoles).

Sometimes the game will be completely identical to the original, simply ported to the newer hardware. This can be particularly relevant if a newer version of a console isn't compatible with the previous versions, allowing people who own the newer console but have never owned the older one to experience the game. (On the PC side this can be the case with very old games that are difficult if not impossible to run in modern Windows, at least without emulation, and thus probably not using a copy that has been legally purchased.)

Other times the developers will also take the opportunity to enhance the game in some manner, improving the graphics and framerate, perhaps remaking the menus, and perhaps polishing some details (such as the controls).

Sometimes these re-releases can be absolutely awesome. Other times not so much, and they feel more like cheap cash grabs.

Ironically, there's at least one game that's actually an example of both: The 2013 game The Last of Us.

The game was originally released for the PlayStation 3 in June of 2013, and was an exclusive for that console. Only owners of that particular console could play it.

This was, perhaps, a bit poorly timed because it was just a few months before the release of the PlayStation 4 (which happened in November of that same year).

However, the developers announced that an enhanced PlayStation 4 version would be made as well, and it was published in July of 2014, with the name "The Last of Us Remastered".

Rather than the developers just going the lazy route of releasing the exact same game for both platforms, the PlayStation 4 version was indeed remastered with a higher resolution, better graphics and a higher framerate, and it arguably looked really good on that console.

From the players' point of view this was fantastic: Even people who never owned a PS3 but did buy the PS4 could experience the highly acclaimed game, rather than it being relegated to being an exclusive on an old console (which is way too common). This is, arguably, one of the best re-releases/remasters ever made, not just in terms of the improvements but, more importantly, in terms of allowing gamers who otherwise wouldn't have been able to experience the game to do so.

Well, quite ironically, the developers later decided to make the same game also one of the worst examples of useless or even predatory "re-releases". From one of the most fantastic examples, to one of the worst.

How? By re-releasing a somewhat "enhanced" version exclusively for the PlayStation 5 and Windows in 2022, with the name "The Last of Us Part I". The exact same game, again, with somewhat enhanced graphics for the next generation of consoles and PC.

Ok, but what makes that "one of the worst" examples of bad re-releases? The fact that it was sold at the full price of US$70, even for those who already own the PS4 version.

Mind you: "The Last of Us Remastered" for the PS4 is still perfectly playable on the PS5. It's not like PS5 owners who don't own the PS4 cannot play and thus experience it.

It was not published as some kind of "upgrade pack" for $10, as is somewhat common. It was released as its own separate game, at full price, on a platform that's still completely capable of running the PS4 version. And this was, in fact, a common criticism among reviewers (both journalists and players).

Of course this is not even the worst example, just one of the worst. There are other games that could be argued to be even worse, such as the game "Until Dawn", originally for the PS4, later re-released for the PS5 with somewhat enhanced graphics, at full price. While, once again, the original is still completely playable on the PS5.

Wednesday, August 6, 2025

"Dollars" vs "cents" notation confusion in America

There's a rather infamous recorded phone call, from maybe 20 years or so ago, where a Verizon customer calls customer support to complain that their material advertised a certain cellphone internet connectivity plan as costing ".002 cents per kilobyte", but he was charged 0.002 dollars (ie 0.2 cents) per kilobyte.

It's quite clear that the ad meant to say "$0.002 per kilobyte", but whoever had written the ad had instead written ".002c per kilobyte" (or ".002 cents per kilobyte", I'm not sure as I have not seen the ad). (It's also evident from the context that the caller knew this but wanted to deliberately challenge Verizon for the mistake in their ad, as false advertising is potentially illegal.)

I got reminded of this when I recently watched a video by someone who, among other things, explained how much money one can get in ad revenue from YouTube videos. He explains that his best-earning long-form video has earned him "6.33 dollars per thousand views", while his best-earning shorts video has earned him "about 20 cents per thousand views". Crucially, while saying this he is writing these numbers, and what does he write? This:


In other words, he says "twenty cents", but rather than write "$0.20" or, alternatively, "20 c", he writes "0.20 c".

Obviously anybody who understands the basics of arithmetic knows that "0.20 c" is not "20 cents". After all, you can literally read what it says: "zero point two zero cents", which rather obviously is not the same thing as "twenty cents". It should be obvious to anybody that "0.20 c" is a fraction of a cent, not twenty entire cents (in particular, it's one fifth of a cent). The correct notation would be "$0.20", ie. a fraction of a dollar (one fifth).
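
Just to make the arithmetic concrete (using the numbers from the Verizon story, as I recall them):

    dollars_per_kb = 0.002      # what Verizon actually charged: $0.002 per kilobyte
    cents_quoted   = 0.002      # what the ad allegedly said: ".002 cents" per kilobyte

    print(dollars_per_kb * 100)     # 0.2    -> $0.002 is 0.2 cents
    print(cents_quoted / 100)       # 2e-05  -> 0.002 cents is $0.00002, a hundred times less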

This confusion seems surprisingly common in the United States in particular, even among people who are otherwise quite smart and should know better. But what causes this?

Lack of education, sure, but what exactly makes them believe this? Why do they believe this rather peculiar thing?

I think we can get a hint from that phone call to Verizon. During that call the customer support person, when explicitly asked, very explicitly and clearly stated that ".002 cents" and ".002 dollars" mean the same thing. When a manager later took over the call, he said the exact same thing.

Part of this confusion seems to indeed be the belief that, for example, "20 cents", "0.20 cents" and "0.20 dollars" all mean the same thing. What I believe is happening is that these people, for some reason, think that these are some kind of alternative notations to express the same thing. They might not be able to explain why there are so many notations to express the same thing, but I would imagine that if asked they would guess that it's just a custom, a tradition, or something like that. After all, there are many other quantities that can be expressed in different ways, yet mean the same thing.

Lending credibility to this hypothesis is the fact that, in that same phone call to Verizon, the customer support person repeatedly says that the plan costs "point zero zero two per kilobyte", without mentioning the unit. Every time she says that, the customer explicitly asks "point zero zero two what?", and she clearly hesitates and then says "cents". Which, of course, is the wrong answer, as it should be "dollars". But she doesn't seem to understand the difference.

What I believe happened there (and is happening with most Americans who have this same confusion) is that they indeed believe that something like "0.002", or ".002", in the context of money, is just a notation for "cents", all by itself. That if you want to write an amount of "cents", you use a dot and then the cents amount. Like, for example, if you wanted to write "20 cents", you would write a dot (perhaps preceded by a zero) and then the "20", thus "0.20" all in itself meaning "20 cents". And if you wanted to clarify that it indeed is cents, you just add the "¢" at the end.

They seem to have a fundamental misunderstanding of what the decimal point notation means and signifies, and appear to believe that it's just a special notation to indicate cents (and, thus, that "20 cents" and "0.20 cents" are just two alternative ways to write the same thing.)

Of course the critics are right that this ultimately stems from a lack of education: The education system has not taught people the decimal system, and how to use it, well enough. Most Americans have learned it properly, but then there are those who have fallen through the cracks and haven't got a proper education on the decimal system and arithmetic in general.

Sunday, August 3, 2025

How the Voldemort vs. Harry final fight should have actually been depicted in the movie

The movie adaptation of the final book in the Harry Potter series, Deathly Hallows: Part 2, makes the final fight between Harry and Voldemort flashy but confusing, leaving the viewers completely unclear about what exactly is happening and why, and does not convey at all the lore in the source material.

How the end to the final fight is depicted in the movie is as follows:

1) Voldemort and Harry cast some unspecified spells at each other, being pretty much a stalemate.


2) Meanwhile elsewhere Neville kills Nagini, which is the last of Voldemort's horcruxes.


3) Voldemort appears to be greatly weakened by this, so much so that his spell just fizzles out, at the same time as Harry's.


4) Voldemort is shown as greatly weakened, but he still casts another unspecified spell, and Harry responds with also an unspecified spell.


5) However, Voldemort's spell quickly fades out, and he appears completely powerless, staring at his Elder Wand with a puzzled or perhaps defeated expression, maybe not understanding why it's not working, maybe realizing that it has abandoned him, or maybe just horrified at having lost all of his powers. Harry's spell also fizzles out; it doesn't touch Voldemort.

6) Harry takes the opportunity to cast a new spell. He doesn't say anything but from its effect it's clear it's an expelliarmus, the disarming spell. 

7) Voldemort gets disarmed and he looks completely powerless. The Elder Wand flies to Harry.

8) Voldemort starts disintegrating.


So, as depicted in the movie, it looks like Neville destroying Nagini, Voldemort's last horcrux, completely sapped him of all power, and despite making one last, very feeble effort, he gets easily disarmed by Harry and then just disintegrates, all of his power and life force having been destroyed.

In other words, it was, in fact, Neville who killed Voldemort (even if a bit indirectly) by destroying his last source of power, and Harry did nothing but just disarm him right before he disintegrated.

However, that's not at all what happened in the books.

What actually happened in the books is that, while Neville did kill Nagini, making Voldemort completely mortal, that's not what destroyed him. What destroyed him was that he cast the killing curse at Harry, who in turn immediately cast the disarming spell, and because the Elder Wand refused to destroy its own master (who via a contrived set of circumstances happened to be Harry Potter), Voldemort's killing curse rebounded back from Harry's spell and hit Voldemort himself, who died of it.

In other words, Voldemort destroyed himself with his own killing curse spell, by having it reflected back, because the Elder Wand refused to kill Harry (its master at that point).

This isn't conveyed at all in the movie.

One way this could have been depicted better and more clearly in the movie would be, for example:

When Neville destroys Nagini, Voldemort (who isn't at that very moment casting anything) looks shocked and distraught for a few seconds, then his shock turns into anger and extreme rage, and he casts the killing curse at Harry, saying it out loud (for dramatic effect the movie could show this in slow motion or in some similar manner). Harry immediately responds with the disarming spell (also saying it out loud explicitly, to make it clear which spell he is casting).

Maybe after a second or two of the two spell effects colliding with each other, the movie clearly depicts Voldemort's spell rebounding and reflecting from Harry's spell, going back to Voldemort and very visibly hitting him. Voldemort looks at the Elder Wand in dismay, then at Harry, then his expression changes to shock when he realizes and understands, at least at some level, what just happened. He looks again at his wand with an expression of despair and rage, but now Harry's new disarming spell knocks it out of his hand, and he starts disintegrating.

Later, in the movie's epilogue, perhaps Harry himself could give a brief explanation of what happened: That the Elder Wand refused to kill its own master (Harry himself), and that Voldemort's killing curse rebounded, killing its caster.

Thursday, July 31, 2025

Matt Parker (inadvertently) proves why algorithmic optimization is important

Many programmers in various fields, oftentimes even quite experienced programmers, have this notion and attitude that optimization is not really all that crucial in most situations. So what if a program takes, say, 2 seconds to run when it could run in 1 second? In 99.99% of cases that doesn't matter. The important thing is that it works and does what it's supposed to do.

Many will often quote Donald Knuth, who in a 1974 article wrote "premature optimization is the root of all evil" (completely misunderstanding what he actually meant), and interpret that as meaning that one should actually avoid program optimization like the plague, as if it were somehow some kind of disastrous detrimental practice (not that most of them could ever explain why. It just is, because.)

Some will also reference some (in)famous cases of absolutely horrendous code in very successful programs and games, the most famous of these cases probably being the video game Papers, Please by Lucas Pope, whose source code is apparently so horrendous that it would make any professional programmer puke. Yet, the game is enormously popular and goes to show (at least according to these people) that the actual quality of the code doesn't matter; what matters is that it works. Who cares if the source code looks like absolute garbage and the game might take 5% of resources when it could take 2%? If it works, don't fix it! The game is great regardless.

Well, I would like to present a counter-example, and it comes from the youtuber and maths popularizer Matt Parker, although an inadvertent one on his part.

For one of his videos, related to the game Wordle (where you have to guess a 5-letter word by entering guesses, with correct letters highlighted in colors), he wanted to find out if there are any groups of five 5-letter words that use all unique letters, in other words 25 different letters in total.

To do this he wrote some absolutely horrendous Python code that read a comprehensive word list file and tried to find a combination of five such words.

It took his program an entire month to run!

Any experienced programmer, especially those with experience in such algorithms, should have alarm bells ringing loudly at this point, as this sounds like something that could be done quite trivially in a few seconds, probably even in under a second. After all, the problem is extremely restricted in its constraints, which are laughably small: Only five-letter words are relevant (every other word can be discarded), and even the largest English dictionaries should contain just a few thousand of them; only words with five unique letters need to be considered (which restricts the list to a small fraction, as all the other 5-letter words can be discarded from the search); and the task is then to find combinations of five such words that share no common letters.

And, indeed, the actual programmers among his viewers immediately took the challenge and quite quickly wrote programs in various languages that solved the problem in a fraction of a second.
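
As a rough sketch of the kind of approach this problem invites (this is my own illustration, not any of those viewers' actual programs, and "words.txt" is a placeholder word list, one word per line): represent each 5-letter word with distinct letters as a 26-bit mask, collapse anagrams into one entry, and do a simple recursive search where "shares no letters" becomes a single bitwise AND.

    # Sketch of the bitmask idea; "words.txt" is a placeholder word-list file.
    def load_masks(path="words.txt"):
        masks = {}
        with open(path) as f:
            for word in (line.strip().lower() for line in f):
                if len(word) == 5 and len(set(word)) == 5:
                    mask = 0
                    for ch in word:
                        mask |= 1 << (ord(ch) - ord("a"))
                    masks.setdefault(mask, word)   # anagrams collapse into one entry
        return sorted(masks.items())

    def find_groups(masks, chosen=(), used=0, start=0):
        # Recursively pick words whose letter masks don't overlap the letters already used.
        if len(chosen) == 5:
            yield chosen
            return
        for i in range(start, len(masks)):
            mask, word = masks[i]
            if mask & used == 0:
                yield from find_groups(masks, chosen + (word,), used | mask, i + 1)

    # for group in find_groups(load_masks()):
    #     print(group)

I haven't benchmarked this sketch, and the truly fast solutions add a lot more cleverness on top (precomputed compatibility lists, deduplication, parallelism), but turning the letter-overlap test into a single bitwise AND is the kind of representation they build on.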

So yes, indeed: His "I don't really care about optimization as long as it does what it's supposed to do" solution took one entire month to calculate something that could be solved in a fraction of a second. Even with an extremely sloppily written program using a slow language it should only take a few seconds.

This is not a question of "who cares if the program takes 10 seconds to calculate something that could be calculated in 5 seconds?"

This is a question of "calculating in one month something that could be calculated in less than a second."

Would you be willing to wait for one entire month for something that could be calculated in less than a second? Would you be using the "as long as it works" attitude in this case?

It's the perfect example of why algorithmic optimization can actually matter, even in unimportant personal hobby projects. It's the perfect example of something where "wasting" even a couple of days thinking about and optimizing the code would have saved him a month of waiting. It would have been actually useful in practice.

(And this is, in fact, what Donald Knuth meant in his paper. His unfortunate wording is being constantly misconstrued, especially since it's constantly being quote-mined and taken out of its context.)