Thursday, November 14, 2024

Sometimes even "star programmers" might not be so "stellar"

I once worked for a small gaming company. One of the lead programmers there was what could be called a "star" coder. A hacker. Not only had he implemented several dozen games in the company, but he was a regular in all kinds of video game "hackathons" (i.e. competitions where small groups of developers create a video game in a day or two), and was quite famous not only in those circles but among other indie gaming companies here as well. Needless to say, everybody in the company considered him not only a stellar coder, but outright crucial to the survival of the company.

There's a saying that goes "never meet your heroes": when you admire a celebrity or someone you don't personally know, actually getting to know that person may well reveal personality or other flaws that greatly mar your admiration.

In this case the saying could be adapted as "never look at the star programmer's code".

There are admired stellar programmers who genuinely write excellent, well-crafted code, who know what they are doing, and who have an extremely good understanding of algorithms and programming techniques, both from the theoretical and the practical point of view.

Then there are "stellar" programmers who... let's say, only give the appearance of being so. They might be very prolific and produce impressive-looking results, but if you delve into what they have written and what their actual knowledge is, it might be less than impressive.

You see, even though we worked for the same company, we were always on completely separate projects, so I never got to see his code. With one exception: one time I briefly participated in the development of one particular game.

It turned out that this "stellar" programmer with his impressive resume... wasn't one of the best programmers I have ever met. Sure, he could write code, and he knew the programming languages he was using, but I quickly noticed how poor his understanding of many programming-related concepts and algorithms was.

As an example, in that project there was a need for a random number generator, and because of the programming language being used, only floating-point numbers were available. He had implemented an extremely poor-quality Lehmer RNG. When I checked what the period of the generator was, it was in the thousands. I'm not kidding. His RNG went through a few thousand values before it started repeating.

When I mentioned this to him, he had no idea what I was talking about. He had literally no understanding of even the basics of random number generation, and had never even thought about things like the period of an RNG. He had never tested what the period of his RNG was, or checked its quality in any way. He was simply lucky that its poor quality happened not to be egregiously obvious to the player of the game.
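For reference, measuring the period of a generator this weak takes only a few lines: iterate the state transition until a state repeats, and count the steps. Here is a minimal sketch in Python (the project used a different language and different constants; the toy multiplier and modulus below are purely illustrative):

```python
def lehmer_step(state: float, a: float = 75.0, m: float = 65537.0) -> float:
    """One step of a toy Lehmer generator: state' = (a * state) mod m.
    Everything is kept as floats to mirror a language with no integer type."""
    return (a * state) % m

def measure_period(step, seed: float) -> int:
    """Iterate the generator until a previously seen state recurs."""
    seen = {seed}
    state = step(seed)
    steps = 1
    while state not in seen:
        seen.add(state)
        state = step(state)
        steps += 1
    return steps

# Prints 65536 here: 75 is a primitive root of the prime 65537, so the period
# is the full m - 1.  A bad choice of constants collapses it to a tiny fraction.
print(measure_period(lehmer_step, 1.0))
```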

I suggested I implement a slightly better 32-bit LCG. While the quality of LCGs is also not brilliant, at least the period could be pumped up from a few thousand to over 4 billion (and the quality can be somewhat improved by mixing the higher bits into the lower bits.) However, implementing this 32-bit LCG when 32-bit floating-point values are the only numerical data type available required a bit of ingenuity.

Ingenuity that, as it turns out, he absolutely did not have. Not only did he have no idea what an LCG was, he had no idea whatsoever how to implement 32-bit integer arithmetic using 32-bit floats. When I explained to him how to do 32-bit multiplication using 16-bit integers (which floats can handle just fine) with the classical long multiplication algorithm, he had no idea what I was talking about.
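To give a rough idea of the technique, here is a sketch in Python rather than the original language. (Python's floats are 64-bit doubles, so the 16-bit "digits" below have more headroom than they would with genuine 32-bit floats, where you would split into even smaller pieces; the structure of the long multiplication is the same either way. The LCG constants are the widely used Numerical Recipes ones, not necessarily the ones from the project, and the bit-folding output step is just one simple option.)

```python
import math

# Widely used 32-bit LCG constants (Numerical Recipes); illustrative only.
A = 1664525.0
C = 1013904223.0
TWO16 = 65536.0
TWO32 = 4294967296.0

def lcg_step(x: float) -> float:
    """One step of x' = (A*x + C) mod 2^32, using only float arithmetic.

    A and x are split into 16-bit halves and combined with schoolbook long
    multiplication, so no single product ever needs a full 64-bit result.
    The a_hi*x_hi term would land entirely above bit 32, so it vanishes
    mod 2^32 and is never computed.
    """
    x_hi = math.floor(x / TWO16)        # upper 16 bits of x
    x_lo = x - x_hi * TWO16             # lower 16 bits of x
    a_hi = math.floor(A / TWO16)
    a_lo = A - a_hi * TWO16
    mid = (a_lo * x_hi + a_hi * x_lo) % TWO16   # cross terms, reduced to 16 bits
    return (a_lo * x_lo + mid * TWO16 + C) % TWO32

def lcg_output(x: float) -> float:
    """A 16-bit output with the high half folded onto the low half; one simple
    way to avoid relying on the notoriously weak low bits of an LCG."""
    hi = math.floor(x / TWO16)
    lo = x - hi * TWO16
    return (hi + lo) % TWO16

# Sanity check against ordinary integer arithmetic:
xf, xi = 12345.0, 12345
for _ in range(1000):
    xf = lcg_step(xf)
    xi = (1664525 * xi + 1013904223) % 2**32
    assert int(xf) == xi
```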

This guy, who had gone through school like everyone else, had no idea how long multiplication works. And apparently he wasn't even very interested when I tried to explain it to him. I don't know which was more appalling: his lack of knowledge, or his lack of interest.

Yeah. Never meet your heroes, nor read their code.

Friday, September 13, 2024

My prediction on PS5 Pro sales numbers

As of writing this blog post, September of 2024, Sony has recently announced their upgrade to the original PlayStation 5, the PlayStation 5 Pro.

The announcement was met with less than enthusiastic responses, primarily because of the announced launch prices, which most people don't see as being in any way justified by the less-than-impressive hardware improvements and lack of a disc drive. (The announcement did not mention a version of the console with one at all.)

The announced launch price of the console was US$700 / £700 / €800.

Contrast this with the launch price of the original disc-driveless PS5, US$399 / £359 / €399.

The jump in price is very significant, while the announced improvements in hardware are moderate at best: an announced 45% increase in GPU performance, and almost no improvement in CPU or RAM speed. Sony's presentation claimed an improvement of about 100% in hardware raytracing performance (i.e. roughly double), but this is less than impressive considering that the number of PS5 games supporting raytracing is abysmal (and even in most of those the visual improvements are not very notable.)

I am making the prediction here, in September of 2024, that the PS5 Pro will sell relatively poorly. Not abysmally, but still quite poorly (unless Sony changes something drastically to improve the situation.)

The original PS5 has sold approximately 60 million units as of writing this post.

I predict that, if the console is sold as announced, for its announced launch price, and Sony doesn't do anything to significantly improve the situation, the PS5 Pro will sell about one tenth of that, i.e. less than 10 million units. Perhaps even significantly less (5 million or fewer.)

Existing PS5 users have little incentive to upgrade. Some definitely will (because there are always so-called "whales" who want the latest and best, and don't care about price), but only a small fraction.

People buying a PS5 for the first time will have two choices, one costing almost double the other. Either through research or word of mouth, they will probably know that the more expensive version does not provide improvements significant enough to justify paying double the price, so they are likely to choose the cheaper option.

So, in summary, my prediction is that the PS5 Pro will sell about 5-10 million units in total, if sold as announced and Sony doesn't change things significantly. Compare that to the 60 million base PS5 units sold.

Saturday, July 20, 2024

One of the funniest misconceptions often still taught today

Especially in certain countries (which isn't surprising, because the spread of misconceptions is strongly tied to local word-of-mouth culture and customs) it is taught even to this day that a person loses most of their body heat through the head. Sometimes no exact figure is given, but when one is, 80% seems strangely common. This is supposed to teach how important it is to wear a hat in cold weather, especially in sub-zero temperatures. It's also commonly repeated as a factoid in other situations.

As you might have guessed, this is just a myth. In reality a person loses about 10% of their body heat through the head, which isn't surprising, because the head makes up about 10% of the body's total surface area. This, of course, means that there's nothing special about the head compared to the rest of the body in this regard.

The interesting part is where this myth originates.

The most plausible origin of the myth is an experiment conducted by the United States military sometime around the middle of the 20th century. In this experiment they tested how much body heat is lost by military personnel wearing arctic suits in sub-zero temperatures. They measured that, of all the heat emitted by a soldier's body, about 45% was lost through the head.

Which, of course, is a lot higher than 10%. Not nearly as high as the often-cited 80% figure, but still significantly higher.

The figure, however, starts making sense once you realize that when the test was performed, the army arctic suit didn't have any sort of warm headgear, so the soldiers were either not wearing any headgear at all, or only a helmet, which obviously provides very little protection against heat loss.

The 45% figure applies when wearing thick winter clothes without headgear. It does not indicate how much heat a naked person loses from their head.

And even then, it's just 45%, not the often-cited 80%.

As a result of this study the military started issuing warm protective headgear alongside the rest of the arctic suit. And, somehow, this result spread among the general public and mutated via word of mouth until it became the modern misconception.

Sunday, February 11, 2024

How do some (software) projects become vaporware?

"Vaporware" is a term generally used for a project that's announced by a company, which then takes an inordinate amount of time to complete, and might in fact never be completed even though years and years, sometimes decades, are spent on developing it. Sometimes the project does eventually end up being released; however, most often than not it turns out to be less than stellar, sub-par to various degrees, and gets critically panned and disliked. Very often this is because, probably due to its extremely long development time, its technology is antiquated by the time it gets published and may still contain a lot of missing or buggy features (no doubt having not been completely finished for the exact same reasons it was so immensely delayed in the first place.)

Hidden behind the layer of publicly announced vaporware is, however, an immensely bigger layer of what could be called "eternal projects". These are projects that have gone over any reasonable timeline by an order of magnitude, but which have never been announced publicly by the company (most likely because the company only wants to announce finished products, not future projects.) These are, essentially, "company-internal vaporware": normal vaporware, just never announced to the public in advance.

Projects (software and otherwise) that go nowhere are extremely commonplace in all kinds of technology companies. Someone presents an idea, development is started, but it turns out that for one reason or another the idea is actually infeasible, or would take way too much time and resources, or in practice isn't as good as it sounded on paper, so it's abandoned. Usually in the timespan of a few months, a year at most.

However, these "eternal projects", "company-internal vaporware" projects, may be developed for years and years and years, sometimes even for over a decade, with no end in sight, even though it never seems to be even close to be finished, nor usable in practice, or something that actually feels like a good full video game (if the project is one). Rather than being cancelled after a few months, they are just stubbornly continued for many years, even though it never seems to get even close to finished.

But why? And how does a project become this kind of "eternal project" (either publicly announced or unannounced)? Why don't companies just cancel such projects when it has become clear that they aren't really going anywhere?

Having had some personal experience with these things, I can speculate from my perspective about some of the reasons.

One of the major, perhaps most obvious, reasons is the sunk cost fallacy. The people managing and running the project (and in my experience it most usually is them, rather than the developers and engineers themselves) get into the strong mindset that the project actually is going somewhere: a finished product is just around the corner, it just needs a little bit more work, the final breakthrough will be achieved, and the finishing touches will be routine, smooth sailing from that point forward. And where the sunk cost fallacy kicks in is the strong feeling that, because so many years and so much money have already been spent on the project, it would be a huge loss and an enormous waste to just stop and cancel it. The team has spent all this time, all this effort, all this work, there are years and years worth of material produced by the team, and they are so close to finally getting a working result, that it would be unthinkable to throw all that work, effort and money into the trash.

These managers, however, often become blinded and can't see that the project actually isn't going anywhere, that the existing work is just outright bad, and that the project should have been cancelled literally years earlier with all the developers and engineers moved to other, more productive projects. The developers themselves can often see this, but they either don't really want to say anything, or if they do, their objections are dismissed.

But how does a project, like a software project, end up in this state? A state of being, effectively, eternally stuck: seemingly in constant development, yet getting nowhere, never actually advancing towards a final product?

There may be many reasons for this, but one common one is the lack of a plan: a very exact vision of what the final product must be like, laid out before development even starts. In other words, the project started as a more generic idea and took the approach of "let's plan the details as we go". For example, if the project is a video game, the plan might have been to create a game of a particular genre, with certain features... but that's it. All the actual concrete details are made up during development, "as we go." There is no precise and exact picture of the end goal.

This usually results in a form of development where the project itself is used as a testing bed for ideas and features. The person or people running the project keep coming up with new features to try, new ideas, new mechanics, to be tested on the project currently in development to see if they work. These new ideas might literally come on an almost daily basis. "Let's try this. Let's try that. Let's change this into that. Let's add this to that. Let's remove this. Let's re-add this previously removed feature." An endless stream of new features and changes that just accumulates over the years, without any concrete vision or plan of the final product.

One reason why a project manager may engage in this is that he has completely misunderstood what the "agile development" paradigm actually means and entails. (Yes, an actual case from personal experience.)

Such a project manager might become, ironically, completely blind to the bigger picture: the fact that the project isn't actually going anywhere, is nowhere near finished, and is in complete shambles, a complete mess, when it comes to actual good design; if it's a video game, for example, it's nothing that anybody would play or enjoy.

Bosses, CEOs and other higher-ups might also be misled about the project. Perhaps they are shown previews of the product that paint a much better picture of it than reality. Company-internal deceptive marketing, of sorts. The higher-ups might genuinely get the wrong impression that the product is better and more finished than it really is, and thus not stop it.

Or, in many cases, it's precisely the bosses and CEOs who are the ones engaging in the endless cycle of trying new features without a clear plan or goal for the end product, unable to see that the project is a complete disaster and should have been cancelled years earlier.

Sunday, January 14, 2024

Why too much exposition ruins movies and games

Many years ago I went a couple of times to an event organized by some university student group where you could be introduced to and play all kinds of tabletop board games. I thought it would be a good way to have fun and socialize, and perhaps even find interesting tabletop games.

One of the organizers there, however, pretty much ruined the entire thing for me. The reason for it was that it seemed like he just loved the sound of his own voice, and when he started introducing some new board game to a small group of interested players, he would just explain... and explain... and explain... and explain... and explain... endlessly. He would literally take like 15 minutes explaining and explaining some board game (that wasn't actually even all that hugely complicated; it's not like it was Warhammer or some other enormously complex game.) Rather than, you know, actually allowing people to learn by playing.

The problem was, of course, that such a huge info dump is impossible to follow and remember. It's completely useless to explain a complex board game for fifteen minutes, because nobody can remember all of that at once, especially when they have absolutely no experience with the game itself, no context, and all they hear are words and more words disconnected from any actual hands-on play. So I would just doze off after a minute or two and listen to the stream of meaningless word salad for 10+ minutes, completely bored out of my skull. Those 15 minutes could have been used to actually play the game and learn the concepts that way, one by one as they come up during play. After what felt like an absolute eternity the game would finally start, and almost nothing of what he explained helped at all, because nobody could remember it. It was 15 minutes completely wasted for no benefit. We learned more from the first 5 minutes of actually playing the game than from that entire 15-minute info dump.

Way too many video games, especially nowadays, commit this exact same mistake: quite often at the very beginning of the game, before the player has had any chance to get hands-on experience with it, they throw textbox tutorial after textbox tutorial at the player, usually interrupting gameplay, and way too often either explaining complete trivialities or dumping so much information at once that the player has zero chance of absorbing it, completely disconnected from any actual hands-on play (and thus the player has no way of connecting what the tutorial says to the actual gameplay, making it much harder to remember.)

(When it comes to explaining completely trivial things, which is way too common especially in a certain type of Japanese RPG, it almost feels like the developers have the mentality of "we went through all this trouble to implement a tutorial system, let's use it to the fullest, dammit!" and start throwing the most trivial things at the player, like how to click a button or exit a menu, things that would be completely obvious to anybody without having to be explained.)

Sometimes, however, this kind of needless exposition and explanation extends to the storytelling itself, which means it can affect not only video games but also movies, TV series and even books.

Movies, especially those that are based on stories originally told in another medium (usually a book or a game), tend to be especially egregious in this regard.

One particularly notorious and aggravating example is the 2021 film adaptation of Dune.

The opening scene of the book doesn't happen until about 20 minutes into the movie. The first 20 minutes are nothing but boring exposition.

This is not how you tell a story! Frank Herbert, when he wrote the book, understood how to tell a story in an interesting manner, in a way that immediately engages the audience. You start with something that grabs your attention, piques your curiosity, excites your imagination.

You don't start with 20 minutes of exposition!

Clearly the scriptwriters of the movie did not understand this at all; they apparently felt the average audience is so stupid that they need 20 minutes of exposition before they can "understand" what's going on, and that simply doing what the book did would leave viewers confused. They clearly understood very little about good writing.

And this is, by far, not the only example, just one of the most egregious recent ones.

Monday, November 20, 2023

The one thing that Unreal Engine did right that Unity did wrong

When it comes to game engines used in triple-A games, the competition was quite fierce in the 1990s and early 2000s, but ultimately two engines grew so immensely popular, among both big triple-A studios and many indie studios, that they have become almost ubiquitous and hold the vast majority of the market share between themselves, with other engines left almost completely in the dust.

These game engines are, of course, Unreal Engine and Unity.

In terms of features and visual quality these engines are extremely similar, and they are in constant competition with each other over who can outdo the other in visuals and fancy "next gen" features.

However, for the longest time (more so in the past than today, but still to a large extent even to this day) these two engines have had rather different reputations among gamers.

Unity has always been considered a kind of "cheaper", "smaller" and, in a way, "worse" engine, while Unreal Engine has got this image of being a serious heavy-hitter for the truly massive and impressive multi-million-dollar budget triple-A games.

In fact, Unity has for quite a long time had a negative association with scammy asset-flip trash that gets dumped onto Steam and other digital stores just for a quick buck. It seems to be almost too easy to make games (especially mediocre ones) with Unity, allowing scammers and opportunists to quickly throw together something that superficially looks like a game (and may even have fancy-looking screenshots) in order to fool people into buying it, essentially defrauding them with an unstable, trash asset-flip game in name only.

Likewise Unity has for the longest time had this image of being associated with free-to-play and very cheap small indie games, made by either individual people or extremely small indie studios.

In contrast, as mentioned earlier, Unreal Engine has always had this strong association with really big-budget massive high-quality triple-A titles. Almost all of the really big and famous game franchises seem to use it. Unreal Engine isn't really associated with small indie games nor scammy asset flips.

Is this because Unity is free to use and Unreal Engine is very expensive (and thus only affordable to big game studios)? No. Both have extremely generous usage licenses that allow using them completely free of charge up to surprisingly large amounts of revenue (and even after that you pay from your revenue, not upfront). Anybody can use either engine for free, no strings attached.

Is it because Unity is a very small and simple engine suitable only for small and simple games, while Unreal Engine is a massive engine supporting all the bells and whistles? Again, no. As also mentioned earlier, both are very modern and very complete in their support for modern gaming features, and are both capable of very similar graphics and other video game technologies. It is perfectly possible to do a full-on hundred-million-dollar-budget huge triple-A game with Unity, and a very small and cheap indie game with Unreal Engine.

So given how similar both engines are in terms of features, size, scope and usage licenses, how come Unity has got this reputation of being for small indie games and scammy asset flips, while Unreal Engine has got this image of being a "big boys" engine for the massive triple-A games?

This is because of one perhaps slightly surprising policy that the creators of these engines have had for the longest time. More precisely, because the two engines are polar opposites in terms of this particular policy.

You see, since pretty much the beginning and up to this day (although this was recently changed, if I remember correctly), the free version of Unity had the policy that any game made with it had to display the Unity splash screen when the game launched. Only a paid license for the engine allowed disabling the splash screen.

The developers of Unity probably thought that this would work as advertisement for their engine, in return for it being free to use. A bit like a small form of ad-supported software (or, more precisely, supported by a single ad in this case).

In contrast, Unreal Engine has the exact opposite policy: you are not allowed to show the Unreal Engine splash screen in your game unless you get a particular paid license for the engine. In other words, you need permission to show the Unreal Engine splash screen; otherwise you can't show it.

Well, turns out that Unreal Engine, perhaps serendipitously or because of amazing foresight, had the better idea.

The reason why Unity is often associated with small crappy games and scammy asset flips is precisely that they all show the Unity splash screen when launched (it can't be disabled in the free version). Bigger triple-A games, however, usually disable the splash screen because it may not go well with the aesthetics of the game.

Thus there's a strong association between the Unity splash screen and the small crappy games.

In contrast, you almost exclusively see the Unreal Engine splash screen in huge triple-A games, and never in small indie games (where it's outright forbidden from being used, even if the game uses the engine), which is why the name is often associated with the former.

Thursday, August 17, 2023

How to browse the internet as safely and anonymously as possible

It doesn't really matter why one would want to browse the internet as anonymously and safely as possible; it is within everybody's rights to do so, if they so wish. The motivations behind it aren't anybody else's business. There can be completely legitimate reasons for wanting to browse the internet with complete anonymity, leaving no trace behind, and keeping your computer safe from any malicious software you might encounter online.

Important note: No method can ever be 100% foolproof, with zero chance of malicious actors, hackers, governments or other parties getting hold of your PC and/or seeing what you are doing. If you connect your PC to the internet (and sometimes even if you don't, if it has any sort of wireless capabilities) you always take some risk.

That being said, following all of these steps will significantly reduce those risks, making it extremely hard for malicious actors or software to see what you are doing, and much harder for any malware to gain access to or invade your computer.

Another important note: Employing only one or two of these steps, while it may already add some safety, will not be sufficient. The more of these steps you use, the safer and more secure it will be.

1: Use a VPN

By this point it almost sounds like a cliche, but it does help: using a VPN makes it significantly harder (although not 100% impossible) for anybody to connect what you are browsing back to your computer. It will (at least ostensibly) stop your internet service provider from seeing what you are browsing, because your ISP will only see an encrypted connection to some VPN server somewhere, not what you are actually connecting to at the end of the chain.

Note that using a VPN will introduce significant lag to your internet connection (which is something VPN service providers will often lie about), so you might not want to have it constantly on, but only when you want to go private.

Also note that, as far as I know, there are no good free-of-charge VPN services out there, so if you want to use one you'll have to buy a subscription. There's probably no (legal) way around this, but depending on your needs it may be worth it.

2: Use virtual machine software

Way too few people know and understand how incredibly handy and versatile virtual machines are.

A virtual machine (such as VirtualBox or VMware) allows you to install and run a second operating system in such a manner that it's completely encapsulated in its own hardware sandbox (with all of its files in their own directory in the host operating system). Modern processor architectures allow running a guest OS at practically the same efficiency as a natively installed OS.

There are many advantages to a virtual machine: whatever you do inside the virtual machine stays within the virtual machine and has no effect on your natively installed host operating system. (There may exist "escape" exploits for some virtual machines, but these are rare. And, as said earlier, no system can ever be 100% safe; you can only try to increase safety as much as you can.)

Additionally, a virtual machine allows you to effectively take a "snapshot" of the entire guest system and later restore the whole thing to what it was at the time of the snapshot. In other words, it's effectively a perfect 100% backup that moves time back and restores the system to exactly what it was before, bit by bit. If you ever want to undo something you have done inside the virtual machine, you can just restore this backup snapshot, and everything done after it will be gone. (The easiest way to take such a "snapshot" is to simply copy the directory where the virtual machine files are located somewhere else. You can later copy it back, which restores the guest system to what it was.)
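As a concrete illustration of the copy-the-directory approach, here is a minimal sketch in Python (the paths are hypothetical examples, and the virtual machine should be powered off while copying so that its disk image is in a consistent state):

```python
import shutil
from pathlib import Path

# Hypothetical example paths; adjust to wherever your VM software keeps its files.
vm_dir = Path.home() / "VirtualBox VMs" / "PrivacyVM"
snapshot_dir = Path.home() / "vm-snapshots" / "PrivacyVM-clean"

# Take a "snapshot": copy the entire VM directory somewhere else.
shutil.copytree(vm_dir, snapshot_dir)

# ...and later, to restore it: delete the current VM directory and
# put the copy back, bit for bit.
shutil.rmtree(vm_dir)
shutil.copytree(snapshot_dir, vm_dir)
```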

Also, a virtual machine allows running Linux inside it, even if your natively installed host OS is Windows. Linux in itself adds a layer of protection, as it's less targeted and less vulnerable to attacks (e.g. by trojans, viruses, etc.)

3: Use an encrypted partition in the virtual machine

When installing the guest operating system into a virtual machine, choose in the installer to use an encrypted partition. Most Linux distros offer this possibility in their installers (and if one doesn't, either choose a distro that does, or look up tutorials on how to make the partition encrypted.)

When the guest operating system has been installed on an encrypted partition inside the virtual machine, whatever you do inside the virtual machine will leave no recoverable trace anywhere on your hard drives or SSDs: anything written to disk inside the virtual machine gets encrypted first. (Remember that simply deleting a file does not necessarily remove its bits from the storage device. Not even if you use some kind of "file shredder" application that tries to eliminate the original data by overwriting the file: on modern SSDs these overwrites may be written to a different location in the physical storage device. When the partition is encrypted to begin with, nothing is ever written to the storage device unencrypted, and thus there is no unencrypted trace of it anywhere.)

For the extra paranoid, you might want to use an encrypted partition for your natively installed host OS as well (be it Linux or Windows); this too adds an extra layer of security, but it's up to you whether you want to go through the trouble. Doing it inside the virtual machine is hassle-free.

4: (Optionally) use a Tor browser inside the virtual machine

While the Tor network is often associated with the "dark web" and all kinds of illicit and illegal activities, it's not designed for that in principle; it's a legitimate way to browse the internet anonymously, and it can be used to browse the regular internet.

It shouldn't really be relied upon by itself, without anything else, but in addition to all of the above it adds yet another layer of protection.

Note that Tor is, in a sense, an alternative to a VPN, so using both at the same time might not simply stack one form of protection on top of the other. However, it may still be useful to use both, especially if you are going to use a normal web browser in addition to a Tor browser.

If you are going to use a regular web browser inside the virtual machine, it's recommended to use the "incognito mode" provided by the browser. This is not because it adds any security or anonymity (it doesn't), but because it's a convenient way of erasing whatever your browsing left behind on disk, like tracking cookies, scripts, etc. If a dubious website attempts to do something to your system (even if it's just the guest system running inside the virtual machine), this adds a small layer of safety in that the browser removes whatever that website stored when the browser window is closed. It's a very mild form of protection, but it doesn't hurt to use it, and it's much more convenient than doing a full virtual machine snapshot restore.

Even among regular web browsers, not all browsers are equal. Some have been specifically fine-tuned to make things like fingerprinting and tracking by websites as difficult as possible. An example of such a web browser (and one widely preferred by privacy-conscious people) is LibreWolf, which is a fork of Firefox.