Tuesday, December 27, 2016

A lesson developers seldom learn: Text-based tutorials

There exists a principle of graphical user interface design that many books and university courses on the subject present: If you feel the need to explain the use of some element via a text label, that's a sign that it's badly designed and could benefit from a redesign. The simpler the element is supposed to be to use, the stronger the sign of bad design if its use still has to be explained with text.

(Note that this is different from simply naming an element with a text label. The name of, for example, a button can make it clear and self-explanatory what that button does. Problems begin when the function of said button is not clear from its name alone, and additional text is needed to explain what it does. Perhaps the most egregious example of this is a dialog with something like "click OK to (do something)".)

Optimally, a user interface would have as little explanatory text as possible while still being intuitive and easy to use: every element is either completely self-explanatory, or at least very easy to figure out (and can be tried out "safely", in other words, just experimenting with it doesn't cause irreversible, or hard-to-reverse, changes to anything.)

This same idea can be, and optimally should be, applied to video game design as well. It's a lesson the vast majority of games should take to heart: If you feel the need to explain something, or add a tutorial, in the form of text on screen, then you are doing it wrong.

As an example, suppose you are making a 2D platformer where the playable character can "wall-jump" (in other words, when the character is in mid-air but touching a wall, the jump button will make the character jump off the wall, to an even greater height). These wall-jumps may be an integral part of the gameplay (even to the point of some levels requiring them to be completed.)
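For illustration, here is a minimal sketch of how such a wall-jump rule might look in code. All names and constants here are hypothetical, purely to make the mechanic concrete:

    from dataclasses import dataclass

    @dataclass
    class Player:
        vx: float = 0.0
        vy: float = 0.0
        on_ground: bool = False
        touching_wall: int = 0   # -1 = wall on the left, +1 = wall on the right, 0 = no wall

    JUMP_SPEED = 12.0       # hypothetical units
    WALL_JUMP_PUSH = 8.0    # horizontal push away from the wall

    def handle_jump(player: Player, jump_pressed: bool) -> None:
        """Apply a normal jump or a wall-jump depending on the player's state."""
        if not jump_pressed:
            return
        if player.on_ground:
            player.vy = JUMP_SPEED                   # ordinary jump from the ground
        elif player.touching_wall != 0:
            player.vy = JUMP_SPEED * 1.2             # wall-jump reaches even higher
            player.vx = -player.touching_wall * WALL_JUMP_PUSH   # kick off, away from the wall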

There are essentially two options to inform the player of this mechanic: Explain it with text, or create an introductory level where players will intuitively and naturally discover the mechanic on their own.

Some games embrace and implement this idea of subtle "tutorial levels" to a magnificent extent (the Portal series being a good example.) Way too many games, however, simply take the easy route of slapping some tutorial text on the screen, which comes across as lazy game design.

Intuitive controls and game mechanics, learnable without any text-based tutorials, are especially important in casual and mobile games, because the average casual gamer does not read on-screen text. (There is also the practical consideration that not all players may understand the language.) But they are important in all types of games.

But this is a lesson that most game developers have still not learned. Text-based tutorials are ubiquitous, even though games that succeed in being intuitive without them feel much better designed.

Friday, December 23, 2016

PS4 Pro 4k checkerboard interpolation explained

The PlayStation 4 Pro is an upgraded version of the original PlayStation 4, with a more powerful CPU and graphics chip. New to the console is native support for 4k displays, in other words 3840x2160 pixels (which is exactly double the standard 1920x1080 "full-HD" resolution in both directions, or, in other words, exactly four times as many pixels.)

The 4k resolution is, however, quite demanding in terms of graphics hardware, and even the PS4 Pro is not capable of rendering many existing PS4 games at that resolution (as it requires four times as much fill rate from the graphics hardware.)

In order to take advantage of a 4k display, however, the PS4 Pro by default (unless a game specifically wants to use native 4k) uses an interpolation mode in which only twice as much fill rate is needed, rather than quadruple. In other words, it renders the game at 1080p twice, covering half of the entire 4k screen, and then interpolates the other half.

It does this by first rendering the game normally at 1080p, and then again, but with the viewport shifted by half a pixel diagonally. When these two images are then interleaved onto the screen, they cover the entire 4k display in a kind of checkerboard pattern. The remaining pixels are then simply interpolated from the surrounding pixels, which is a lightweight operation for the graphics hardware.
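As a rough illustrative sketch of the interleaving-plus-interpolation idea (this is not Sony's actual implementation, whose details are not public; it merely models the pixel layout described above):

    import numpy as np

    def checkerboard_compose(frame_a, frame_b):
        """Interleave two 1080p frames into a 4k frame in a checkerboard pattern,
        then fill the remaining pixels by averaging their horizontal neighbors.
        frame_a, frame_b: arrays of shape (1080, 1920, 3); frame_b is assumed to
        have been rendered with a half-pixel diagonal offset."""
        h, w, c = frame_a.shape
        out = np.zeros((h * 2, w * 2, c), dtype=np.float32)
        known = np.zeros((h * 2, w * 2), dtype=bool)

        # frame_a goes to the "even" checkerboard squares, frame_b to the "odd" ones.
        out[0::2, 0::2] = frame_a
        out[1::2, 1::2] = frame_b
        known[0::2, 0::2] = True
        known[1::2, 1::2] = True

        # Interpolate the unknown pixels from their left/right known neighbors.
        left = np.roll(out, 1, axis=1)
        right = np.roll(out, -1, axis=1)
        out[~known] = (left[~known] + right[~known]) / 2
        return out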

The following image (which has enlarged pixels) demonstrates this process (click on any of the images below for a full-sized version). The first image is a small section of one of the two 1080p images, the second image demonstrates the checkerboard pattern formed by the two 1080p images, and the third image has the remaining pixels interpolated from those.


As you can see, the final result will have more details to it than the original 1080p version.

Below is the same example in four modes for comparison, showing how they would look on a 1080p and a 4k display: the original 1080p version, that version simply upscaled to 4k, the checkerboard-interpolated version, and a version rendered in native 4k.


While the checkerboard-interpolated version doesn't exactly match the quality of the native 4k resolution, it's nevertheless closer to it than the 1080p version.

That section, as well as the following ones, is taken from this screenshot. The locations of the details being demonstrated are marked in the image. Click on the images to see a full-sized version.


As the last image demonstrates, this technique is not without its flaws, as it sometimes introduces a very distinctive dithering-like pattern. The quality of the result is highly dependent on what kind of content the image contains.

Tuesday, December 20, 2016

Choosing the perfect monitor... harder than one might think

I recently decided to buy a new monitor to replace my almost 7-year-old BenQ GL2430. Not that there's anything wrong with it; it's actually quite good for its price. But I wanted something better, especially since I have a PlayStation 4 Pro.

There were several things I was looking for.

Panel type


That old monitor uses a TN (twisted nematic) panel. The advantages of that panel type have historically been a much cheaper price and fast response times (although in this particular case a 5 ms response time is nothing to brag about). The biggest disadvantage of TN panels is that they have extremely poor viewing angles, especially vertically: With that monitor (as with most other TN monitors), you only need to raise or lower your head a few degrees to see a significant change in colors. It's in fact so bad that the upper area of the display will have visibly different coloring than the lower area, for the simple reason that the viewing angle to each is different (unless you are watching from quite far away).

A much higher-quality alternative is the IPS (in-plane switching) panel. While not absolutely free of the viewing angle problem, it suffers from it only a very small fraction as much as TN panels do. IPS panels also tend to have more vibrant colors, and to reproduce the entire sRGB color gamut better and more accurately (which is why they are preferred by professionals dealing with graphics, such as print work). Historically their disadvantages have been a higher price and slower response times. However, in recent years both have improved significantly, to the point that IPS panels are seriously starting to compete with TN panels for the mass market.

Since I'm upgrading my monitor, I really prefer the IPS option.

Resolution


The next consideration was the resolution. While the 1920x1080 of that BenQ monitor is not horrendous, if I'm upgrading my monitor I would prefer a higher resolution. There are several options for this, but the two most common ones with a 16:9 aspect ratio are the WQHD resolution, ie. 2560x1440 pixels, and the 4K UHD resolution, ie. 3840x2160 pixels (which is exactly twice the standard 1920x1080 in both directions).

Now, if I had been buying this for my PC only, then the WQHD resolution might have been the sweet spot. That's because my PC, while quite a hefty gaming PC (i5, GTX970), is not exactly capable of rendering most modern graphically heavy games at 4k resolution at 60 FPS. But 2560x1440 ought to be fine. The problem, however, is that I am also buying the monitor for use with the PS4 Pro.

The PS4 Pro only supports "standard" TV resolutions. That means 1080p or 2160p, but nothing in between. If you connect a 1440p monitor to it, it will simply use a 1080p resolution. (This information was actually surprisingly hard to find online.) Using a 1080p signal on a 1440p monitor might not be the best possible scenario (as each source pixel has to be stretched to 1.333 times its size on the display, which makes most of the image look blurry.)

While buying a 1080p monitor was not completely out of the question, it was really tempting to be able to use the PS4 Pro at 4k. Thus, preferably, the new monitor would be a 3840x2160 "4k" monitor.

There was one additional requirement for a 4k monitor, though: It had to have an HDMI 2.0 input. That's because HDMI 1.x only supports 4k resolution at 30Hz, while HDMI 2.0 supports it at 60Hz. The PS4 Pro has an HDMI 2.0 output, and can, at least in theory, output 4k at 60Hz. Some games (current or future) might even do that.
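A rough back-of-the-envelope calculation shows why the HDMI version matters here. It ignores blanking intervals and other overhead, and uses the commonly cited usable data rates of roughly 8.16 Gbit/s for HDMI 1.4 and 14.4 Gbit/s for HDMI 2.0:

    # Uncompressed video data rate for 4k at 24 bits per pixel (8 bits per channel).
    width, height, bits_per_pixel = 3840, 2160, 24

    for refresh_hz in (30, 60):
        gbit_per_s = width * height * bits_per_pixel * refresh_hz / 1e9
        print(f"4k @ {refresh_hz} Hz needs about {gbit_per_s:.1f} Gbit/s of pixel data")

    # Prints roughly 6.0 Gbit/s for 30 Hz and 11.9 Gbit/s for 60 Hz:
    # 60 Hz fits within HDMI 2.0's ~14.4 Gbit/s of usable bandwidth,
    # but not within HDMI 1.4's ~8.16 Gbit/s.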

So, primarily I would be looking for a 4k monitor, with a fallback to a very good 1080p one if everything else fails.

G-sync


The next consideration was a long-time dream of mine: G-sync support. It would just be great to have g-sync, and to no longer worry about games not quite reaching that golden 60 frames per second and having to resort to some sort of compromise (ie. either lowering the graphical quality or accepting a 30-fps lock.)

And, of course, the price of the monitor shouldn't be exorbitant. While I was ready to spend a decent amount of money on it, I'm not exactly super-rich.

Conclusion


So, one would think that the best possible solution to all of the above is, well, all of the above. Except it turns out that, at least at this point in time, there doesn't seem to be a monitor available with all of the above features!

The closest thing that seems to exist is the Acer Predator XB271HK: 3840x2160, IPS, g-sync. That would seem like the jackpot, except for two things: It only has an HDMI 1.4 input port (meaning that the PS4 Pro wouldn't be able to use it at 60Hz), and a rather hefty price. The former limitation is quite a killer; if only it had HDMI 2.0, it would be an almost perfect monitor, even considering its price. It's really incomprehensible why Acer decided to put an HDMI 1.4 input in it (given that they have HDMI 2.0 in many of their other 4k monitors.)

4k monitors with g-sync are still a real rarity, even if using TN, and especially if using IPS.

Perhaps surprisingly, my fallback plan doesn't seem to fare much better either: The combination of 1080p, IPS and g-sync seems to be almost as rare! There hardly seem to be any monitors with that combination either. (This would have been a much easier choice, given that it wouldn't even need HDMI 2.0.) Basically all 1080p g-sync monitors are TN.

There are quite a few 1440p IPS g-sync monitors out there but, as said, that's a bummer in terms of using one with the PS4 Pro.

In the end, I decided to purchase an Acer S277HK: 4k, IPS, HDMI 2.0, and quite good reviews. And it was at a discount when I bought it (normally almost 700€, but discounted to 597€). No g-sync, but I'll just have to live without it, as I have done so far.

(Besides, g-sync has a lower limit of 30Hz, below which it just stops working and falls back to your normal screen-tearing no-sync, or regular vsync. Playing at 30 FPS on a regular monitor is not all that bothersome to me, so I suppose I'll just have to be content with that for the heavier games.)

So, does it look good?


It actually looks better than I expected, both on the PC and the PS4 Pro. On the latter, for instance, I had been playing Titanfall 2 prior to this purchase. After I got the monitor I continued the game on it, and it looked really great. I don't know whether this particular game uses native 4k or that 2x1080p checkerboard-interpolation scheme, but it certainly looks like native 4k. All edges are crisp and razor-sharp, and everything is full of tiny details.

Monday, November 7, 2016

Poker clichés in old western movies

Poker is one of the staples of western movies (especially older ones, made in the 70's and earlier.) Almost invariably it will be depicted as five-card draw, meaning that each player is dealt five face-down cards, from which they can swap any number of cards for new ones, after which there is a round of betting, with possible raises. When all non-folding players have bet the same amount, cards are revealed and the winner takes the pot.

The choice of this poker variant is probably realistic, as it's one of the oldest variants, and probably the most common in the United States in the 19th century. Variants that are today more popular, like Texas Hold'em, didn't exist back in the 1800's.

There are many clichés that you can see in lots of these movies, especially older ones. Many of them feel quite strange.

One of the most common ones is the player who runs out of money and can't match a bet. Often he will then search for valuables to bet, or ask for a loan, or something similar. The implication being that if he can't match the bet, he just loses by default.

This seems highly strange. I have no idea what the poker betting structure was in those times, but it feels highly, highly unfair, to the point of making the whole game pointless. The richest player would always win simply by going all-in every single time: Since no other player could match his bet, he would always win by default. Which would make absolutely no sense.

I remember one movie in particular, where a woman was playing poker against a group of men, and she ran out of money and couldn't match a bet. It ended up with all the players going to a bank, where she asked the bank's owner for a loan.

Maybe this was solved back in those days with betting limits (ie. you can't bet more than a certain amount.) However, I don't remember ever seeing this mentioned or depicted in any of these movies.

The modern solution to this problem is the side pot mechanic: If you go all-in, and somebody then raises to even more than that, the extra money goes to a side pot, where any remaining players continue betting any amounts above the main pot bet. (On showdown, or if all other players fold, the side pot goes to the winner among those other players, after which a showdown of the main pot, now including the original all-in player, ensues.) This means that you can always participate, even if you have one single coin left. (Your current amount of money simply limits how much you can win at most, which is that amount times the number of players, assuming all of them match your bet.)
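To make the side pot mechanic concrete, here is a small sketch of how the layered pots can be computed from each player's total contribution. This is a simplified model; real poker software also has to track betting rounds, folds, and so on:

    def build_pots(contributions):
        """Split the chips into a main pot and side pots.

        contributions: dict of player -> total chips that player has put in
        (an all-in player's contribution is simply capped at their stack).
        Returns a list of (pot_size, eligible_players) tuples, main pot first.
        """
        pots = []
        remaining = dict(contributions)
        while any(amount > 0 for amount in remaining.values()):
            in_play = [p for p, amount in remaining.items() if amount > 0]
            layer = min(remaining[p] for p in in_play)   # smallest stake defines this layer
            pots.append((layer * len(in_play), in_play))
            for p in in_play:
                remaining[p] -= layer
        return pots

    # Example: one player is all-in for 100, two others bet 500 each.
    # -> main pot of 300 (all three eligible) and a side pot of 800 (the two big stacks).
    print(build_pots({"all_in": 100, "player_b": 500, "player_c": 500}))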

As said, I have no idea how this problem was resolved back in those times. The cliché so popular in movies (ie. it's not solved at all) seems completely nonsensical, though. It might make for drama or comedy, but it's still nonsensical.

Another cliché, perhaps an even more common one, is the crucial showdown, where the bad (or at least antagonistic, often unpleasant) guy shows a very strong hand, like a straight or a full house, and without waiting for the protagonist to show his hand goes to grab the pot. Of course, only to be stopped by the protagonist showing an even stronger hand, like four of a kind. The antagonist is shamed with his hands around the money, which he then can't take.

Very commonly this antagonist is depicted as a very experienced and ruthless poker player. Which raises the question: what experienced poker player would just go to grab the pot without waiting to see his opponent's hand? This feels a bit unrealistic, and artificially played for drama or comedy.

Another cliché is the player who announces "I call your hundred... and raise 500!" This one is often not unrealistic, especially if we are talking about a decrepit 19th-century saloon in the middle of nowhere, or a friendly game at someone's home, or something along those lines, because it probably happens a lot in real life as well (probably because it's so common in movies that people imitate them!) It may be acceptable in casual informal play, but especially in more official tournament play, and in groups with more formal rules, that's an improper way of announcing your action. Especially so if it's accompanied by the player first throwing the 100 into the pot and then 500 more (which is likewise very common in these movies.)

In fact, in most tournaments, with strict rules, if you say "I call", then that's it: You have called. You can't then raise, no matter what you append after those words with "and". The proper way to raise, if you announce it out loud, is to say directly "I raise". And first putting forward some chips and then more (a so-called "string bet") is outright forbidden. You have to make the entire bet at once, not in parts.

Tuesday, October 25, 2016

Reverse typecasting

In movies and television, typecasting is the phenomenon where a certain actor becomes strongly identified with a certain kind of role, and typically gets hired for such roles. And, in fact, the public most often outright expects them to be in that kind of role.

Sylvester Stallone and Arnold Schwarzenegger are typical examples, having been frequently typecast as the tough protagonist of an action movie. Many comedians often get typecast in comedies, and can have a hard time breaking into other, more serious roles. Actors like Morgan Freeman often get cast in authority-figure roles (such as the President of the United States).

I have noticed, however, that there exists another, much less talked about phenomenon, which is pretty much the polar opposite of typecasting. I don't think this even has a name. I'm calling it "reverse typecasting".

This is when an actor is so famous for a certain role that it becomes almost unthinkable to hire him or her in another similar role, because of the strong association with that one particular character.

For example, David Caruso is the lead actor in CSI: Miami, as Horatio Caine. He is best known for this role.

This, in my opinion, pretty much excludes him from being cast in another police procedural. Just imagine the confusion of the audience if he were to be cast in another such TV show: "Hey, that's the guy from CSI Miami! Is he supposed to be the same character?" It would be highly distracting. It would be hard not to think of him as Horatio Caine, no matter how much the other show tried to portray him, and itself, as completely unrelated.

Average vs. median

Most people are familiar with the concept of the average (or, in more technical terms, the arithmetic mean) when talking about numbers. The average of a group of numbers is their sum divided by their count. This concept crops up all the time in all kinds of things, like average salaries, average prices, average viewership... Statisticians just love averaging things.

As an example, the average of the numbers 1, 3, 4, 10 and 25 is 8.6.

A lesser-known related concept is the median. Many people don't even know what it means, and others might have heard or read the name without really knowing what it is either. When they do hear what it is, it might seem like an arbitrary, even useless, thing.

The median of a group of numbers is simply the middle one, when the numbers are sorted in increasing order. (If there's an even count of numbers, then it's the average of the two middle numbers.)

In the above example, the median of 1, 3, 4, 10 and 25 is 4, as that's the third number in the ordered list of five numbers.

But what use is the median for anything? As said, it may feel like such an arbitrary and even useless thing to calculate. However, there are many situations where the median is actually more useful and informative than the average.

The good thing about the median is that it kind of automatically discards extreme outliers from the equation.

For example, suppose that there's a market for a product, like an individual card from a trading card game. There may be hundreds and hundreds of sellers for that particular card. In order to get a picture of how valuable that particular card is (eg. compared to other cards from the game), you may want to know a number that reflects the overall pricing.

The average of all the prices might sound like a good idea at first, but its problem lies in what I mentioned earlier: Extreme outliers may distort the figure, making it less informative and useful.

For example, maybe 30 sellers are selling the card at prices ranging from 50 cents to 1 dollar. But there's one seller that, for whatever reason, is selling it for 1000 dollars. If there's eg. an automatic server-side program that collects all these selling prices and averages them, that one outlier would skew the result drastically, making it almost useless. It would make the card appear much more valuable than it really is.

The median of the prices, however, can be much more useful. In this example, the average may be something like 32 dollars (which would mean, if taken at face value, that this is a really expensive card), while the median could be something like 74 cents (which would mean this is a moderately priced card).

In this case the 74 cents is much closer to the truth than the 32 dollars. The latter number is heavily skewed by that one outlier. The median automatically discards such outliers, making the resulting number much more useful.
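The effect is easy to verify with a quick calculation (illustrative numbers only, using Python's standard statistics module):

    from statistics import mean, median

    # The earlier example: average 8.6, median 4.
    print(mean([1, 3, 4, 10, 25]), median([1, 3, 4, 10, 25]))

    # 30 sellers asking between 50 cents and 1 dollar, plus one asking 1000 dollars.
    prices = [round(0.50 + i * 0.5 / 29, 2) for i in range(30)] + [1000.00]

    print(f"average: {mean(prices):.2f}")   # about 33 dollars, skewed by the outlier
    print(f"median:  {median(prices):.2f}") # about 76 cents, close to the typical price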

"Downloading" and "uploading" in movies

Hollywood movies dealing with computers often love to use fancy terms (well, fancy to the layman's ears) like "downloading" and "uploading", but don't seem to care much about which one of those terms is accurate and proper for the situation, and will freely use either term at random. Thus you get dialogue like "I'm going to download this file to the bad guy's computer", which may sound cringeworthy to the more technically adept viewer.

On the other hand, the two terms are not actually absolutely unambiguously defined, even in technical parlance. There are two main schools of thought on this:

The first one considers the relationship between which computer is being used by the person, and which one is at a remote location. If the person is physically using computer A, and a file is being transferred between it and some remote computer B, the proper term depends on the direction of transfer: If the file is being transferred from the remote computer B to the local computer A (ie. the computer that's directly being used by the person), it's "downloading". The other direction is "uploading".

The other school of thought considers the roles of the computers themselves: If one computer is the "server" and the other is the "client", then "downloading" means transferring a file from the server to the client computer, while the other direction is "uploading". This holds even if the person is directly using the server computer rather than the client. (This kind of thinking considers the "server" to be "higher" in a hierarchy than the "client", and thus the term is defined by whether the file is going "up" or "down" in the hierarchy. The same principle holds if the server is itself acting as a "client" of another, bigger server, which is thus "up" in the hierarchy.)

Of course the situation could be more complex than that. For example, the person, using computer A, might be transferring a file from remote computer B to another remote computer C. Is that "downloading" or "uploading"?

In the end, it's actually not as clear-cut as it might seem at first.

Tuesday, October 18, 2016

Inspired video game cover art?

Just something funny I stumbled across while browsing games on Origin:


Is it just me, or are there striking similarities between the two? (And I didn't even edit them to be together. This is a direct unmodified snapshot.)

At least they are both from Ubisoft, so I suppose there isn't any plagiarism going on.

Monday, October 3, 2016

The origins of Chuck Berry's famous guitar riff

Take a listen to the 1964 song Fun Fun Fun by The Beach Boys. One immediately notices, when the song starts, that the guitar intro sounds awfully familiar. Well, yes indeed: it sounds virtually identical to the guitar intro of Johnny B. Goode by Chuck Berry, released in 1958.

Is this an instance of blatant plagiarism? Not really. It's not that simple.

Chuck Berry himself was certainly fond of that particular guitar intro, as he used it in very similar forms in a multitude of his songs, including Roll Over Beethoven, Let It Rock, Carol, Little Queenie, and Back In the U. S. A.

But the thing is, that intro wasn't actually composed by Chuck Berry. It originally appeared in 1946 in the Louis Jordan song Ain't That Just Like a Woman (where it was played by Jordan's guitarist, Carl Hogan).

If there is any plagiarizing going on, it's by Chuck Berry (even though I have no idea whether he had permission to do so.)

Friday, September 16, 2016

Why you shouldn't believe the hype, part 2

I wrote previously about why you should not listen to the hype when it comes to video games. Well, yet another excellent and rather infamous example has appeared recently: No Man's Sky.

Like with Evolve, this one was also highly anticipated and extremely hyped. However, No Man's Sky is arguably an even more blatant example because of all the broken promises made by the hype. At least with Evolve not many promises were made that were then broken or left out of the game; it simply turned out that what sounded like a great idea wasn't all that great after all; it just didn't work.

No Man's Sky, however, is more blatant than that. The pre-release material, teaser trailers and project leader interviews all promised a large bunch of features that were not in the final version of the game after all.

For example, the teaser trailers were highly scripted animations that showed things that are not in the final game. These include the ability to fly above the surface of planets (in the actual game you can only land from orbit in a scripted sequence; you can't fly over the surface), gigantic space battles with factions and fleets warping in to join the fight (neither of which are in the actual game; moreover, the huge destroyer ships do not even move in the game), herd behavior including things like stampedes and animals knocking down trees, sand planets with gigantic sandworms, and mysterious teleportation devices on the surface of some planets (again, none of which appear in the actual game).

Moreover, interviews with the project leader hinted at many features that, once again, do not appear in the game: the game being a massively multiplayer online game where people could encounter each other, a wide variety of available space ships for different purposes, and factions that the player could join or fight against. It's not a multiplayer game, and people cannot encounter each other. There is only one type of ship. And there are no factions.

And that's just scratching the surface. The game has been given the moniker "No Man's Lies" because of all that.

The amount and severity of the broken promises and false advertising was in fact so bad that several distributors (including Steam, Amazon and Sony) offered a special refund policy for this game.

Thursday, September 15, 2016

Difficulty levels in video games. Which one to choose?

While I have been playing video games since the early 1980's, the question of difficulty levels didn't really come up until I bought my first PC in the mid-90's. Many PC games of the time (one of the most notable examples being Doom) had difficulty levels to choose from.

For quite a long time I had the principle that I would always, always play on the hardest difficulty. After all, I had paid a hefty sum for a video game (especially since back then I was just a university student with a bugger-all income), which made it quite an expensive commodity, and I wanted it to last for as long as possible. I hated games that were too short, especially if they were full-priced, or even nearly full-priced.

With some games, choosing the hardest difficulty actually made a lot of sense, because it was not artificial difficulty. For example, some point&click games of the time had (usually two) "difficulty" levels, which in practice meant that on the "harder" level there were more puzzles, and some puzzles might be a bit more complex or involve more steps. Which was absolutely perfect for me. I would have hated playing those games with fewer or easier puzzles.

Back in those days I was also more invested in beating games on the harder difficulty levels, even when they were at least approaching artificial difficulty. With some games it truly made them more challenging in an enjoyable manner. For example, I remember Midtown Madness having difficulty levels where the harder ones meant that you were racing against different, more powerful cars than on the easier ones.

By the time I played Half-Life 2, I still had this mentality in full force, and I played it on its hardest difficulty. It was perhaps this game that finally made me abandon that principle (or, at the very least, it was one of the last games I ever played on the hardest difficulty out of wanting the game to last longer and be more challenging.) In this game the difficulty really is quite artificial, and at times it makes the game merely frustrating rather than challenging.

My mindset has changed quite a lot from those times. I have become a huge consumer of video games, and I purchase tons of them. While I enjoy a very long game (those that take over 50 hours to play through) from time to time, if really well made, I prefer a video game to be enjoyable rather than it lasting as long as possible. I don't mind if a game lasts only 10 hours, if it's a really enjoyable experience (although, to be fair, that's already starting to be on the quite short side, especially if we are talking about a full-priced game; for a 20€ or cheaper game that's completely fine. I have played 2-hour games that have been absolutely marvelous, and the length has been completely fine.)

The drawback of buying so many games is, of course, that there are only so many hours of free time in my life to play them. Thus in many cases I actually prefer games to be shorter, so that I can get to play more of them. A game needs to be really epic and exceptionally well made for me to want to play more than about 30 hours of it. (There are examples of this of course. For example Steam reports that I have played Skyrim for 110 hours, which is quite unusually long. For Fallout: New Vegas it reports 60 hours.)

In general, I thus nowadays prefer playing such games in their "normal" difficulty.

I have, in fact, noticed that the "normal" difficulty tends to be the most balanced one in most games. It feels like the intended difficulty designed by the developers (as in, it feels like they designed the game from the ground up to be played on that difficulty, and only in the very last stages added the other difficulty levels as an option.) Choosing a harder difficulty very often makes it feel artificially hard, as in the enemies being "unrealistically" hard to kill (if you can talk about "realism" in such video games; it may be more accurate to say that it starts breaking willing suspension of disbelief when enemies are too hard to kill.) These harder difficulty levels are often not as enjoyable, as they feel unbalanced and needlessly difficult. This, in fact, goes all the way back to Half-Life 2 (and probably beyond).

Some games, however, entice the player to choose the hardest difficulty level. I have sometimes been suckered into choosing it because of that, and regretted it later. As a recent example, Battlefield 4 has this to say about its hardest difficulty level:


Well, I'm quite an experienced player, so I decided to choose that difficulty level. I stopped playing the game somewhere around the half-way point, out of sheer frustration, as enemies were way too hard to kill, and it was way too easy to die. I tried a bit of the beginning of the game on the normal difficulty, and it immediately felt a lot more balanced (enemies actually died from a moderate amount of gunfire, instead of me having to empty two magazines into them). I might some day restart the game on this difficulty level and play it through like that.

A few games, however, are quite different, and "difficulty level" means something else entirely. An interesting case is Alien: Isolation. While there are enemies to kill in this game, the main focus is the alien, which is unkillable (this is more a survival horror game than a first-person shooter). Thus the "difficulty level" means something other than how hard the enemies are to kill: it affects the behavior of the alien, which is an interesting deviation from the standard formula.

Also this game entices the player to choose the hardest difficulty:


And here, too, it's not quite clear whether this really is the best way to play the game or not, regardless of what the developers are saying there. The reason, however, and perhaps obviously, is quite different from your ordinary first-person shooter like Battlefield 4.

On the "hard" difficulty, in the sections where you are trying to hide from the alien that's hunting you, the alien will constantly be in your vicinity. You don't have a moment to take a breath. The alien will always be there somewhere, ready to jump at you the moment you make the tiniest of mistakes. It never wanders off very far; it's as if it can constantly sense your presence. Your tasks thus indeed become a lot more difficult, as you must watch for the alien much more closely, and time your movements much more precisely.

Many people report, however, that the game is actually best played on the "easy" difficulty, and that it's actually much scarier and more atmospheric that way. That's because the alien isn't constantly in your vicinity, but can wander off somewhere else, even to the opposite side of the level. What makes this scarier and more immersive is that now you truly don't know where the alien is. It could be just around the corner, or it could be at the other end of the level. You can't know. And thus it becomes much more surprising, and terrifying, when the alien suddenly jumps at you when you least expect it. On the hard difficulty you know the alien is at most a couple of corridors away, so there aren't many surprises. On the easy difficulty the anxiety is much higher, because you really don't know where the alien is or when it might turn up.

I have yet to play the game through again (I very rarely play games twice), but if I ever do, I will most certainly try the easy difficulty.

Saturday, September 3, 2016

The downside of single+multiplayer combo games

Many games, especially big-budget first-person shooters, often have two completely separate modes of play: A single-player "campaign", and online multiplayer (which is very often some kind of "arena shooter", either free-for-all or team-based.) Quite often these two modes are so separate, so distinct, and so independent of each other that you could just as well consider them two different games bundled into one package. They might use the same engine, assets and game mechanics, but otherwise they usually have nothing to do with each other, and often do not even interact in any way (eg. by having things unlocked in one mode become available in the other.)

(In fact, with some games the two modes are so separate that you even have to launch them separately, often from a startup launcher menu.)

I assume that most companies produce these "two games in one" combos in order to appeal to the widest possible audience, ie. to those who buy such games primarily for the single-player campaign, and those who are mostly or even exclusively interested in the online multiplayer mode.

There are downsides to this, however.

One is that making two games is more expensive than making one. Sure, it's not nearly as expensive as making two completely independent games (because the same core engine and assets can be used in both playing modes, which probably cuts down development time and costs quite a lot, compared to having to make two entirely separate games), but it's also certainly more expensive than making just one of the modes. Perhaps for this reason in recent years more and more examples of single-player-only and multiplayer-only games have been published. Rather than split resources into making essentially two games, they put all their resources into just one of them. (Of course by doing this they are cutting a chunk of their target audience out, but it may still be profitable to do so.)

Another problem is that almost universally, publications and reviewers will give only one score for the entire game, rather than scoring the two sub-modes separately. (Even if some publications do give two scores, which is rare but not unheard of, aggregate scoring services will only show one score, such as the average of the two.) Since the two modes are often completely independent of each other, and do not affect each other, it would be fairest, and most useful for the potential buyer, to know the reviewers' scores for each mode separately. But this basically is never the case, and instead you get only one score for the combo.

This means that if eg. the single-player mode is excellent, but the multiplayer mode is absolute rubbish, the latter will drag the overall score down. However, for a player who is only interested in the single-player mode (like me), it would be much more informative and beneficial to know the review scores for that mode only. If there is a great disparity between the two modes, that muddles things.

I really wish that such games were actually considered two separate games, in terms of reviews and review scores, even if they come packaged in the same product.

Thursday, August 4, 2016

Xbox One S: Too late?

Microsoft recently released a new version of their Xbox One, the Xbox One S. From what I have seen, this is the console that the original Xbox One should have been when it was released three years ago. But before going into that, let me recapitulate the tumultuous story of the Xbox One. (I have written about many of these things in posts on my other blog, but allow me to briefly summarize the whole story.)

The story really begins with the Xbox 360. Microsoft got an entire year's head start over their main competitor, the PlayStation 3, and the Xbox 360 was enormously successful almost from the start. The console did many things right, while the PS3 did many things wrong.

For one, being released an entire year later certainly did not help the PS3. It also didn't help that, unlike the Xbox 360, the console used a very exotic processor architecture that was very difficult for game engines to optimize for. It took a year or two after the launch of the console before the big game engines finally started to be able to take advantage of the unusual processor design, and games started really competing with the Xbox 360 in terms of performance. For quite a few years the PS3 was seriously the underdog in the competition. Quite fortunately for Sony, however, it almost miraculously succeeded in catching up, and actually ended up selling almost as many units (about 80 million, compared to the roughly 84 million Xbox 360's.) These sales figures were probably helped quite a lot by the fact that the PS3 had a Blu-ray drive, making it, at the time, a surprisingly affordable Blu-ray player.

While the PS3 managed in the end to catch up with the Xbox 360 in terms of success, the latter led the race for years, and it can arguably be said that it kind of won said race by a small margin.

Maybe for this reason Microsoft became, perhaps, a bit too sure of themselves, and tried too much to "innovate" with their next-generation console, ie. the Xbox One. They made many, many mistakes that cost them quite a lot in popularity.

Firstly, Microsoft wanted the Kinect to be an integral part of the console (so much so that there would be no versions of the console without it; it would be a mandatory peripheral that comes with the console). They really, really tried to push the Kinect as part of the Xbox experience, and induce game developers to use it. However, both the pre-launch and post-launch audience reception of this idea was lackluster, to say the least. Gamers were not exactly thrilled, especially given that the Kinect would raise the price of the console by quite a lot (which it really did.)

Microsoft's second mistake was to announce that the Xbox One would need to be connected to the internet, and for the Kinect to be always connected and turned on, or else it would refuse to allow games to be played on it. This announcement received a ton of backlash, all the way from gamers to big name critics. The backlash was in fact so enormous that Microsoft reversed this policy prior to launch. This was probably a very smart move, but it didn't exactly help their or their console's reputation; the damage was already done, and only partially fixed.

Their third mistake was to announce that copies of games would be tied to single user accounts, which meant that you wouldn't be able to give a game you had bought to a friend, or to buy used games. Same thing: Enormous backlash, causing them to reverse the policy prior to launch, but damage to reputation already done, only partially fixed. In fact, Sony got some free points by riding on this, and announcing that the PS4 would not have any such limitations.

Their fourth mistake was to concentrate way too much on multimedia features, and too little on actual games. In fact, in their big pre-launch E3 conference, their entire presentation was about multimedia features (such as video streaming, online video rental, and so on and so forth), with almost no mention of actual video games. You know, the main reason people actually buy video game consoles. Microsoft really tried to "innovate" too much with the Xbox One. This didn't sit well with their audience, and the reception was, once again, quite lackluster.

The console itself, once released, did not have the best possible reception either. Not only was it slightly less powerful than the PS4, it was also amazingly bulky (resembling a big VCR from the 80's), and had an external power supply ("power brick") with its own fan... which is on all the time the console is plugged into the wall, even if the console itself is in sleep mode. And it's quite noisy.

Immediately after launch, the PS4 sold like hotcakes (possibly breaking records on the fastest-selling console ever, after launch). The sales figures for the Xbox One were significantly more moderate. It didn't exactly help that the Xbox One was significantly more expensive, mainly due to the useless Kinect peripheral.

Credit where credit is due, Microsoft tried to learn from their mistakes and went into quite a damage control mode. Some time after launch (about a year or so), they announced a version of the console without the Kinect, which was significantly cheaper, bringing its price point to the same level as the PS4 (and even slightly below it). Of course this had the unfortunate consequence that many people who had bought the original version with the Kinect (because there was no alternative) felt a bit defrauded, because the Kinect was not, after all, an essential component, and game developers were probably not going to make many games for it. But that was a comparatively minor drawback.

Secondly, in their 2015 E3 conference presentation, their tone had shifted radically. Now their presentation was all about games, games, and more games. Nothing about the multimedia features, and no mention of the Kinect. This was exactly what the audience wanted. Unfortunately, two years too late. Another announcement was backwards compatibility with most Xbox 360 games. Great... except that once again it was two years too late. If this had been a feature right from launch, it would have probably boosted sales figures.

So now, finally, three years after launch, we get to their improved version of the console: The Xbox One S. The new version is significantly smaller in size (almost the same size, in volume, as the PS4, although maybe just slightly larger), has an internal power supply, and is slightly more efficient. And most curiously, it does not have a Kinect connection port at all. (A Kinect can still be connected to it, but it requires an expensive adapter. It's quite clear that Microsoft is not expecting anybody to actually want a Kinect with it.)


(Not pictured in the image: the power brick of the original version, which alone is about a quarter of the size of the entire Xbox One S.)

The Xbox One S is, arguably, what the original Xbox One should have been from the very start.

It seems that Microsoft is constantly late to the party with this console generation. Things that should have been part of the initial release are coming years later. This is almost the exact reversal of their performance in the previous console generation, where they were the clear leaders for years.

Will the Xbox One S be enough to save the console, or is it too late? My bet is that it won't save it, especially since Microsoft has already hinted at their next console version. I don't think many people, especially those in the know, will want to purchase an Xbox One now, especially if they own a PS4 already. (Some people who don't have either, and really want the Xbox, eg. because of brand loyalty, might now be induced to buy the S version, but I'm betting these people are quite a small minority.)


Edit 8.8.2016: Anecdotal evidence is making the rounds that the Xbox One S is actually selling really well, at least on the initial days after release, with many stores being sold out. These are of course just unconfirmed rumors at this point, because no official sales figures have yet been released, but it seems that my prediction might turn out to be wrong. (It is, of course, also possible that even if the rumors are completely true, the sales will plummet after the initial rush.) I may make a new edit after more reliable figures come out.

Saturday, July 16, 2016

Turning 3D off on a 3DS: The devil is in the details

I bought myself a New Nintendo 3DS (not just a brand-new one, but the device actually named "New Nintendo 3DS"). I had been pondering buying a 3DS for quite a long time, and the new version of the console finally gave me the impetus to buy one.

One of the major new hardware features of the console is better 3D via eye-tracking. The original 3DS has a parallax barrier 3D screen (which in practice means that the 3D effect is achieved without the need for any kind of glasses). Its problem is that it requires a very precise horizontal viewing angle to work properly. Deviate from it even a little, and the visuals will glitch. The New 3DS improves on this by using eye-tracking, adjusting the parallax barrier offset in real time depending on the position of your eyes. This way you can move the device or your head relatively freely without breaking the 3D effect. It works surprisingly well. Sometimes the device loses track of your eyes and the visuals will glitch, but they fix themselves when it regains tracking. (This happens mostly if you eg. look away and then back at the screen; it seldom happens while playing. It sometimes does, especially if the lighting conditions change, eg. if you are sitting in a car or bus and the position of the sun changes rapidly, but overall it's rare.)

There is, however, another limitation to the parallax barrier technology, which even eyetracking can't fix: The distance between your eyes and the screen has to be within a certain range. Any closer or farther, and the visuals will start glitching once again. There is quite some leeway in this case, ie. this range is relatively large, so it's not all that bad. And the range is at a relatively comfortable distance.

Additionally, the strength of the 3D effect can be adjusted with a slider.

Some people can't use the 3D screen because of physiological reasons. Headaches are the most common symptom. For me this is not a problem, and I really like to play with full depth.

There are a few situations, however, where it's nicer to just turn the 3D effect off. For instance if you would really like to look at the screen up close. Or quite far (like when sitting on a chair, with the console on your lap, quite far away from your eyes.) Or for whatever other reason.

The 3D effect can also be turned off entirely, making the screen purely 2D with no 3D effect of any kind: Just slide the slider all the way down.

Except it didn't do that! Or so I thought for a year and a half.

You see, whenever the 3D effect is turned on, no matter how small the depth setting, the minimum/maximum eye distance problem is always there. If you eg. look at the screen too closely, it will glitch, even if only slightly, and quite annoyingly. With a lower depth setting the glitching is significantly reduced, but it's still noticeable: some uneven jittering and subtle blinking happens if you move the device or your head at all while looking at the screen from too close (or too far).

Even though I put the 3D slider all the way down, the artifacts were still there. For the longest time I thought that this might be some kind of limitation of the technology: Even though it claimed that the 3D could be turned off, perhaps it couldn't really be turned off fully.

But the curious thing was that if I played any 2D game, with no 3D support, the screen would actually be pure 2D, without any of the 3D artifacts and glitching in any way, shape or form. It would look sharp and clean, with no jittering, subtle blinking, or anything. This was puzzling to say the least. Clearly the technology is capable of showing pure and clean 2D, with the 3D effect turned completely off. But for some reason in 3D games this couldn't be achieved with the 3D slider.

Or so I thought.

I recently happened to stumble across an online discussion about turning off the 3D effect in the 3DS, and one poster mentioned that in the 3DS XL there's a notch, or "bump", at the lower end of the slider, so that you have to press it a bit harder, and it will get locked into place.

This was something I didn't know previously, but somehow this still didn't light a bulb in my head. However, incidentally, when I was playing with the 3D slider, I happened to notice, when I looked at it from a better angle, that the slider wasn't actually going all the way down. There was a tiny space between the slider and the end of the slit where it slides, even though I thought I had put it all the way down.

Then it dawned on me, and I remembered that online discussion (which I had read just some minutes earlier): I pressed the slider a bit harder, and it clicked into its bottommost position. And the screen became visibly pure and clean 2D.

I couldn't help but laugh at myself. "OMFG, I have been using this damn thing for a year and a half, and this is the first time I notice this?" (Ok, I didn't think exactly that, but pretty much the sentiment was that.)

So yeah, the devil is in the details. And sometimes we easily miss those details.

I have to wonder how many other people never notice or realize this.

Thursday, July 7, 2016

Which chess endgame is the hardest?

In chess, many endgame situations with only very few pieces left can be the bane of beginner players. And even sometimes very experienced players as well. But which endgame situation could be considered the hardest of all? This is a difficult (and in many ways ambiguous) question, but here are some ideas.

Perhaps we should start with one of the easiest endgames in chess. One of the traditional endgame situations that every beginner player is taught almost from the very start:


This is an almost trivial endgame which any player could play in their sleep. However, let's make a small substitution:


Now it suddenly becomes significantly more difficult (and one of the typical banes of beginners). In fact, the status of this position depends on whose move it is. If it's white to move, white can win in 23 moves (with perfect play from both sides). However, if it's black to move, it's a draw, but it requires very precise moves from black. (In fact, there is only one move in this position with which black can draw; every other move loses, if white plays perfectly.) These single-pawn endings can be quite tricky at times.

But this is by far not the hardest possible endgame. There are notoriously harder ones, such as the queen vs. rook endgame:


Here white can win in 28 moves (with perfect play from both sides), but it requires quite tricky maneuvering.

Another notoriously difficult endgame is king-bishop-knight vs. king:


Here white can also win in 28 moves (with perfect play), but it requires a very meticulous algorithm to do so.

But all these are child's play compared to the probably hardest possible endgame. To begin with, king-knight-knight vs. king is (excepting very few particular positions) a draw:


But add one single pawn, and it becomes a win for white. And not a white pawn, but a black pawn! The pawn can be introduced in quite many places, and white will win, as incredible as that might sound:


Yes, adding a black pawn means that white can now win, as hard as that is to imagine.

(The reason for this is that the winning strategy requires white to maneuver the black king into a position where it has no legal moves, several times. Without the black pawn this would be stalemate. However, thanks to the black pawn, which black is then forced to move, the stalemate is avoided and white can proceed with the mating sequence.)

But this is a notoriously difficult endgame; so difficult that even most chess grandmasters may struggle with it (and many might not even know how to do it). It also requires quite a staggering number of moves: With perfect play from both sides, white can win in 93 moves.

"Tank controls" in video games

3D games are actually surprisingly old. Technically speaking, some games of the early 1970's were 3D, meaning they used perspective projection and had, at least technically speaking, three axes of movement. (Obviously back in those days they were nothing more than vector graphics drawn using lines and sprites, but technically they were 3D, as contrasted with purely 2D games where everything happens on a plane.) I'm not talking here about racing games that give a semi-illusion of depth by having the picture of a road going to the horizon and sprites of different sizes, but actual 3D games using perspective projection of rotatable objects.

As technology advanced, so did the 3D games. The most popular 3D games of the 80's were mostly flight simulators and racing games (which used actual rotatable and perspective-projected 3D polygons), although there were obviously attempts at other genres as well even back then. It's precisely these types of games, ie. flight simulators and anything that could be viewed as a derivative, that seemed most suitable for 3D gaming in the early days.

It is perhaps because of this that one aspect of 3D games was really pervasive for years and decades to come: The control system.

What is the most common control system for simple flight simulators and 3D racing games? The so-called "tank controls". This means that there's a "forward" button to go forward, a "back" button to go backwards, and "left" and "right" buttons to turn the vehicle (ie. in practice the "camera") left and right. This was the most logical control system for such games. After all, you can't have a plane or a car moving sideways, because they just don't move like that in real life either. Basically every single 3D game of the 80's and well into the 90's used this control scheme. It was the most "natural" and ubiquitous way of controlling a 3D game.

Probably because of this, and unfortunately, this control scheme was by and large "inherited" by all kinds of 3D games, even when the technology was used in other types of games, such as platformers viewed from a third-person perspective, and even first-person shooters.

Yes, Wolfenstein 3D, and even the "grandfather" of first-person shooters, Doom, used "tank controls". There was no mouse support by default (I'm not even sure there was support at all in the first release versions), and the "left" and "right" keys would turn the camera left and right. There was support for strafing (ie. moving sideways while keeping the camera looking forward), but it was very awkward: rather than having "strafe left" and "strafe right" buttons, Doom instead had a modifier button that made the left and right buttons strafe. (In other words, if you wanted to strafe, you had to press the "strafe" button and, while keeping it pressed, use the left and right buttons. Just like using the shift key to type uppercase letters.) Needless to say, this was so awkward and impractical that people seldom used it.


Of course all kinds of other 3D games used "tank controls" as well, including many of the first 3D platformers, making them really awkward to play.

For some reason it took the gaming industry a really long time to realize that strafing, moving sideways, was a much more natural and convenient way of moving than being restricted to moving backward and forward and turning the camera. Today we take the "WASD" key mapping, with A and D being strafe buttons, for granted, but this is a relatively recent development. As late as the early 2000s, some games still hadn't transitioned to this more convenient form of controls.

The same goes for game consoles, by the way. "Tank controls" might have been even more pervasive and widespread there (usually due to the lack of configurable controller button mapping). There, too, it took a relatively long time before strafing became the norm. The introduction of twin-stick controllers made this transition much more feasible, but even then it took quite a while before it became the standard.

Take, for example, the game Resident Evil 4, released in 2005 for the PlayStation 2 and the GameCube, both of which had twin-stick controllers. The game still used tank controls, and had no strafing support at all. This makes the game horribly awkward and frustrating to control; even infuriatingly so. And this even though modern twin-stick controls had already been the norm for years (for example, Halo: Combat Evolved was released in 2001.)

Nowadays "tank controls" are only limited to games and situations where they make sense. This usually means when driving a car or another similar vehicle, and a few other situations.

And not always even then. Many tank games, perhaps ironically, do not use "tank controls". Instead, you can move the vehicle freely in the direction pressed with the WASD keys or the left controller stick, while the camera stays pointed in its current direction and can be rotated independently with the mouse or the right controller stick (which in such games usually also makes the tank aim in that direction). In other words, the direction of movement and the direction of aiming are independent of each other. This makes the game a lot more fluent and practical to play.
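As a minimal sketch of what that decoupling amounts to (in Python, with made-up names and numbers; no particular game or engine is implied), the aim direction and the movement vector are simply two independent pieces of state, updated from two independent inputs:

import math

# Minimal sketch of decoupled movement and aiming (hypothetical names,
# not taken from any actual game or engine).

class Tank:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.aim_angle = 0.0   # where the turret/camera points, in radians
        self.speed = 5.0       # movement units per second

    def update(self, move_x, move_y, aim_delta, dt):
        # Aiming: the right stick / mouse only rotates the turret and camera.
        self.aim_angle += aim_delta

        # Movement: the left stick / WASD moves the hull directly in the
        # pressed direction, regardless of where the turret is pointing.
        length = math.hypot(move_x, move_y)
        if length > 0.0:
            self.x += (move_x / length) * self.speed * dt
            self.y += (move_y / length) * self.speed * dt

# With classic tank controls, the same two inputs would instead be
# interpreted as "rotate the hull" and "drive along the hull's facing".
tank = Tank()
tank.update(move_x=1.0, move_y=0.0, aim_delta=math.radians(90), dt=0.016)
print(tank.x, tank.y, math.degrees(tank.aim_angle))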

The origins of the "Lambada" song

The song "Lambada" by the pop group Kaoma, when released in 1989, was one of these huge hits that people started hating almost as soon as it hit the radio stations, mainly because of being overplayed everywhere.

Back then, its composition was generally misattributed to Kaoma themselves. It wasn't until much later that I heard it was actually a cover, not an original song. The story is, however, a bit more interesting than that.

Hugely popular hits regularly turn out to be covers of songs by somebody else, eg. from the 50's or 60's. This one doesn't go that far back, but it's still interesting.

The original version, "Llorando se Fue", was composed by the Bolivian band Los Kjarkas in 1981. It's originally in Spanish, and while the melody is (almost) the same, the tone is quite different. It uses pan flutes, is a bit slower, and is overall very Andean in tone.

See it on YouTube.

This song was then covered by the Peruvian band Cuarteto Continental in 1984. They replaced the pan flute with the accordion, already giving the song its distinctive tone, and their version is more upbeat and syncopated.

See it on YouTube.

The song was then covered by Márcia Ferreira in 1986. This was an unauthorized version translated into Portuguese; it's a bit faster, emphasizes the syncopation, and is basically identical to the Kaoma version, which came in 1989.

See it on YouTube.

The Kaoma version, by far the best-known one, perhaps emphasizes the percussion and the syncopation even more.

See it on YouTube.

The dilemma of difficulty in (J)RPGs

The standard game mechanic that has existed since the dawn of (J)RPGs is that all enemies within given zones have a certain strength (which often varies randomly, but only within a relatively narrow range.) The first zone you start in has the weakest enemies, and they get progressively stronger as you advance in the game and get to new zones.

The idea is, of course, that as the player gains strength from battles (which is the core game mechanic of RPGs), it becomes easier and easier for the player to beat those enemies, and stronger enemies ahead will make the game challenging, as it requires the player to level up in order to be able to beat them. If you ever come back to a previous zone, the enemies there will still be as strong as they were last time, which usually means that they become easier and easier as the player becomes stronger.

This core mechanic, however, has a slight problem: it allows the player to over-level, which makes the game too easy and removes any meaningful challenge. Nothing stops the player, if he so chooses, from spending a big chunk of time in one zone leveling up and gaining strength, after which most of the following zones become trivial because the enemies' strength is no longer on par. This may be true even of the final boss of the game.

The final boss is supposed to be the ultimate challenge, the most difficult fight in the entire game. However, if the final boss becomes trivial because the player is over-leveled, it can feel quite anti-climactic.

This is not just theoretical. It does happen. Two examples where it has happened to me are Final Fantasy 6 and Bravely Default. At some point near the end of both games I got hooked on leveling up... after which the rest of the game became really trivial and unchallenging. And a bit anti-climactic.

One possible solution to this problem that some games have tried is to have enemies level up with the player. This way they always remain challenging no matter how much the player levels up.

At first glance this might sound like a good idea, but it's not ideal either. The problem with this is that it removes the sense of accomplishment from the game; the sense of becoming stronger. It removes that reward of having fought lots of battles and getting stronger from them. There is no sense of achievement. Leveling up becomes pretty much inconsequential.

It's quite rewarding to fight some really tough enemies that are hard to beat, and then, much later and many levels stronger, come back and beat those same enemies with ease. It really gives a sense of having become stronger in the process. It gives a concrete feeling of accomplishment. Remove that, and it will feel futile, like nothing has really been accomplished. The game may also become a bit boring because all enemies are essentially the same, and there is little variation.

One possibility would be for only enemies that the player has not yet encountered to match the player's level (give or take a few notches); once they have been encountered for the first time in a particular zone, they would remain at that level for the rest of the game (in that zone). I don't know whether this has been attempted in any existing game. It could be an idea worth trying; a rough sketch of what it could look like is below.
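Something along these lines (a minimal sketch in Python; the class, names and level ranges are made up purely for illustration):

import random

# Rough sketch of the "lock enemy level on first encounter" idea.
# All names and numbers are hypothetical.

class Zone:
    def __init__(self, name):
        self.name = name
        self.locked_level = None   # None until the player first enters

    def enemy_level(self, player_level):
        # On the first encounter, match the player (give or take a bit)
        # and remember that level for the rest of the game.
        if self.locked_level is None:
            self.locked_level = max(1, player_level + random.randint(-2, 2))
        return self.locked_level

forest = Zone("forest")
print(forest.enemy_level(player_level=10))  # scaled to the player once...
print(forest.enemy_level(player_level=50))  # ...and unchanged afterwards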

All in all, it's not an easy problem to solve. There are always compromises and problems left with all attempted solutions.

How can 1+2+3+4+... = -1/12?

There's this assertion that has become somewhat famous, as many YouTube videos have been made about it, that the infinite sum of all positive integers equals -1/12. Most people just can't accept it because it seems completely nonsensical and counter-intuitive.

One has to understand, however, that this is not just a trick, or a quip, or some random thing that someone came up with on a whim. In fact, some of the most famous mathematicians in history, including Leonhard Euler, Bernhard Riemann and Srinivasa Ramanujan, all arrived at that same result independently of each other, using quite different methods. They didn't just assign the value -1/12 to the sum arbitrarily; they had solid mathematical reasons to arrive at that precise value and not something else.

And it is not the only such infinite sum with a counter-intuitive result. There are infinitely many of them. There is an entire field of mathematics dedicated to studying such divergent series. A simple example would be the sum of all the powers of 2:

1 + 2 + 4 + 8 + 16 + 32 + ... = -1

Most people would immediately protest that assertion. Adding two positive values gives a positive value. How can adding infinitely many positive values not only not give infinity, but a negative value? That's completely impossible!

The problem is that we tend to instinctively think of an infinite sum only in terms of its finite partial sums, and the limit that these partial sums approach as more and more terms are added. However, this is not necessarily the correct approach. The above sum is not a limit statement, nor is it some kind of finite sum. It's a sum with an infinite number of terms, and partial sums and limits do not apply to it. It's a completely different beast altogether.

Consider the much less controversial statement:

1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... = 2

ie. the sum of the reciprocals of the powers of 2. Most people would agree that the above sum is valid. But why?

To understand why I'm asking the question, notice that the above sum is not a limit statement. In other words, it's not:

lim[n→∞] (1 + 1/2 + 1/4 + ... + 1/2^n) = 2

This limit is a rather different statement. It is saying that as more and more terms are added to the (finite) sum, the result approaches 2. Note that it never reaches 2, only that it approaches it more and more, as more terms are added.
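
As a quick numerical illustration of that limit statement (just a throwaway Python snippet), the partial sums creep toward 2 but each one falls short of it:

# Partial sums of 1 + 1/2 + 1/4 + ... : they approach 2 but never reach it.
from fractions import Fraction

partial = Fraction(0)
for n in range(8):
    partial += Fraction(1, 2**n)
    print(n, partial, float(partial))
# After adding the term 1/2**n, the partial sum is exactly 2 - 1/2**n,
# so it always falls short of 2, no matter how many terms are added.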

If it never reaches 2, how can we say that the infinite sum 1 + 1/2 + 1/4 + ... is equal to 2? Not that it just approaches 2, but that it's mathematically equal to it? Philosophical objections to that statement could conceivably be made. (How can you sum an infinite number of terms? That's impossible. You would never get to the end of it, because there is no end. The terms just go on and on forever; you would never be done. It's just not possible to sum an infinite number of terms.)

Ultimately, the notation 1 + 1/2 + 1/4 + ... = 2 is a convention. A consensus that mathematics has agreed upon. In other words, we accept the notion that a sum can have an infinite number of terms (regardless of some philosophical objections that could be presented against that idea), and that such an infinite sum can be mathematically equal to a given finite value.

While in the case of convergent series the result is the same as the equivalent limit statement, we cannot use the limit method with divergent series. As much as people seem to accept "infinity" as some kind of valid result, technically speaking it's a nonsensical result, when we are talking about the natural (or even the real) numbers. It's meaningless.

It could well be that divergent sums simply don't have a value, and this may have been what was agreed upon. Just like 0/0 has no value, and no sensible value can be assigned to it, likewise a divergent sum doesn't have a value.

However, it turns out that's not the case. When using certain summation methods, sensible finite values can be assigned to divergent infinite sums. These methods are not decided arbitrarily on a whim; there is strong mathematical reasoning behind them. And, moreover, different independent summation methods reach the same result.
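
One of the methods that yields the -1/12 value is analytic continuation of the Riemann zeta function: for Re(s) > 1 the function is the ordinary convergent sum 1/1^s + 1/2^s + 1/3^s + ..., and its analytic continuation assigns finite values where that sum itself diverges. Setting s = -1 formally gives 1 + 2 + 3 + ..., and the continued function has the value -1/12 there. This can be checked numerically, for example with the mpmath Python library:

# Zeta function regularization: for Re(s) > 1, zeta(s) = 1/1**s + 1/2**s + ...;
# the analytic continuation of zeta assigns values where that sum diverges.
from mpmath import zeta

print(zeta(2))    # 1.6449...   = pi**2/6, an ordinary convergent sum
print(zeta(-1))   # -0.08333... = -1/12, the value associated with 1+2+3+...
print(zeta(-3))   # 0.00833...  = 1/120, associated with 1**3+2**3+3**3+...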

We have to understand that a sum with an infinite number of terms just doesn't behave intuitively. It does not necessarily behave like its finite partial sums. The archetypal example often given is:

1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ... = pi/4

Every term in the sum is a rational number. The sum of two rational numbers gives a rational number. No matter how many rational numbers you add, you always get a rational number. Yet this infinite sum does not give a rational number, but an irrational one. The infinite sum does not behave like its partial sums, nor does it follow the same rules. In other words:

"The sum of two rational numbers gives a rational number": Not necessarily true for infinite sums.

"The sum of two positive numbers gives a positive number": Not necessarily true for infinite sums.

Even knowing all this, you may still have a hard time accepting that an infinite divergent sum of positive values gives not only a finite result, but a negative one. We are so strongly attached to the notion of dealing with infinite sums in terms of their finite partial sums that it's hard to put that approach aside completely. It's hard to accept that infinite sums do not behave the same as finite sums, nor can they be approached using the same methods.

In the end, it's a question of which mathematical methods you accept on a philosophical level. Just consider that these divergent infinite sums and their finite results are serious methods used by serious professional mathematicians, not just some trickery or wordplay.

More information about this can be found eg. at Wikipedia.

Video games: Why you shouldn't listen to the hype

Consider the recent online multiplayer video game Evolve. It was nominated for six awards at E3, at the Game Critics Awards event. It won four of them (Best of the Show, Best Console Game, Best Action Game and Best Online Multiplayer). Also at Gamescom 2014 it was named the Best Game, Best Console Game Microsoft Xbox, Best PC Game and Best Online Multiplayer Game. And that's just to name a few (it has been reported that the game received more than 60 awards in total.)

Needless to say, the game was massively hyped before release. Some commenters were predicting it to be one of the defining games of the current generation. A game that would shape online multiplayer gaming.

After release, many professional critics praised the game. For example, IGN scored the game 9 out of 10, which is an almost perfect score. Game Informer gave it a score of 8.5 out of 10, and PC Gamer (UK) an 83 out of 100.

(Mind you, these reviews are always really rushed. In most cases reviews are published some days prior to the game's launch. Even when there's an embargo by game publishers, the reviews are rushed to publication on the day of launch or at most a few days later. No publication wants to be late to the party, and they want to inform their readers as soon as they possibly can. With online multiplayer games this can backfire spectacularly because the reviewers cannot possibly know how such a game will pan out when released to the wider public.)

So what happened?

Extreme disappointment by gamers, that's what. Within a month the servers were pretty much empty, and you were lucky if you were able to play a match with competent players. Or anybody at all.

It turns out the game was much more boring, and much smaller in scope, than the hype had led people to believe. And it didn't exactly help that the publisher got greedy and riddled the game with outrageously expensive and sometimes completely useless DLC. (For example, getting one additional playable monster, something that would normally just be included from the start in this kind of game, cost $15. Many extremely minor cosmetic DLC items, such as a weapon with a different texture but otherwise identical in functionality to existing weapons, cost $2.)

In retrospect, many reviewers have considered Evolve to be one of the most disappointing games of 2015, one which didn't come even close to living up to its pre-launch hype.

What does this tell us? That pre-launch awards mean absolutely nothing, especially when we are talking about online multiplayer games. Pre-launch hype means absolutely nothing, and shouldn't be believed.

Steam Controller second impressions

I wrote a "first impressions" blog post about a week or two after I bought the Steam Controller. Now, several months later, here are my impressions with more actual experience using the controller.

It turns out that the controller is a bit of a mixed bag. With some games it works and feels great, much better than a traditional (ie. Xbox 360 style) gamepad; in other games, not so much. The original intent of the controller was to be a complete replacement for a traditional gamepad, and even for keyboard+mouse controls (although, to be fair, it was never claimed that it would be as good as keyboard+mouse, only that it would be a good-enough replacement, so that you could play while sitting on a couch rather than having to sit at a desk). With some games it fulfills that role; with others, not really.

When it works, it works really well, and I much prefer it over a traditional gamepad. Most often this is the case with games that are primarily designed for gamepads but support gamepad and mouse simultaneously (mouse for turning the camera, gamepad for everything else). In this kind of game, especially one that requires even a modicum of accurate aiming, the right trackpad feels so much better than a traditional thumbstick, especially when coupled with gyro aiming. (Obviously at first it takes a bit of getting used to, but once you do, it becomes really fluent and natural.)

As an example, I'm currently playing Rise of the Tomb Raider. For the sake of experimentation, I tried playing the game with both an Xbox 360 gamepad and the Steam Controller, and I really prefer the latter. Even with many years of experience with the former, aiming with a thumbstick is always awkward and difficult, while the trackpad + gyro make it so much easier and more fluent. Also, turning around really fast is difficult with a thumbstick (because turning speed necessarily has an upper limit), while a trackpad has essentially no such limitation. You can turn pretty much as fast as you physically can (the edge of the trackpad only limits how far you can turn in one sweep; the turning speed itself is pretty much unlimited.)

Third-person perspective games designed primarily to be played with a gamepad are one thing, but how about games played from first-person perspective? It really depends on the game. In my experience the Steam Controller can never reach the same level of fluency, ease and accuracy as the mouse, but with some games it can reach a high-enough degree that playing the game is very comfortable and natural. Portal 2 is a perfect example.

If I had to rate the controller on a scale from 0 to 10, where 10 represents keyboard+mouse, 0 represents something almost unplayable (eg. keyboard only), and 5 represents an Xbox 360 controller, I would put the Steam Controller around 8. Although as said, it depends on the game.

There are some games, even those primarily designed to be played with a gamepad, where the Steam Controller does not actually feel better than a traditional gamepad, but may even feel worse.

This is most often the case with games that do not support gamepad + mouse at the same time, and will accept gamepad input only. In this case the right thumbstick needs to be emulated with the trackpad. And this seldom works fluently.

The pure "thumbstick emulation" mode just can't compete with an actual thumbstick, because it lacks that force feedback that the springlike mechanism of an actual thumbstick has. When you use a thumbstick, you get tactile feedback on which direction you are pressing, and you get physical feedback on how far you are pressing. The trackpad just lacks this feedback, which makes it less functional.

The Steam Controller also has a "mouse joystick" mode, in which you still emulate the thumbstick, but control it as if it were a trackpad/mouse instead; in principle it is operated with the same kind of movements as an actual trackpad. This works to an extent, but it's necessarily limited. One of the major reasons is what I mentioned earlier: with real trackpad control there is no upper limit to your turning speed. However, since a thumbstick by necessity has an upper limit, this emulation mode has it as well. Therefore, when you instinctively try to turn faster than a certain threshold, it simply won't, which feels unnatural and awkward, like an uncomfortable negative acceleration. Even if you crank the thumbstick sensitivity to maximum within the game, it never fully works. There's always that upper limit, destroying the illusion of mouse emulation.
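
A rough sketch of why this happens (simplified Python, with made-up numbers): a mouse or trackpad feeds the game a per-frame displacement that can be arbitrarily large, whereas an emulated thumbstick can only report a deflection between -1 and 1, so any swipe beyond full deflection is simply thrown away:

# Why "mouse joystick" emulation caps your turning speed (simplified sketch,
# hypothetical numbers). A real stick axis can only report values in [-1, 1].

MAX_TURN_RATE = 180.0  # degrees per second at full stick deflection (example)

def camera_turn_mouse(mouse_dx, sensitivity=0.05):
    # Mouse/trackpad: the turn is proportional to how far you swiped.
    return mouse_dx * sensitivity           # degrees this frame, unbounded

def camera_turn_emulated_stick(mouse_dx, sensitivity=0.05, dt=1/60):
    # Stick emulation: the swipe is first squeezed into a deflection
    # in [-1, 1], so anything past full deflection is simply thrown away.
    deflection = max(-1.0, min(1.0, mouse_dx * sensitivity))
    return deflection * MAX_TURN_RATE * dt  # degrees this frame, capped

for dx in (5, 50, 500):                     # a slow, medium and fast swipe
    print(dx, camera_turn_mouse(dx), camera_turn_emulated_stick(dx))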

With some games it just feels more comfortable and fluent to use the traditional gamepad. Two examples of this are Dreamfall Chapters and Just Cause 2.

As for the slightly awkwardly positioned ABXY buttons, I always suspected that one gets used to them with practice, and I wasn't wrong. The more you use the controller, the less difficult it becomes to use those four buttons. I still wish they were more conveniently placed, but it's not that bad.

So what's my final verdict? Well, I like the controller, and I do not regret buying it. Yes, there are some games where the Xbox 360 style controller feels and works better, but likewise there are many games where it's the other way around, and with those the Steam Controller feels a lot more versatile and comfortable (especially in terms of aiming, which is usually significantly easier).

When video game critics and I disagree

Every year, literally thousands of new video games are published. Even if we discard the completely sub-par amateur trash, we are still talking about several hundred video games every year that could potentially be very enjoyable to play. It is, of course, impossible to play them all, not to mention how expensive that would be. There are only so many games one has the time to play.

So how to choose which games to buy and play? This is where the job of video game critics comes into play. If a game gets widespread critical acclaim, there's a good chance that it will be really enjoyable. If it gets negative reviews, there's a good chance that the game is genuinely bad and unenjoyable.

A good chance. But only that.

And that's sometimes the problem. Sometimes I buy a game expecting it to be really great because it has received such universal acclaim, only to find out that it's so boring or so insufferable that I can't even finish it. Sometimes such games even make me stop playing in record time. (As I have commented many times in previous blog posts, I hate leaving games unfinished. I often grind through unenjoyable games just to get them finished, because I hate leaving them incomplete so much. A game has to be really, really bad for me to give up. It happens relatively rarely.)

As an example, Bastion is a critically acclaimed game, with very positive reviews both from critics and the general gaming public. I could play it for two hours before I had to stop. Another example is Shovel Knight. The same story repeats, but this time I could only play for 65 minutes. The latter especially was so frustrating that I couldn't be bothered to keep playing. (And it's not a question of it being "retro", or 2D, or difficult. I like difficult 2D platformer games when they are well made. For example, I loved Ori and the Blind Forest, as well as Aquaria, Xeodrifter and Teslagrad.)

Sometimes it happens in the other direction. As an example, I just love the game Beyond: Two Souls for the PS4. When I started playing it, it was so engaging that I played about 8 hours in one sitting. I seldom do that. While the game mechanics are in some aspects a bit needlessly limited, that's only a very small problem in an otherwise excellent game.

Yet this game has received quite mixed reviews, with some reviewers being very critical of it. For example, quoting Wikipedia:
IGN gaming website criticised the game for offering a gaming experience too passive and unrewarding and a plot too muddy and unfocused. Joystiq criticised the game's lack of solid character interaction and its unbelievable, unintentionally silly plot. Destructoid criticised the game's thin character presentation and frequent narrative dead ends, as well as its lack of meaningful interactivity. Ben "Yahtzee" Croshaw of Zero Punctuation was heavily critical of the game, focusing on the overuse of quick time events, the underuse of the game's central stealth mechanics, and the inconsistent tone and atmosphere.
And:
In November 2014, David Cage discussed the future of video games and referred to the generally negative reviews Beyond received from hardcore gamers.
Needless to say, I completely disagree with those negative reviews. If I had made my purchase decision (or, in this case, the decision not to purchase) based on these reviews, I would have missed one of the best games I have ever played. And that would have been a real shame.

This is a real dilemma. How would I know if I would enjoy, or not enjoy, a certain game? I can mostly rely only on reviews, but sometimes I find out that I completely disagree with them. This both makes me buy games that I don't enjoy, and probably makes me miss games that I would enjoy a lot.

The genius of Doom and Quake

I have previously written about id Software, and wondered what happened to them. They used to be pretty much at the top of PC game development (or at the very least, part of the top elite). Their games were extremely influential in PC gaming, especially in the first-person shooter genre, and they were pretty much the company that made the genre what it is today. While perhaps not the first to invent all the ideas, they definitely invented a lot of them, and did it right, and their games are the primary source from which other games of the genre got their major game mechanics. Most action-oriented (and even many non-action-oriented) first-person shooters today use most of the same basic gameplay designs that Doom and especially Quake invented, or at least helped popularize.

But what made them so special and influential? Let me discuss a few of these things.

We actually have to start from id Software's earlier game, Wolfenstein 3D. The progression is not complete without mentioning it. This game was still quite primitive in terms of the first-person shooter genre, with severe technical limitations as well as gameplay limitations, as the genre was still finding out what works and what doesn't. One of the things Wolfenstein 3D started doing was making the first-person perspective game a fast-paced one.

There had been quite a few games played from the first-person perspective before Wolfenstein 3D, but the vast majority (if not all) of them were very slow-paced and awkward, and pretty much none of them had the player actually aim by moving the camera (with some possible exceptions). There were already some car and flight simulators and such, but they weren't really shooters. Even first-person airplane shooters were usually a bit awkward and sluggish, and they usually lacked that immersion, that real sense of seeing the world from the first-person perspective. (In most cases this was, of course, caused by the technical limitations of the hardware of the time.)

Wolfenstein might not have been the first game to try the idea of a fast-paced shooter seen from the first-person perspective, where you aim by moving the camera, but it certainly was one of the most influential ones. Though not nearly as influential as id Software's first huge hit, Doom.

Doom was even more fast-paced, had more and tougher enemies, and was even grittier. (It of course helped that game engine technology had advanced by that point to allow a much grittier environment, with a bit more realism). And gameplay in Doom was fast! It didn't shy away from having the playable character (ie. in practice the "camera") run at a superhuman speed. And it was really a shooter. Tons of enemies (especially with the hardest difficulty), and fast-paced shooting action.

Initially Doom was still experimenting with its control scheme. It may be hard to imagine today, but originally Doom was controlled with the cursor keys, with the left and right cursors actually turning the camera rather than strafing. There were, in fact, no dedicated strafe buttons at all. There was a button which, when held, made the left and right cursor keys strafe (ie. a kind of modifier button), but it was really awkward to use. At this point there was still no concept of the nowadays ubiquitous WASD key scheme (with the A and D keys strafing left and right).

Mouse support was, at first, little more than an afterthought, and it was mostly relegated to a curiosity for most players. As hard as it might be to believe, the concept of actually using the mouse to control the camera had not yet been established for first-person shooters. It was still assumed that you would just use the cursor keys to move forward and back, and turn the camera left and right (ie. so-called "tank controls").

Of course, since it was not possible to turn the camera up or down, there was less need for a mouse to control the camera in the first place.

Strafing in first-person shooters is nowadays an essential core mechanic, but not back in the initial years of Doom.

Quake was not only a huge step forward in terms of technology, but also in terms of gameplay and the control scheme. However, once again, as incredible as it might sound, the original Quake was still looking for that perfect control scheme that we take so much for granted today. If you were to play the original Quake with its default controls, they would still feel very awkward. But it was already getting there. (For example, now the mouse could be used by default to turn the camera, but only sideways. You had to press a button to have free look... which nowadays doesn't make much sense.) It wouldn't be until Quake 2 that we got pretty much the modern control scheme (even though the default controls of the original release were still a bit awkward, they were mostly configurable to a modern standard setup.)

Quake could, perhaps, be considered the first modern first-person shooter (apart from its awkward control scheme, which was fixed in later iterations, mods and ports). It was extremely fast-paced, with tons of enemies, tons of shooting, tons of explosions and tons of action. It also pretty much established the keyboard+mouse control scheme as the standard for the genre. Technically it of course looks antiquated (after all, the original Quake wasn't even hardware-accelerated; everything was rendered on the CPU, which at the time meant the Pentium. The original one. The 60-MHz one.) However, the gameplay is very reminiscent of a modern FPS game.

Doom and especially Quake definitely helped define an entire (and huge) genre of PC games (a genre that has been so successful, that it has even become a staple of game consoles, even though you don't have a keyboard and mouse there.) They definitely did many things right, and have their important place in the history of video games.