I bought myself a New Nintendo 3DS (not just a brand new one, but the device actually named "New Nintendo 3DS"). I had been pondering buying a 3DS for quite a long time, and the new version of the console finally gave me the impetus to buy one.
One of the major new hardware features of the console is better 3D via eye-tracking. The original 3DS has a parallax barrier 3D screen (which in practice means that the 3D effect is achieved without the need for any kind of glasses). Its problem is that it requires a very precise horizontal viewing angle to work properly. Deviate from it even slightly, and the visuals will glitch. The New 3DS improves on this by using eye-tracking and adjusting the parallax barrier offset in real time depending on the position of your eyes. This way you can move the device or your head relatively freely, and the 3D effect is not broken. It works surprisingly well. Sometimes the device loses track of your eyes and the visuals will glitch, but they fix themselves as soon as it reacquires them. (This happens mostly if you eg. look away and then back at the screen. It seldom happens while playing. It sometimes does, especially if the lighting conditions change, eg. if you are sitting in a car or bus and the position of the sun changes rapidly, but overall it happens rarely.)
There is, however, another limitation to the parallax barrier technology, which even eye-tracking can't fix: the distance between your eyes and the screen has to be within a certain range. Any closer or farther, and the visuals will start glitching once again. There is quite a lot of leeway here, ie. the range is relatively large, so it's not all that bad, and it sits at a comfortable viewing distance.
Conveniently, the strength of the 3D effect can be adjusted with a slider.
Some people can't use the 3D screen because of physiological reasons. Headaches are the most common symptom. For me this is not a problem, and I really like to play with full depth.
There are a few situations, however, where it's nicer to just turn the 3D effect off. For instance, if you want to look at the screen up close, or from quite far away (like when sitting on a chair with the console on your lap, quite far from your eyes), or for whatever other reason.
The 3D effect can also be turned off entirely, making the screen purely 2D: just slide the slider all the way down.
Except it didn't do that! Or so I thought for a year and a half.
You see, whenever the 3D effect is turned on, no matter how small the depth setting, the minimum/maximum eye distance problem is always there. If you eg. look at the screen too closely, it glitches, ever so slightly, and quite annoyingly. With a lower depth setting the glitching is significantly reduced, but it's still noticeable: some uneven jittering and subtle blinking happens whenever you move the device or your head at all, if you are looking at the screen from too close (or too far).
Even though I put the 3D slider all the way down, the artifacts were still there. For the longest time I thought that this might be some kind of limitation of the technology: even though it was claimed that the 3D could be turned off, it couldn't really be turned off completely.
But the curious thing was that if I played any 2D game, with no 3D support, the screen would actually be pure 2D, without any of the 3D artifacts and glitching in any way, shape or form. It would look sharp and clean, with no jittering, subtle blinking, or anything. This was puzzling to say the least. Clearly the technology is capable of showing pure and clean 2D, with the 3D effect turned completely off. But for some reason in 3D games this couldn't be achieved with the 3D slider.
Or so I thought.
I recently happened to stumble across an online discussion about turning off the 3D effect on the 3DS, and one poster mentioned that on the 3DS XL there's a notch, or "bump", at the lower end of the slider, so that you have to press it a bit harder and it will lock into place.
This was something I didn't know previously, but somehow it still didn't light a bulb in my head. However, while playing with the 3D slider, I happened to notice, looking at it from a better angle, that the slider wasn't actually going all the way down. There was a tiny gap between the slider and the end of the slot it slides in, even though I thought I had pushed it all the way down.
Then it dawned on me, and I remembered that online discussion (which I had read just minutes earlier): I pressed the slider a bit harder, and it clicked into its bottommost position. And the screen visibly became pure, clean 2D.
I couldn't help but laugh at myself. "OMFG, I have been using this damn thing for a year and a half, and this is the first time I notice this?" (Ok, I didn't think exactly that, but pretty much the sentiment was that.)
So yeah, the devil is in the details. And sometimes we easily miss those details.
I have to wonder how many other people never notice or realize this.
Thursday, July 7, 2016
Which chess endgame is the hardest?
In chess, many endgame situations with only a very few pieces left can be the bane of beginner players, and sometimes even of very experienced players. But which endgame could be considered the hardest of all? This is a difficult (and in many ways ambiguous) question, but here are some ideas.
Perhaps we should start with one of the easiest endgames in chess. One of the traditional endgame situations that every beginner player is taught almost from the very start:
This is an almost trivial endgame which any player could play in their sleep. However, let's make a small substitution:
Now it suddenly becomes significantly more difficult (and one of the typical banes of beginners). In fact, the outcome of this position depends on whose move it is. If it's white to move, white can win in 23 moves (with perfect play from both sides). However, if it's black to move, it's a draw, but it requires very precise moves from black. (In fact, there is only one move in this position that draws for black; every other move loses if white plays perfectly.) These single-pawn endings can be quite tricky at times.
But this is by far not the hardest possible endgame. There are notoriously harder ones, such as the queen vs. rook endgame:
Here white can win in 28 moves (with perfect play from both sides), but it requires quite tricky maneuvering.
Another notoriously difficult endgame is king-bishop-knight vs. king:
Here white can also win in 28 moves (with perfect play), but it requires a very meticulous algorithm to do so.
But all these are child's play compared to what is probably the hardest possible endgame. To begin with, king-knight-knight vs. king is (except for a very few particular positions) a draw:
But add one single pawn, and it's a win for white. And not a white pawn: a black pawn! It can be introduced in quite many places, and white will win. As incredible as that might sound:
Yes, adding a black pawn means that white can now win, as hard as that is to imagine.
(The reason for this is that the winning strategy requires white to maneuver the black king into a position where it has no legal moves, several times. Without the black pawn this would be stalemate. However, thanks to the black pawn, which black is then forced to move, the stalemate is avoided and white can proceed with the mating sequence.)
But this is a notoriously difficult endgame. So difficult that even most chess grandmasters may have difficulty with it (and many might not even know how to do it). It is so difficult that it requires quite a staggering number of moves: with perfect play from both sides, white can win in 93 moves.
"Tank controls" in video games
3D games are actually surprisingly old. Technically speaking, some games of the early 1970s were 3D, meaning they used perspective projection and had, at least technically, three axes of movement. (Obviously back in those days they were nothing more than vector graphics drawn with lines, or sprites, but technically speaking they were 3D, as contrasted with purely 2D games where everything happens on a plane.) I'm not talking here about racing games that give a semi-illusion of depth by having a picture of a road going to the horizon and sprites of different sizes, but actual 3D games using perspective projection of rotatable objects.
As technology advanced, so did 3D games. The most popular 3D games of the 80s were mostly flight simulators and racing games (which used actual rotatable, perspective-projected 3D polygons), although there were obviously attempts at other genres as well even back then. It's precisely these types of games, ie. flight simulators and anything that could be viewed as a derivative of them, that seemed most suitable for 3D gaming in the early days.
It is perhaps because of this that one aspect of 3D games was really pervasive for years and decades to come: The control system.
What is the most common control system for simple flight simulators and 3D racing games? The so-called "tank controls". This means that there's a "forward" button to go forward, a "back" button to go backwards, and "left" and "right" buttons to turn the vehicle (ie. in practice the "camera") left and right. This was the most logical control system for such games; after all, a plane or a car can't move sideways, because they don't move like that in real life either. Basically every single 3D game of the 80s, and well into the 90s, used this control scheme. It was the most "natural" and ubiquitous way of controlling a 3D game.
Probably because of this, and unfortunately, this control scheme was by and large "inherited" by all kinds of 3D games, even when the technology was used in other types of games, such as platformers viewed from a third-person perspective, and even first-person shooters.
Yes, Wolfenstein 3D, and even the "grandfather" of first-person shooters, Doom, used "tank controls". There was no mouse support by default (I'm not even sure there was support at all in the first release versions), and the "left" and "right" keys would turn the camera left and right. There was support for strafing (ie. moving sideways while keeping the camera looking forward), but it was very awkward: rather than having dedicated "strafe left" and "strafe right" buttons, Doom instead had a modifier button that made the left and right keys strafe. (In other words, if you wanted to strafe, you had to press the "strafe" button and, while keeping it pressed, use the left and right keys, just like using the shift key to type uppercase letters.) Needless to say, this was so awkward and impractical that people seldom used it.
Of course all kinds of other 3D games used "tank controls" as well, including many of the first 3D platformers, making them really awkward to play.
For some reason it took the gaming industry a really long time to realize that strafing, ie. moving sideways, was a much more natural and convenient way of moving than being restricted to moving back and forward and turning the camera. Today we take the "WASD" key mapping, with A and D as strafe keys, for granted, but this is a relatively recent development. As late as the early 2000s some games still hadn't transitioned to this more convenient form of controls.
The same goes for game consoles, by the way. "Tank controls" may have been even more pervasive and widespread there (usually due to the lack of configurable controller button mapping). There, too, it took a relatively long time before strafing became the norm. The introduction of twin-stick controllers made the transition much more feasible, but even then it took a relatively long time before it became the standard.
Take, for example, Resident Evil 4, released in 2005 for the PlayStation 2 and the GameCube, both of which had twin-stick controllers. The game still used tank controls and had no strafing support at all. This makes the game horribly awkward and frustrating to control; even infuriatingly so. And this even though modern twin-stick controls had already been the norm for years (Halo: Combat Evolved, for example, was published in 2001).
Nowadays "tank controls" are only limited to games and situations where they make sense. This usually means when driving a car or another similar vehicle, and a few other situations.
And not always even then. Many tank games, perhaps ironically, do not use "tank controls". Instead, you can move the vehicle freely in the direction pressed with the WASD keys or the left controller stick, while the camera stays pointed in its current direction and can be rotated independently with the mouse or the right controller stick (which in such games usually also aims the tank's gun). In other words, the direction of movement and the direction of aiming are independent of each other. This makes the game a lot more fluid and practical to play. A minimal sketch of the difference follows below.
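To make the contrast concrete, here is a minimal, hypothetical sketch in Python (not taken from any actual game) of the two update schemes. With tank controls the left/right input rotates the heading itself, so movement and facing are welded together; with the modern scheme the movement vector is computed relative to a camera that is rotated independently.

import math

def tank_update(pos, heading, forward, turn, dt, speed=5.0, turn_rate=2.0):
    # "Tank controls": forward/back move along the current heading,
    # and left/right rotate that heading (and thus the camera) itself.
    heading += turn * turn_rate * dt                  # turn is -1, 0 or +1
    x = pos[0] + math.cos(heading) * forward * speed * dt
    y = pos[1] + math.sin(heading) * forward * speed * dt
    return (x, y), heading

def twin_stick_update(pos, cam_yaw, move_x, move_y, look, dt, speed=5.0, look_rate=2.0):
    # Modern scheme: WASD / left stick give a movement vector relative to the
    # camera, while the mouse / right stick rotates the camera (and the aim)
    # completely independently of the movement direction.
    cam_yaw += look * look_rate * dt
    # Rotate the input vector into world space using the camera yaw.
    x = pos[0] + (math.cos(cam_yaw) * move_y - math.sin(cam_yaw) * move_x) * speed * dt
    y = pos[1] + (math.sin(cam_yaw) * move_y + math.cos(cam_yaw) * move_x) * speed * dt
    return (x, y), cam_yaw

In the first function the only way to change where you are going is to change where you are looking; in the second the two are decoupled, which is exactly the strafing behaviour described above.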
The origins of the "Lambada" song
The song "Lambada" by the pop group Kaoma, when released in 1989, was one of these huge hits that people started hating almost as soon as it hit the radio stations, mainly because of being overplayed everywhere.
Back then, its composition was generally misattributed to Kaoma themselves. It wasn't until much later that I heard it was actually a cover, not an original song. However, the story is a bit more interesting than that.
There are, of course, constantly cases of hugely popular hits that turn out to be covers of somebody else's song from, eg., the 50s or 60s. This one doesn't go that far back, but it's still interesting.
The original version, "Llorando se Fue", was composed by the Bolivian band Los Kjarkas in 1981. It's in Spanish, and while the melody is (almost) the same, the tone is quite different: it uses pan flutes, is a bit slower, and is overall very Andean in character.
See it on YouTube.
The song was then covered by the Peruvian band Cuarteto Continental in 1984. They substituted an accordion for the pan flute, already giving the song its distinctive tone, and their version is more upbeat and syncopated.
See it on YouTube.
The song was covered again by Márcia Ferreira in 1986. This was an unauthorized version translated into Portuguese; it's a bit faster, emphasizes the syncopation further, and is basically identical in arrangement to the Kaoma version, which was made in 1989.
See it on YouTube.
The Kaoma version, which is by far the best known, perhaps emphasizes the percussion and the syncopation even more.
See it on YouTube.
The dilemma of difficulty in (J)RPGs
A standard game mechanic that has existed since the dawn of (J)RPGs is that all enemies within a given zone have a certain strength (which often varies randomly, but only within a relatively narrow range). The first zone you start in has the weakest enemies, and they get progressively stronger as you advance in the game and reach new zones.
The idea is, of course, that as the player gains strength from battles (which is the core game mechanic of RPGs), it becomes easier and easier to beat those enemies, while the stronger enemies ahead keep the game challenging, since they require the player to level up in order to beat them. If you ever come back to a previous zone, the enemies there will still be as strong as they were last time, which usually means that they become easier and easier as the player becomes stronger.
This core mechanic, however, has a slight problem: it allows the player to over-level, which causes the game to become too easy, with no meaningful challenge left. Nothing stops the player, if he so chooses, from spending a big chunk of time in one zone leveling up and gaining strength, after which most of the following zones become trivial because the strength of the enemies is no longer on par. This may be true even for the final boss of the game.
The final boss is supposed to be the ultimate challenge, the most difficult fight in the entire game. However, if because of the player being over-leveled the final boss becomes trivial, it can feel quite anti-climactic.
This is not just theoretical; it does happen. Two examples where it has happened to me are Final Fantasy 6 and Bravely Default. Near the end of both games I got hooked on leveling up... after which the rest of the game became really trivial and unchallenging. And a bit anti-climactic.
One possible solution to this problem that some games have tried is to have enemies level up with the player. This way they always remain challenging no matter how much the player levels up.
At first glance this might sound like a good idea, but it's not ideal either. The problem with this is that it removes the sense of accomplishment from the game; the sense of becoming stronger. It removes that reward of having fought lots of battles and getting stronger from them. There is no sense of achievement. Leveling up becomes pretty much inconsequential.
It's quite rewarding to fight some really tough enemies which are really hard to beat, and then much later and many levels stronger coming back and beating those same enemies very easily. It really gives a sense of having become stronger in the process. It gives a concrete feeling of accomplishment. Remove that, and it will feel futile and useless, like nothing has really been accomplished. The game may also become a bit boring because all enemies are essentially the same, and there is little variation.
One possibility would be for only enemies that the player has not yet encountered to match the player's level (give or take a few notches); after they have been encountered for the first time in a particular zone, they would remain at that level for the rest of the game (in that zone). I don't know if this has been attempted in any existing game. It could be an idea worth trying; a rough sketch of the mechanic follows below.
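Purely as an illustration of that idea, here is a hypothetical sketch in Python (the class, names and numbers are all made up, not taken from any real game):

import random

class ZoneScaling:
    # Enemy levels in a zone are matched to the player's level on the first
    # encounter there, then frozen for the rest of the game.
    def __init__(self, variation=2):
        self.locked_levels = {}   # zone_id -> level frozen at first encounter
        self.variation = variation

    def enemy_level(self, zone_id, player_level):
        if zone_id not in self.locked_levels:
            # First encounter in this zone: match the player's current level.
            self.locked_levels[zone_id] = player_level
        base = self.locked_levels[zone_id]
        return base + random.randint(-self.variation, self.variation)

scaling = ZoneScaling()
print(scaling.enemy_level("ruins", player_level=12))   # roughly 10-14
print(scaling.enemy_level("ruins", player_level=35))   # still roughly 10-14

Returning to a zone much later keeps its old, now-easy difficulty, so the sense of having grown stronger is preserved, while zones you have not yet visited still track the player's level.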
All in all, it's not an easy problem to solve. There are always compromises and problems left with all attempted solutions.
How can 1+2+3+4+... = -1/12?
There's an assertion that has become somewhat famous, as many YouTube videos have been made about it: that the infinite sum of all positive integers equals -1/12. Most people just can't accept it, because it seems completely nonsensical and counter-intuitive.
One has to understand, however, that this is not just a trick, or a quip, or some random thing that someone came up with on a whim. In fact, world-famous mathematicians came to that same conclusion independently of each other, using quite different methodologies. Some of the most famous mathematicians in history, including Leonhard Euler, Bernhard Riemann and Srinivasa Ramanujan, all arrived at that same result, independently, and using different methods. They didn't just assign the value -1/12 to the sum arbitrarily on a whim; they had solid mathematical reasons to arrive at that precise value and not something else.
And it is not the only such infinite sum with a counter-intuitive result. There are infinitely many of them. There is an entire field of mathematics dedicated to studying such divergent series. A simple example would be the sum of all the powers of 2:
1 + 2 + 4 + 8 + 16 + 32 + ... = -1
Most people would immediately protest to that assertion. Adding two positive values gives a positive value. How can adding infinitely many positive values not only not give infinity, but a negative value? That's completely impossible!
The problem is that we tend to instinctively think of infinite sums only in terms of their finite partial sums, and the limit that these partial sums approach as more and more terms are added. However, this is not necessarily the correct approach. The above sum is not a limit statement, nor is it some kind of finite sum. It's a sum with an infinite number of terms, and partial sums and limits do not directly apply to it. It's a completely different beast altogether.
Consider the much less controversial statement:
1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... = 2
ie. the sum of the reciprocals of the powers of 2. Most people would agree that the above sum is valid. But why?
To understand why I'm asking the question, notice that the above sum is not a limit statement. In other words, it's not:
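lim[n→∞] (1 + 1/2 + 1/4 + 1/8 + ... + 1/2^n) = 2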
This limit is a rather different statement. It is saying that as more and more terms are added to the (finite) sum, the result approaches 2. Note that it never reaches 2, only that it approaches it more and more, as more terms are added.
If it never reaches 2, how can we say that the infinite sum 1 + 1/2 + 1/4 + ... is equal to 2? Not that it just approaches 2, but that it's mathematically equal to it? Philosophical objections to that statement could certainly be made. (How can you sum an infinite number of terms? That's impossible. You would never get to the end of it, because there is no end. The terms just go on and on forever; you would never be done. It's just not possible to sum an infinite number of terms.)
Ultimately, the notation 1 + 1/2 + 1/4 + ... = 2 is a convention. A consensus that mathematics has agreed upon. In other words, we accept the notion that a sum can have an infinite number of terms (regardless of some philosophical objections that could be presented against that idea), and that such an infinite sum can be mathematically equal to a given finite value.
While in the case of convergent series the result is the same as the equivalent limit statement, we cannot use the limit method with divergent series. As much as people seem to accept "infinity" as some kind of valid result, technically speaking it's a nonsensical result, when we are talking about the natural (or even the real) numbers. It's meaningless.
It could well have been that divergent sums simply don't have a value, and that this would have been the convention agreed upon: just like 0/0 has no value, and no sensible value can be assigned to it, likewise a divergent sum would have no value.
However, it turns out that's not the case. When using certain summation methods, sensible finite values can be assigned to divergent infinite sums. And these methods are not arbitrarily decided on a whim, but they have a strong mathematical reasoning for them. And, moreover, different independent summation methods reach the same result.
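To sketch just one such method very roughly (analytic continuation, which is also the idea behind the zeta-function value usually quoted for the -1/12 result): for |x| < 1 the geometric series has the closed form

1 + x + x^2 + x^3 + ... = 1/(1-x)

The right-hand side is defined for every x except 1, so it can also be evaluated at x = 2, where it gives 1/(1-2) = -1; and that is exactly the value assigned above to 1 + 2 + 4 + 8 + .... Similarly, the zeta function ζ(s) = 1/1^s + 1/2^s + 1/3^s + ... converges only for s > 1, but its analytic continuation gives ζ(-1) = -1/12, which is the value assigned to 1 + 2 + 3 + 4 + ....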
We have to understand that a sum with an infinite number of terms just doesn't behave intuitively. It does not necessarily behave like its finite partial sums. The archetypal example often given is:
1 - 1/3 + 1/5 - 1/7 + 1/9 - 1/11 + ... = pi/4
Every term in the sum is a rational number. The sum of two rational numbers gives a rational number. No matter how many rational numbers you add, you always get a rational number. Yet this infinite sum does not give a rational number, but an irrational one. The infinite sum does not behave like its partial sums, nor does it follow the same rules. In other words:
"The sum of two rational numbers gives a rational number": Not necessarily true for infinite sums.
"The sum of two positive numbers gives a positive number": Not necessarily true for infinite sums.
Even knowing all this, you may still have a hard time accepting that a divergent infinite sum of positive values not only gives a finite result, but a negative one. We are so strongly attached to the notion of dealing with infinite sums in terms of their finite partial sums that it's hard to put that approach aside completely. It's hard to accept that infinite sums do not behave the same as finite sums, nor can they be approached using the same methods.
In the end, it's a question of which mathematical methods you accept on a philosophical level. Just consider that these divergent infinite sums and their finite results are serious methods used by serious professional mathematicians, not just some trickery or wordplay.
More information about this can be found eg. at Wikipedia.
Video games: Why you shouldn't listen to the hype
Consider the recent online multiplayer video game Evolve. It was nominated for six awards at E3, at the Game Critics Awards event, and won four of them (Best of the Show, Best Console Game, Best Action Game and Best Online Multiplayer). At Gamescom 2014 it was likewise named Best Game, Best Console Game (Microsoft Xbox), Best PC Game and Best Online Multiplayer Game. And that's just to name a few (it has been reported that the game received more than 60 awards in total).
Needless to say, the game was massively hyped before release. Some commenters were predicting it to be one of the defining games of the current generation. A game that would shape online multiplayer gaming.
After release, many professional critics praised the game. For example, IGN scored the game 9 out of 10, which is an almost perfect score. Game Informer gave it a score of 8.5 out of 10, and PC Gamer (UK) an 83 out of 100.
(Mind you, these reviews are always really rushed. In most cases reviews are published some days prior to the game's launch. Even when there's an embargo by game publishers, the reviews are rushed to publication on the day of launch or at most a few days later. No publication wants to be late to the party, and they want to inform their readers as soon as they possibly can. With online multiplayer games this can backfire spectacularly because the reviewers cannot possibly know how such a game will pan out when released to the wider public.)
So what happened?
Extreme disappointment by gamers, that's what. Within a month the servers were pretty much empty, and you were lucky if you were able to play a match with competent players. Or anybody at all.
It turns out the game was much more boring, and much smaller in scope, than the hype had led people to believe. And it didn't exactly help that the publisher got greedy and riddled the game with outrageously expensive and sometimes completely useless DLC. (For example, getting one additional monster to play, something that would normally just be included from the start in this kind of game, cost $15. Much of the DLC was extremely minor cosmetic stuff, such as a weapon with a different texture but otherwise identical in functionality to existing weapons, costing $2 apiece.)
In retrospect, many reviewers have considered Evolve one of the most disappointing games of 2015, one that didn't come even close to living up to its pre-launch hype.
What does this tell us? That pre-launch awards mean absolutely nothing, especially when we are talking about online multiplayer games. Pre-launch hype means absolutely nothing, and shouldn't be believed.
Steam Controller second impressions
I wrote earlier a "fist impressions" blog post, about a week or two after I bought the Steam Controller. Now, several months later, here are my impressions with more actual experience using the controller.
It turns out that the controller is a bit of a mixed bag. With some games it works and feels great, much better than a traditional (ie. Xbox 360 style) gamepad; with other games, not so much. The original intent of the controller was to be a complete replacement for a traditional gamepad, and even for keyboard+mouse controls (although to be fair it was never claimed that it would be as good as keyboard+mouse, only that it would be a good-enough replacement, so that you could play while sitting on a couch rather than having to sit at a desk). With some games it fulfills that role, with others not really.
When it works, it works really well, and I much prefer it over a traditional gamepad. This is usually the case with games that are primarily designed for gamepads but support gamepad and mouse simultaneously (mouse for turning the camera, gamepad for everything else). In this kind of game, especially ones that require even a modicum of accurate aiming, the right trackpad feels so much better than a traditional thumbstick, especially when coupled with gyro aiming. (Obviously it takes a bit of getting used to at first, but once you do, it becomes really fluid and natural.)
As an example, I'm currently playing Rise of the Tomb Raider. For the sake of experimentation, I tried playing the game both with an Xbox 360 gamepad and with the Steam Controller, and I really prefer the latter. Even with many years of experience with the former, aiming with a thumbstick is always awkward and difficult, and the trackpad + gyro make it so much easier and more fluid. Also, turning around really fast is difficult with a thumbstick (because turning speed necessarily has an upper limit), while a trackpad has essentially no such limitation: you can turn pretty much as fast as you physically can (the edge of the trackpad only limits how far you can turn in one sweep, not how fast).
Third-person perspective games designed primarily to be played with a gamepad are one thing, but how about games played from first-person perspective? It really depends on the game. In my experience the Steam Controller can never reach the same level of fluency, ease and accuracy as the mouse, but with some games it can reach a high-enough degree that playing the game is very comfortable and natural. Portal 2 is a perfect example.
If I had to rate the controller on a scale from 0 to 10, where 10 represents keyboard+mouse, 0 represents something almost unplayable (eg. keyboard only), and 5 represents an Xbox 360 controller, I would put the Steam Controller around 8. Although as said, it depends on the game.
There are some games, even those primarily designed to be played with a gamepad, where the Steam Controller does not actually feel better than a traditional gamepad, but may even feel worse.
This is most often the case with games that do not support gamepad + mouse at the same time and will only accept gamepad input. In this case the right thumbstick needs to be emulated with the trackpad, and this seldom works fluidly.
The pure "thumbstick emulation" mode just can't compete with an actual thumbstick, because it lacks that force feedback that the springlike mechanism of an actual thumbstick has. When you use a thumbstick, you get tactile feedback on which direction you are pressing, and you get physical feedback on how far you are pressing. The trackpad just lacks this feedback, which makes it less functional.
The Steam Controller also has a "mouse joystick" mode, in which you still emulate the thumbstick, but control it as if it were a trackpad/mouse instead; in principle it works like an actual trackpad, using the same kinds of movements. This works to an extent, but it's necessarily limited. One of the major reasons is what I mentioned earlier: with real trackpad control there is no upper limit to your turning speed. However, since a thumbstick by necessity has an upper limit, this emulation mode has one as well. Therefore when you instinctively try to turn faster than a certain threshold, it just won't, which feels unnatural and awkward, like an uncomfortable negative acceleration. Even if you crank the thumbstick sensitivity to maximum within the game, it never fully works. There's always that upper limit, destroying the illusion of mouse emulation. The little sketch below illustrates where that ceiling comes from.
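As a purely hypothetical illustration (a minimal Python sketch of the general idea, not Valve's actual implementation), the problem is that whatever the trackpad reports has to be squeezed into the fixed range a stick can physically report:

def emulated_stick_from_trackpad(pad_dx, sensitivity=0.02):
    # The trackpad movement for this frame is converted into a virtual stick
    # deflection. A real stick only reports values in [-1.0, 1.0], so the
    # result has to be clamped to that range.
    deflection = pad_dx * sensitivity
    return max(-1.0, min(1.0, deflection))

def turn_amount(deflection, max_turn_rate=3.0, dt=1.0 / 60.0):
    # The game then turns the camera proportionally to the deflection,
    # up to its own maximum turn rate.
    return deflection * max_turn_rate * dt

slow = emulated_stick_from_trackpad(pad_dx=20)    # 0.4: turns proportionally
fast = emulated_stick_from_trackpad(pad_dx=500)   # clamps to 1.0
print(turn_amount(slow), turn_amount(fast))
# With a real mouse the 500-unit swipe would turn 25 times further than the
# 20-unit one; here both end up capped by the same 1.0 ceiling, which is the
# upper limit on turning speed described above.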
With some games it just feels more comfortable and fluid to use a traditional gamepad. Two examples of this are Dreamfall Chapters and Just Cause 2.
As for the slightly awkwardly positioned ABXY buttons, I always suspected that one gets used to them with practice, and I wasn't wrong. The more you use the controller, the less difficult it becomes to use those four buttons. I still wish they were more conveniently placed, but it's not that bad.
So what's my final verdict? Well, I like the controller, and I do not regret buying it. Yes, there are some games where the Xbox 360 style controller feels and works better, but likewise there are many games where it's the other way around, and with those the Steam Controller feels a lot more versatile and comfortable (especially in terms of aiming, which is usually significantly easier).
When video game critics and I disagree
Every year, literally thousands of new video games are published. Even if we completely discard sub-par amateur trash, we are still talking about several hundred games every year that could potentially be very enjoyable to play. It is, of course, impossible to play them all, not to mention really expensive. There are only so many games one has time to play.
So how to choose which games to buy and play? This is where the job of video game critics comes into play. If a game gets widespread critical acclaim, there's a good chance that it will be really enjoyable. If it gets negative reviews, there's a good chance that the game is genuinely bad and unenjoyable.
A good chance. But only that.
And that's sometimes the problem. Sometimes I buy a game expecting it to be really great because it has received such universal acclaim, only to find out that it's so boring or so insufferable that I can't even finish it. Sometimes such games even make me stop playing them in record time. (As I have commented many times in previous blog posts, I hate leaving games unfinished. I often even grind through unenjoyable games just to get them finished, because I hate leaving them unfinished so much. A game has to be really, really bad for me to give up, and that happens relatively rarely.)
As an example, Bastion is a critically acclaimed game, with very positive reviews both from critics and the general gaming public. I could play it for two hours before I had to stop. Another example is Shovel Knight: the same story, but this time I only lasted 65 minutes. The latter in particular was so frustrating that I couldn't be bothered to continue. (And it's not a question of it being "retro", or 2D, or difficult. I like difficult 2D platformer games when they are well made. For example, I loved Ori and the Blind Forest, as well as Aquaria, Xeodrifter and Teslagrad.)
Sometimes it happens in the other direction. As an example, I just love the game Beyond: Two Souls for the PS4. When I started playing it, it was so engaging that I played about 8 hours in one sitting. I seldom do that. While the game mechanics are in some aspects a bit needlessly limited, that's only a very small problem in an otherwise excellent game.
Yet this game has received quite mixed reviews, with some reviewers being very critical of it. For example, quoting Wikipedia:
IGN gaming website criticised the game for offering a gaming experience too passive and unrewarding and a plot too muddy and unfocused. Joystiq criticised the game's lack of solid character interaction and its unbelievable, unintentionally silly plot. Destructoid criticised the game's thin character presentation and frequent narrative dead ends, as well as its lack of meaningful interactivity. Ben "Yahtzee" Croshaw of Zero Punctuation was heavily critical of the game, focusing on the overuse of quick time events, the underuse of the game's central stealth mechanics, and the inconsistent tone and atmosphere.

And:
In November 2014, David Cage discussed the future of video games and referred to the generally negative reviews Beyond received from hardcore gamers.

Needless to say, I completely disagree with those negative reviews. If I had made my purchase decision (or, in this case, the decision not to purchase) based on these reviews, I would have missed one of the best games I have ever played. And that would have been a real shame.
This is a real dilemma. How would I know if I would enjoy, or not enjoy, a certain game? I can mostly rely only on reviews, but sometimes I find out that I completely disagree with them. This both makes me buy games that I don't enjoy, and probably makes me miss games that I would enjoy a lot.
The genius of Doom and Quake
I have previously written about id Software, and wondered what happened to them. They used to be pretty much at the top of PC game development (or at the very least part of the top elite). Their games were extremely influential in PC gaming, especially in the first-person shooter genre, and they were pretty much the company that made the genre what it is today. While perhaps not the first to invent every idea, they invented a lot of them, did them right, and their games are the primary source from which other games of the genre got their major game mechanics. Most action-oriented (and even many non-action-oriented) first-person shooters today use most of the same basic gameplay designs that Doom and especially Quake invented, or at least helped popularize.
But what made them so special and influential? Let me discuss a few of these things.
We actually have to start from id Software's earlier game, Wolfenstein 3D; the progression is not complete without mentioning it. The game was still quite primitive in terms of the first-person shooter genre, with severe technical as well as gameplay limitations, as the genre was still finding out what works and what doesn't. One of the things Wolfenstein 3D pioneered was making the first-person perspective game a fast-paced one.
There had been quite many games played from the first-person perspective before Wolfenstein 3D, but the vast majority (if not all) of them were very slow-paced and awkward, and pretty much none of them had the player actually aim by moving the camera (with some possible exceptions). There were already some car and flight simulators and such, but they were not really shooters. Even first-person airplane shooters were usually a bit awkward and sluggish, and they usually lacked that immersion, that real sense of seeing the world from the first-person perspective. (In most cases this was, of course, caused by the technical limitations of the hardware of the time.)
Wolfenstein might not have been the first game that started the idea of a fast-paced shooter from the first-person perspective, where you aim by moving the camera, but it certainly was one of the most influential ones. Although not even nearly as influential as id Software's first huge hit, Doom.
Doom was even more fast-paced, had more and tougher enemies, and was even grittier. (It of course helped that game engine technology had advanced by that point to allow a much grittier environment, with a bit more realism). And gameplay in Doom was fast! It didn't shy away from having the playable character (ie. in practice the "camera") run at a superhuman speed. And it was really a shooter. Tons of enemies (especially with the hardest difficulty), and fast-paced shooting action.
Initially Doom was still experimenting with its control scheme. It may be hard to imagine it today, but originally Doom was controlled with the cursor keys, with the left and right cursors actually turning the camera, rather than strafing. There were, in fact, no strafe keys at all. There was a button which, when pressed, allowed you to strafe by pressing the left and right cursor keys (ie. a kind of mode switch button), but it was really awkward to use. At this point there was still no concept of the nowadays ubiquitous WASD key scheme (with the A and D keys being for strafing left and right).
In fact, there was no mouse support at all at first. Even later when they added it, it was mostly relegated to a curiosity for most players. As hard as it might be to believe, the concept of actually using the mouse to control the camera had not yet been established for first-person shooters. It was still thought that you would just use the cursor keys to move forward and back, and turn the camera left and right (ie. so-called "tank controls").
Of course, since it was not possible to turn the camera up or down, the need for a mouse to control the camera was smaller.
Strafing in first-person shooters is nowadays an essential core mechanic, but not back in the initial years of Doom.
Quake was not only a huge step forward in terms of technology, but also in terms of gameplay and the control scheme. However, once again, as incredible as it might sound, the original Quake was still searching for that perfect control scheme that we take so much for granted today. If you were to play the original Quake, the control scheme would still be very awkward. But it was already getting there. (For example, the mouse could now be used by default to turn the camera, but only sideways. You had to press a button to have free look... which nowadays doesn't make much sense.) It wouldn't be until Quake 2 that we got pretty much the modern control scheme (even though with the original version of that game the default controls were still a bit awkward, but mostly configurable to a modern standard setup.)
Quake could, perhaps, be considered the first modern first-person shooter (apart from its awkward control scheme, which was fixed in later iterations, mods and ports). It was extremely fast-paced, with tons of enemies, tons of shooting, tons of explosions and tons of action. It also pretty much established the keyboard+mouse control scheme as the standard scheme for the genre. Technically it of course looks antiquated (after all, the original Quake wasn't even hardware-accelerated; everything was rendered using the CPU, which at the time meant the Pentium. The original one. The 60-MHz one.) However, the gameplay is very reminiscent of a modern FPS game.
Doom and especially Quake definitely helped define an entire (and huge) genre of PC games (a genre that has been so successful, that it has even become a staple of game consoles, even though you don't have a keyboard and mouse there.) They definitely did many things right, and have their important place in the history of video games.
Deceptive 5 stars rating systems
I noticed something funny, and illuminating, when browsing the Apple App Store. One game had this kind of rating:
In other words, 2.5 stars (which is even stated as text as well). This would indicate that the ratings are split pretty evenly. 50% approval rate.
However, the game had four 1-star ratings and two 5-star ratings. 1 star is the minimum rating, and 5 stars is the maximum.
Wait... That doesn't make any sense. That's not an even split. There are significantly more 1-star ratings than 5-star ratings (in fact, double the amount). That's not even close to an even split. Four people rated it at 1 star, and only two at 5 stars.
Is the calculation correct? Well, the weighted average is (4*1+2*5)/(4+2) = 2.333 ≈ 2.5 (we can allow rounding to the nearest half star.)
So the calculation is correct (allowing a small amount of rounding). It is indeed 2.5 stars. The graphic is correct.
But it still doesn't make any sense. How can 4x1 star + 2x5 stars give an even split? That's not possible. There are way more 1-star ratings than 5-star ratings. It can't be an even split! What's going on here?
The problem is that the graphic is misleading. The minimum vote is 1 star, not 0 stars. (If there were a possibility of 0-star ratings, then the graphic would actually be correct.)
The graphic becomes more intuitive if we remove the leftmost star:
Now it looks more intuitive. Now it looks like it better corresponds to the 4x1 - 2x5 split. In other words, a bit less than 50% rating.
Or to state it in another way: The problem is that the leftmost star is always "lit", regardless of what the actual ratings are, which gives a misleading and confusing impression.
In reality the rating system should be thought of as being in the 0-4 range (rather than 1-5), with only four stars, and the possibility of none of them being "lit". Then it becomes more intuitive and gives a better picture of how the ratings are split.
As it is, with a range of 1-5, with the leftmost star always "lit", it gives the false impression of the ratings being higher than they really are. I don't know if they do this deliberately, or if they just haven't thought of this.
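To make the difference concrete, here is a small Python sketch using the numbers from this example (how Apple actually computes and renders the bar is my assumption, so treat the rendering part as an illustration only): it computes the average and then shows how full the star bar looks under the usual 1-5 convention versus the more honest 0-4 reading.

# Ratings from the example: four 1-star votes and two 5-star votes.
ratings = [1, 1, 1, 1, 5, 5]
average = sum(ratings) / len(ratings)     # 2.333...
shown = round(average * 2) / 2            # rounded to the nearest half star: 2.5

# Bar fill if the leftmost star is always lit (the apparent App Store convention):
naive_fill = shown / 5                    # 0.50  -> looks like an even split
# Bar fill if 1 star is treated as the true minimum (map the 1-5 range onto a 0-4 bar):
honest_fill = (shown - 1) / 4             # 0.375 -> clearly below half

print(f"average {average:.3f}, shown as {shown} stars")
print(f"fill, 1-5 convention: {naive_fill:.1%}")
print(f"fill, 0-4 convention: {honest_fill:.1%}")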
The genius of Pokémon games
The Pokémon games celebrate their 20-year anniversary this year. There are currently six "generations" of these games; each generation consists of one or two complementary pairs of games (each pair is essentially the same game, but with some catchable pokémon being different and some details in the storyline changed), plus a third individual game in some cases. If we count each complementary pair as essentially the same game, there are 16 distinct games in total (27 if we count all games individually). This is, of course, only counting the core games, not the side games or spinoffs (which usually are of a completely different genre and use completely different game mechanics).
Curiously, each of the core games uses essentially the exact same basic game mechanic. From the very first games of generation 1 to the latest games of generation 6.
Describing all the common aspects of the game mechanics would take too long, but to pick out the most essential points: it's basically a turn-based JRPG with a party of at most 6 pokémon, each with at most 4 moves, which they can learn as they level up or via special items. Pokémon belong to different "classes" with a rock-paper-scissors system of strengths and weaknesses against other such "classes". Wild pokémon can be defeated for exp, or caught and added to the player's roster. The core story consists of the protagonist starting with one starter pokémon and advancing from city to city, challenging gym leaders, and ultimately reaching the Pokémon League where they will fight the "Elite Four", as the ultimate challenge and "soft end" of the game (although in most of the games the gameplay continues after that with additional side quests and goals, and often even an expanded world and new catchable pokémon.) There are a myriad of other staples and stock features that appear in all of the games, but I won't make this paragraph any longer by listing them.
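As a tiny illustration of that rock-paper-scissors type system, here is a minimal Python sketch covering only the classic fire/water/grass triangle (the real games use a much larger type chart and a far more involved damage formula, so this is just a toy):

# A minimal slice of the type-effectiveness chart:
# attacker type -> defender type -> damage multiplier.
EFFECTIVENESS = {
    "fire":  {"grass": 2.0, "water": 0.5, "fire": 0.5},
    "water": {"fire": 2.0, "grass": 0.5, "water": 0.5},
    "grass": {"water": 2.0, "fire": 0.5, "grass": 0.5},
}

def type_multiplier(attack_type, defender_type):
    # Anything not listed is treated as neutral (x1.0).
    return EFFECTIVENESS.get(attack_type, {}).get(defender_type, 1.0)

print(type_multiplier("water", "fire"))   # 2.0 -> "super effective"
print(type_multiplier("fire", "water"))   # 0.5 -> "not very effective"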
Every single core game in the series, all 27 of them (so far), uses that exact same core game mechanic, and follows that same core plot. All of them. Their graphics have advanced with the hardware (even making a quite successful jump to 3D), and each new generation adds features (lots of new pokémon, new battle modes, new side quests, etc.), but essentially all follow the same pattern. One could pretty much say that if you have played one of them, you have played all of them.
Yet, somehow, against all logic, each game is as enjoyable to play as ever. Somehow they avoid a sense of endless repetition, even though that would describe them quite well. I have a hard time explaining why. That's the genius behind these games.
For example, when I bought my Nintendo DS, one of the very first games I bought for it was Pokémon White. A year or so later (and mostly inspired by Twitch Plays Pokémon) I bought Pokémon White 2. Being in the same "generation", it's a very, very similar game (much more similar than games in subsequent generations). The story is different, but other than that it's really similar. But I still found it really enjoyable and addictive, and I played it for even longer than the first one.
These games, from the very first ones, are really well designed. You could just pick up the very first games of the first generation, in all of their monochrome graphical quality, and enjoy them as much as the latest ones. All the essential game mechanics are there.
Not many game series succeed in this. People often laugh at game companies essentially releasing the same game year after year (usually sports games), and how people still keep buying them even though it doesn't make sense... but in this particular case, it does make sense, because each game is as enjoyable as ever.
One thing I like about the Pokémon games is how relaxing they are to play. They avoid frustration and stress (although this can go so far as to often make the games too easy). They can be played at a leisurely pace, without rush or pressure. Somehow the game mechanic just works.
Why did the PlayStation Vita fail?
The PlayStation Vita has sold less than 10 million units worldwide. In itself that might sound like a pretty decent amount, but it's dwarfed when we consider what the normal numbers for handheld consoles actually are: the PlayStation Portable sold about 82 million units, the Vita's biggest competitor the Nintendo 3DS has sold 58 million units, and the original Nintendo DS a whopping 154 million units. In this light the Vita, which has sold roughly a tenth of what its competitors, and even its own predecessor, sold, is arguably a failure.
And that's not all. The system is arguably also a failure, even a bigger one, in terms of its game library. The game library for the Vita is pitiably small. And we are talking about all games available for the system here. The triple-A game library for the system is significantly smaller still, to an almost ridiculous level. It's hard to sell a console that has no games for it. And this is a vicious circle because developers won't make games for a system that doesn't sell.
Why did it fail so catastrophically? Many people agree that this was caused mostly by two factors: Sony's greed (which is easy to believe) and, perhaps surprisingly and unintuitively, the system being actually too powerful for its own good.
The first one is much easier to explain: Not only was the console quite expensive at launch, but on top of that Sony got really greedy and crippled it with a proprietary memory card, which was over twice as expensive as normal generic ones. What is worse, the Vita shipped without a memory card (at least one large enough to download any games from the PlayStation Store), which meant that a separate memory card purchase was pretty much required to be able to use the system, increasing the actual price of the console even further. (This was, in fact, a rather dirty tactic from Sony. Not only was the launch price of the system quite high, it was also artificially and deceptively lowered by not including a necessary component, which you then had to buy separately. In other words, there was a hidden cost, and not a very small one either.)
Even the smallest of the proprietary memory cards cost something like 20€, and it was so small as to be barely enough. If you wanted one you could actually use for digitally purchased games, you would easily end up paying 40 or 50€. On top of the price of the unit itself, of course. (At the same time, standard memory cards of the same capacity by other manufacturers cost less than half of that. And there is no technical reason why they couldn't work on the Vita, except Sony's greed.)
The second reason may be harder to fathom at first, but let me explain.
The Vita contains some impressive hardware for a handheld. It is, of course, hard to make comparisons, but it has approximately the same prowess as a PlayStation 3. On a handheld. And not only are the CPU and GPU powerful, the screen is quite impressive in itself, being large and high-resolution (for a handheld at least).
But how can being "too powerful" be a detriment for a handheld console?
The reason is that making games for a powerful console is more expensive. And with the Vita, it's a gamble. A game studio may spend millions creating a Vita game, and see it sell only a fraction of what's necessary to cover the costs.
Nowadays most smartphones are about as powerful (if not even more so) and have even higher screen resolutions (even ridiculously so), yet they are quite successful. How come? Well, cellphones are not competing on the exclusive market of video gaming. A cellphone is not a dedicated gaming console. It's a smartphone; not just a phone, but essentially a portable mini-computer which you can use to do all kinds of things (such as browse the internet, message with people, and use all kinds of apps and games.)
The Vita, however, is just that: A gaming console. That's its principal purpose. Sure, you may be able to surf the internet with it, but nobody uses it for that purpose. It's not a smartphone.
And a gaming console needs a healthy library of games, or else it won't succeed.
The Vita has thus entered a vicious cycle from which it can't get out: It's too expensive and too risky to make big triple-A games for it, which means that its game library is very small, which means that people won't buy the console, which means that game studios won't make games for a console that doesn't sell... and so on.
It didn't help that Sony got greedy about it. Maybe if they hadn't been so greedy, it would have been a different story, even with its current hardware prowess and subsequently increased development costs. But they were, and this happened.
"Mary Sue" characters
"Mary Sue" is an archetype of fiction (usually used unintentionally by the writer). The term is used mostly in a derogatory manner. It's, essentially, a character without flaws. A character that's just a bit too perfect, and seemingly can do no wrong, essentially makes no mistakes, and shows no weakness. Basically always a "lawful good" character that's nice to everybody.
Writers, even experienced ones, sometimes mistakenly make one of their major characters like this, perhaps in a misaimed attempt at making a likeable character that can be admired and rooted for. A hero of sorts (even if the character never does anything of great importance or performs literally heroic acts.) Sometimes the character is physically weak, but essentially a saint and philanthropist who loves everybody and is always kind and helpful. Sometimes the character is an actual action hero, an ace, who kicks villains' collective asses and always saves the day. A hero to be admired and adulated. Usually they have no character flaws, and always act in the correct way for the situation.
The problem with these characters is that, somewhat ironically, they may end up feeling unlikeable. The complete opposite of what the writer intended. By being too perfect, too nice, and with literally no character or any other flaws, the character may end up unintentionally feeling distant, sappy, and unrelatable.
The Star Wars movies provide (at least) two prominent examples. In Episode I, the child Anakin is considered by most critics to be a perfect example of such a "Mary Sue" character, and he is almost universally disliked, if not outright hated. In the new Episode VII the character of Rey is also seen by many as a flawless "Mary Sue". She is not universally disliked, but the general feeling seems to be at the very least one of indifference. She doesn't make much of an impact, even though she's supposed to be one of the main characters.
Another example, perhaps one of the most infamous "Mary Sue" characters, appeared in the first seasons of Star Trek: The Next Generation. Namely, the character of Wesley Crusher, who has been almost universally deemed insufferable.
I think that the major problem with "Mary Sue" characters is that the viewers feel no empathy for them. Empathy is a big psychological aspect of what makes fictional characters likeable or dislikeable. When a character has flaws, be they personality flaws or otherwise, and is well written and well executed, the viewer may feel empathy for that character. (Although it's also very possible that certain flaws make the character dislikeable and even disgusting. This may be intentional, if done well, or unintended if done poorly.)
When a character is too perfect and flawless, it doesn't trigger empathy. It may not trigger any strong emotions at all, which may leave the character uninteresting and give a feeling of indifference at best. At worst the character may end up being hated for being just a tad bit too obnoxious.
Another problem with these characters is that they tend to lack depth, making them flat and hollow, with no realistic personalities. Being flawless is not a personality trait. They don't feel much like actual real human beings.
That's not to say that a flawless character is never liked. Superman and Indiana Jones are probably examples (at least in their earlier incarnations.) It's just that it can be hard to pull off successfully. (In the case of Indiana Jones, the movies not taking themselves too seriously helps. In the case of Superman... well, I don't really know.)
The spoiler that nobody noticed (Terminator 2)
The 1984 movie The Terminator by James Cameron was a big hit in the 80's. While sci-fi movies with killer robots were nothing new (they go back probably at least to the 50's, if not earlier), this was a movie that got really popular, and which transcended other, sillier movies of the genre. It was gritty, it was thrilling, and it's an integral part not only of 1980's cinema but of the culture of that decade.
Probably everybody knows the basic premise (and with this kind of movie it's probably rather useless to warn about spoilers): a nigh-indestructible and unstoppable killer robot travels back in time from the future in order to kill the mother of the leader of the human resistance. The resistance also sends back a soldier, Kyle Reese, to protect the woman from the terminator. The film is chock-full of grand-scale chase sequences and enormous amounts of violence and explosions (in other words, it's one of the most iconic action thrillers in movie history.)
The killer robot, a "terminator", was played by Arnold Schwarzenegger, and it's by far one of his most iconic and memorable roles. If you don't know Arnold from any other role, you most probably know him at the very least from this one.
When the sequel came around in 1991, there was quite a lot of hype about it. Basically nobody could avoid hearing or reading about Arnold's new role in it: A good terminator. A terminator that now comes back in time to protect, rather than to kill.
And that was, in fact, a huge, huge spoiler... which basically nobody noticed. Not back then at least.
You see, Terminator 2 actually starts as if the same basic premise as in the first movie were playing out again. It starts very similarly: an Arnold-looking terminator again appears to travel from the future to the present and starts looking for John Connor, the future leader of the resistance. It also shows another man arriving. The movie sets it up to look like this man is another "Kyle Reese", sent this time to protect John Connor.
A good chunk of the beginning of the movie keeps their actual roles completely ambiguous. Someone who has seen the first movie, and has not been spoiled in any way, very easily gets the impression that the Arnold-looking terminator is the villain once again.
This ends up in a climactic scene where both the Arnold-looking terminator and the other guy find the young Connor at the same time, and it looks like the terminator is going to shoot him... And here is where the reveal, the twist, happens: It turns out that it was the other guy who was trying to kill him, and the Arnold-terminator is actually protecting him.
This is clearly intended to be a surprising twist... which ended up surprising basically nobody, because everybody had been spoiled. And curiously, nobody realized that it was a spoiler, and that this was supposed to be a surprise plot twist. (The reason for this is that the beginning is so ambiguous that it works the other way too. In other words, if you know in advance that the Arnold-terminator is the "good guy", the beginning still makes perfect sense. Thus it doesn't feel odd and there isn't anything out of the ordinary.)
It was a huge twist that basically everybody missed. Because everybody was spoiled by the hype and the advertising campaigns.
Steam Controller first impressions
Bought a Steam Controller, and here are some first impressions.
As you might know, originally the Steam Controller was designed to be more like this prototype:
Direction controls were to be handled exclusively with the two touchpads, and the four standard ABXY buttons were to be arranged like in that prototype, close to the touchpads. The center of the controller was envisioned to have a touchscreen with freely configurable content, the same idea as with smartphones. The standard "start" and "select" buttons were probably going to be at the bottom, as in the above prototype.
The final retail version, however, is a bit different (probably having gone through dozens of different iterations, based on actual testing):
The three most prominent changes are the removal of the proposed touchscreen, the addition of a traditional thumbstick, and the rearrangement of the ABXY buttons in a more traditional pattern. (The touchpads are also larger, as is the controller overall.)
However, I always felt that this arrangement is quite awkward. In the prototype the ABXY buttons are not in the traditional formation, but they are very close to the touchpads, making them easy to reach. In this arrangement, however, they are quite far from the right touchpad. Especially the most important button, ie. the A button, is, relatively speaking, really far away. (Of course you can map the buttons however you like, but it still feels awkward.)
The idea of the touchpads being the main mode of control was kind of thrown out of the window and defeated by the addition of the thumbstick. (You can still, of course, configure the controller to have the left touchpad act as a thumbstick, but given that you have an actual thumbstick, probably nobody is going to do that.)
There is nothing inherently wrong in that (ie. having a thumbstick.) There is a problem in its positioning, though. The controller still seems to consider the touchpads to be the main mode of control, while the thumbstick is relegated to a secondary role, a bit awkwardly positioned. (It's even more awkwardly positioned than on the PS4 controller, which has a similar positioning for the left thumbstick.)
Given that people (and most games by default) are going to use the thumbstick for movement anyway, why not put it in the natural position? Likewise I think that the ABXY buttons should have been put into their natural position, rather than awkwardly shoved where they are now.
In other words, I really think that the different controls should have been arranged like this:
This would have been, in fact, closer to the standard Xbox controller, with the left thumbstick on the upper left corner, and the "right thumbstick" (ie. the right touchpad in this case) in the lower-right "corner", with the ABXY buttons conveniently at the top-right of it.
As it is in the actual controller, it feels awkward to use the right touchpad and the ABXY buttons. As well as the left thumbstick.
Chess engines
I find chess engines, and how they have advanced in just the last ten years, really fascinating.
In 1997 the chess computer Deep Blue gained a lot of notoriety for being the first computer to beat the reigning world chess champion, Garry Kasparov, in a match of several games played with long time controls. (More specifically, 6 games were played, and the score was 3½–2½ in favor of the computer.)
Since then, however, chess engines have advanced in major leaps.
Note that Deep Blue was a dedicated computer for the sole purpose of playing chess. It could calculate approximately 200 million nodes (ie. chess positions) per second.
Modern top chess engines will run on a regular PC, and their calculation speed on such a PC, even when using eg. 4 cores, is less than 10 million nodes per second, only a small fraction of what Deep Blue was capable of. Yet these modern chess engines, running on a regular PC, are significantly stronger than Deep Blue was, and would easily beat it (and even the strongest chess grandmasters.)
The reason for this is that their search tree pruning and position evaluation functions (among other, more minor things) have vastly improved during the last 20 years. They are capable of finding the best moves with significantly less work than Deep Blue was. (It has been estimated that even Deep Blue back in 1997 would have become something like 200 Elo points stronger with some simple modern changes to its pruning functions, which would have made it significantly stronger than Kasparov.)
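To give a rough idea of what "search tree pruning" means in practice, here is the textbook alpha-beta idea as a generic Python sketch over an abstract game tree (this is nowhere near what Deep Blue or Stockfish actually do internally, but it shows the principle: whole branches that cannot affect the final result are skipped, so the same conclusion is reached while visiting far fewer nodes).

# Generic alpha-beta search sketch; `children` and `evaluate` stand in for a
# real engine's move generator and position evaluation function.
def alphabeta(pos, depth, alpha, beta, maximizing, children, evaluate):
    moves = children(pos)
    if depth == 0 or not moves:
        return evaluate(pos)
    if maximizing:
        best = float("-inf")
        for child in moves:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:   # the opponent will never allow this branch: prune it
                break
        return best
    else:
        best = float("inf")
        for child in moves:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best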
Modern chess engines do indeed easily find moves that the best chess engines of just ten years ago would have struggled with (not to mention engines of 1997). Consider, for example, this position from game 6 of the Kasparov vs. Deep Blue match:
Deep Blue is playing white, and Kasparov, as black, has just played h6. What is white's best response?
h6 is, in fact, a bad move. Kasparov seemingly played it to try to trick Deep Blue, because the proper response is so difficult that no chess engine of the day would play it. Deep Blue did play the correct move, but only because it was in its opening book, not because it calculated it by itself. No engine of the time (or probably even for ten years after) would have found the correct response on its own.
The best move is Nxe6. This is a pure knight-for-pawn sacrifice because there is no immediate regaining of the lost material. It's done purely for positional advantage. It was extremely hard for chess engines to see this move as good, because at first it seems like a pure loss of material with no apparent advantage.
Yet, if you give this position to a modern top chess engine, like Stockfish 6, to analyze (with no opening books or anything), it will find Nxe6 in less than a second.
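If you want to try this yourself, here is a minimal sketch using the python-chess library and a locally installed Stockfish binary; the FEN is my own reconstruction of the position after Kasparov's 7...h6 in that game, so verify it against a game database before trusting the output.

import chess
import chess.engine

# Position after 1.e4 c6 2.d4 d5 3.Nc3 dxe4 4.Nxe4 Nd7 5.Ng5 Ngf6 6.Bd3 e6 7.N1f3 h6
# (my reconstruction of the game 6 position).
board = chess.Board("r1bqkb1r/pp1n1pp1/2p1pn1p/6N1/3P4/3B1N2/PPP2PPP/R1BQK2R w KQkq - 0 8")

# Assumes a Stockfish executable named "stockfish" is on the PATH.
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
info = engine.analyse(board, chess.engine.Limit(time=1.0))
print("best move:", info["pv"][0])    # printed in UCI notation; g5e6 is Nxe6
print("evaluation:", info["score"])
engine.quit()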
Or consider this position from the so-called "Kasparov Immortal" game (Kasparov vs. Topalov, 1999):
Black (Topalov) has just played Qd6. What is the best response by white?
The best response by white, and what Kasparov played, is the Rxd4 sacrifice.
I watched a YouTube video something like ten years ago about this game, and the person analyzing the game said that he gave this position to one of the top chess engines of the time (I don't remember which), and it took it over an hour to find Rxd4 as the best move.
If I give that position now to Stockfish to analyze, it finds Rxd4 as the best move in less than a second.
That's not to say that modern chess engines find the solution to all possible problems so fast, even today. For example, consider this very hard and contrived problem: White to play, mate in 6:
While Stockfish extremely quickly evaluates this as being overwhelmingly favorable to white, it nevertheless takes it between 4 and 5 minutes on my computer to find the mate in 6. (Note that it's not specifically mate-searching, just analyzing the position. It might find it faster if it were set to mate-searching.)
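(For what it's worth, the UCI protocol also has a dedicated mate-search command. Assuming the engine supports it, the session would look something like this, instead of the open-ended "go infinite" / "go movetime" used for normal analysis:

    position fen <the mate-in-6 position>
    go mate 6

I haven't tested whether that actually finds this particular mate any faster.)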
Chess engines have become so strong that they easily beat even the strongest human players, even when run on regular PCs rather than dedicated hardware.
Visual simplification of user interfaces
There was a time, starting somewhere in the late 90's and continuing for over a decade, when operating systems and programs used fancier and fancier-looking graphics for GUI elements. This was true on all three major operating systems, and the majority of applications followed suit.
However, in recent years, for some reason, the trend has gone in the opposite direction. Sometimes to a ridiculous extreme.
Consider, for example, the window decorations in Windows 7 vs. those of Windows 10:
The swing in the opposite direction is just outright ridiculous. It goes so far as to be actually detrimental to usability. Of course every single fancy graphical effect is gone, and the symbols have become nothing but one-pixel-wide straight lines, but that's not all.
In Windows 10 there is no difference in coloration between the title bar of the active window and that of an inactive window; it's always just pure white. (Applications are supposed to define their own colorations, which of course no application currently does. Even then, it's incomprehensible why sensible defaults can't be used.) Also, there is no border around the buttons (a trend that's absolutely detrimental, and although it started well before Windows 10, it has now crept even into the title bar buttons.)
Also notice that there is zero visual difference between menu titles (which are interactive elements) and the title of the window. They are all the same. You simply have to know that those are interactive menu buttons, as there is no actual visual distinction otherwise. (Also, in Windows 10 the "click&drag" areas around the borders, which you can use to resize the window, are invisible, outside the visible border of the window. Again, there is no visual indication of where the window could be resized with the mouse.)
The fact that the window decorations are so utterly simplistic actually makes the interface sometimes very hard to use. When you have several windows open, one on top of another, all of them consisting of a plain white background and one-pixel-wide borders, it becomes visually hard to distinguish between the windows. Sometimes it's even hard to see where the title bar of one window is. Needless to say, this wasn't a problem in Windows 7.
Windows is not the only operating system embracing this trend. In the late 90's and the first decade of the 2000's, Mac OS X went in the direction of making fancier and fancier GUI designs, with all kinds of shiny graphical effects. Now they, too, have been going in the other direction. For example, just compare the upper left corner of Finder in older versions of Mac OS X with that of the current version:
Operating systems are not the only ones doing this. As a curious example, compare how the Google logo has changed over time. There has been a continuous trend towards simplicity there as well. Perhaps the most iconic version was the one used for longest, ie. the one used between 1999 and 2010. Then they simplified it, and simplified it, and simplified it... until the current version (from September 1, 2015) is just ugly. Not only are the shadows and lighting effects gone, but now even the font is a really ugly simplistic sans-serif that removes anything fancy even in the outline of the letters.
I really don't understand where this trend is coming from, and why everybody is embracing it. Sometimes you even see it in casual mobile games, with some games using amazingly simplistic graphics that look like they have been done in MS Paint. (And, quite incredibly, a few of these games are among the best-selling ones... which of course means thousands of copycats, who think that it's the simplistic graphics that sell the game.)
Hidden defects and talents
You know how some people don't realize, for example, that they are color-blind until well into adulthood, usually because some kind of test or other circumstance suddenly makes them aware that their vision isn't actually normal? Yes, it does happen.
The same happens with plain myopia as well. Surprisingly many people live for years with significant myopia without realizing it (and often only realize when they happen to take a vision test or similar.) It sounds unbelievable, but it does indeed happen. They have simply got so used to it that they don't even realize something is not completely right about their vision. (Usually they then become painfully aware of it, especially once they have tried corrective glasses, as the difference is like day and night.)
Oftentimes this happens with hidden talents too. That in itself doesn't sound that surprising, except perhaps with certain kinds of talent. Like singing voice.
I myself didn't realize that I could actually sing somewhat decently until I was well into my mid-20's. This even though I had since childhood been in many situations of group-singing etc. Singing was always very uncomfortable to me, and it felt really forced, and I had a quite horrible singing voice. For that reason I hated singing and almost never participated.
So what was the problem? The problem was that I was trying to sing on the same octave as everybody else. And that was just not the natural octave for me.
At one point I had a friend who sang one octave lower than the average male singing voice. You know, that really deep and distinctive bass singing voice. For fun I tried that myself, and to my surprise I realized, for the first time in my life, that I could actually easily reach those notes, and it felt a hundred times more natural and easy. As incredible as it may sound, I had never previously tried to sing that low. I had always imitated everybody else, without ever trying something different.
I'm not saying that I have a good singing voice, or a very loud one, but singing one octave lower than the normal male singing voice feels a lot more natural, and I'd say I can sing at least decently at those ranges. This range is a bit unusual because only a small minority of men can even reach those low notes at all.
It's a natural talent (or perhaps physical characteristic) that I didn't know I had until my mid-20's. It's curious, but it happens.
Steam Machines
Valve has in recent years launched a project that's effectively a "console-like" PC system. The entire system consists of a dedicated PC running SteamOS (which is a variant of Linux), a custom Steam Controller (with very interesting technological innovations), and an optional streaming device. None of them is tied strictly to the others, so any component can be used on its own (if you eg. already own a gaming PC.) The new Steam Controller, especially, is specifically designed to work with any PC that can run Steam, and to be extremely versatile and innovative. And from what reviewers have written, it looks extremely interesting.
There is one aspect of this project, however, that's badly marring it: The "Steam Machine". Or rather, the options available.
While the Steam Controller is funded and developed by Valve, and is fully their project (as far as I know), the "Steam Machines" are essentially just PCs from different manufacturers. Most of them may come in fancy boxes, but they are still just your regular old run-of-the-mill PCs.
And there lies the problem, albeit a bit indirectly: Because they are just normal PCs, made by PC manufacturers who build normal stock PCs as well, these "Steam Machines" are in no way cheaper than just a regular PC that you can buy anywhere (either pre-assembled, or in parts.) There is little incentive in buying a "Steam Machine" over a regular PC, because there is basically no price difference. In fact, many of these "Steam Machines" may actually be a bit more expensive than if you just bought all the exact same parts yourself or, sometimes, even if you bought an equivalent pre-assembled stock PC.
Moreover, by buying such a "Steam Machine" you are actually limiting yourself to a PC that has Steam OS (ie. a Linux variant) installed in it. It won't actually be able to play every single game available on Steam that's playable on Windows. So it is, essentially, a more limited PC that's no cheaper.
Note, however, that the "Steam Machine" project is still in development (as of writing this), and things may change in the near future. There may be some hiccups now with those machines and their pricing, but perhaps they will sort it out and optimize it in the future. (Although one could present the valid objection that they should be doing it right from the get-go.)
However, that's not the major problem with those machines (nor the main point of this post.) The major problem is the outright deceptive marketing that some of those "Steam Machine" manufacturers are using.
For example, consider this sales pitch on one Steam Machine brand's store page, about its graphics chip:
"Intel® Iris™ Pro Graphics 5200
The BRIX Pro is one of the first devices to boast the cutting edge graphics capabilities of Intel® Iris™ Pro graphics. Based on the latest graphics architecture from Intel®, Iris™ Pro Graphics use an on-package 128M eDRAM cache that negates memory pipeline issues, greatly boosting overall performance in 3D applications."
This is absolutely deceptive marketing language. It deceives an unaware buyer into believing that the graphics chip (the Intel Iris Pro 5200) is top-notch and highly efficient for gaming.
This is absolutely not so. The Intel graphics chips are integrated chips that are extremely modest in terms of efficiency, and intended mainly for very basic usage in laptops and non-gaming PCs.
In a current gaming PC, it's fair to assert that a GeForce GTX 680 (or any other card of similar efficiency) is pretty much the absolute minimum you ought to have in order to play modern video games with decent graphical quality. (The GTX 680 is getting older and older at a pretty fast rate, but it can still probably hold its own for a couple of years more, even for the newest games, as of writing this. Therefore if you are buying a new gaming PC, you should absolutely not be content with anything less.)
If we compare the Intel Iris Pro Graphics 5200 with a GeForce GTX 680 using a popular benchmarking tool, the former gets a rather pitiable result in comparison. You can see some results for example here.
In that benchmark, the GTX 680 got 5715 points, while the Intel Iris got 1191 points.
To better illustrate that difference, consider that a game that runs at about 60 frames per second on the GTX 680 will run at about 12 frames per second on the Intel Iris. (Of course the benchmark points do not necessarily translate exactly to equivalent framerates, but they are probably a good enough estimate.) 12 frames per second is beyond unacceptable. In practice you would have to lower the graphical quality of the game to ridiculously low levels (and even then it would probably have problems running smoothly.)
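As a back-of-the-envelope check of that figure, here is the arithmetic written out as a tiny Python snippet. It simply treats the two benchmark scores quoted above as roughly proportional to achievable framerate, which is an assumption for the sake of illustration, not a guarantee.

    # Rough proportionality estimate: scale a 60 fps result by the ratio of
    # the two benchmark scores quoted above.
    gtx680_score = 5715
    iris_score = 1191

    ratio = iris_score / gtx680_score      # about 0.21
    estimated_fps = 60 * ratio             # about 12.5 fps
    print(round(ratio, 3), round(estimated_fps, 1))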
That machine, that gaming PC, is being sold, effectively, without a graphics card. It only has the default integrated graphics chip that comes with the CPU/motherboard (which nowadays comes by default with all CPUs/motherboards.) That chip may be able to run games made in 2005 at a decent speed, but it will most certainly not run games made in 2015.
That in itself is a disgrace for what's supposed to be a modern gaming platform. However, it's even worse than that because said machine is being marketed deceptively, giving the unaware buyer the impression that it's much more efficient than it really is.
Why is Valve allowing this to happen? They would have all the incentive in the world to stop this kind of absurdly underpowered PC from being sold as a "Steam Machine", and even more so to stop that kind of deceptive marketing.
Lazy and deceptive DLC
There are, roughly speaking, three types of DLC available for video games.
Firstly, the right kind of DLC. In other words, DLC that expands the original game with new, additional playable content, such as additional levels, for example in the form of a continuation or side story to the main story. Optimally, the original game is a full story all in itself (and can, thus, be fully enjoyed on its own, without any DLC), and the DLC just adds additional extra gameplay to it.
The Talos Principle is a perfect example of this. The main game is a complete full game. Later a DLC became available with a smaller side story containing its own additional puzzles. Bioshock Infinite is another example.
Secondly, there's lazy DLC. Basically all paid cosmetic DLC is this. Also all paid DLC that only adds minimal and mostly inconsequential elements to the main game (rather than entirely new content that's independent of it), such as eg. new weapons. There's usually no reason to pay money for any of this. You aren't getting anything of substance.
Then there's deceptive DLC. There are several forms of this.
One of them is when the main game isn't actually complete, and instead you have to buy the rest of it as "DLC". (In the most egregious cases this gets so bad that the whole game is actually on the game disc, or in the game data that you downloaded from Steam or another such online service. You just have to "unlock" those extra parts by paying even more money.)
Note that this is different from a game that's "episodic". Episodic games are ok as long as either
- the total price of all the episodes is that of one single game, or
- each episode is actually a "full game" in its own right, in terms of content and length (thus justifying a full price for each "episode". This actually makes it more of a game series than a single "episodic" game.)
Games where you unlock more playable characters (eg. in a fighting game) fall somewhere between those two types of "DLC". They are unlockable (meaning that they are already in the game data from day one), but they are not essential for playing the whole game through. Thus they are somewhere in between the lazy variety and the greedy cash-grabbing variety.
Portal headcanon
At the beginning of Portal 2, the computer voice first tells the player that "you have been in suspension for fifty days" (with the "fifty" being said with a completely different, automatic-sounding voice). Then later it says "you have been in suspension for nine... nine... nine... nine... nine..." etc.
Every single fan theory I have seen always speculates how long the time has been between the two events. They speculate that it has been 9999999 days, or hours, or whatever.
That makes absolutely no sense. It's quite clear that a) an algorithm is (in-universe) used to pronounce the number, and b) said algorithm was malfunctioning because of the state of decay of the facility, trying to pronounce some number starting with 9, but ending up in an endless loop with that first digit, like a broken record.
Otherwise it wouldn't make any sense why it first says properly "fifty" (rather than "five zero") but then "nine, nine, nine...". If it had been trying to say a large number, it would have said eg. "nine million, nine hundred ninety-nine thousand..." etc. (It also wouldn't make any sense for it to use days in the first announcement, and then eg. hours.) It's rather obvious that it simply got in a loop when trying to say the first digit of the actual number. Said number could just as well have been, for example, 900 days, or 9000 days. Further evidence is that the loop slows down by the end, ie. the computer was getting more and more corrupted by the second.
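To make that argument concrete, here is a purely illustrative toy sketch in Python (emphatically not anything from the game's actual code) of the kind of number-to-words routine the argument assumes: such a routine emits whole words like "fifty" or "nine thousand", never a digit-by-digit reading, so an endless "nine, nine, nine..." reads far more naturally as a loop jammed on its first output word than as a genuine attempt to read out a number like 9999999.

    # Toy number-to-words routine (illustrative only). It produces whole words,
    # e.g. 50 -> "fifty" and 9000 -> "nine thousand", never "five zero".
    ONES = ["", "one", "two", "three", "four", "five", "six", "seven",
            "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
            "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
            "seventy", "eighty", "ninety"]

    def under_thousand(n):
        words = []
        if n >= 100:
            words += [ONES[n // 100], "hundred"]
            n %= 100
        if n >= 20:
            words.append(TENS[n // 10])
            n %= 10
        if n:
            words.append(ONES[n])
        return words

    def number_to_words(n):
        if n == 0:
            return ["zero"]
        words = []
        for size, name in [(1000000, "million"), (1000, "thousand"), (1, "")]:
            if n >= size:
                words += under_thousand(n // size)
                if name:
                    words.append(name)
                n %= size
        return words

    print(number_to_words(50))       # ['fifty']
    print(number_to_words(9000))     # ['nine', 'thousand']
    print(number_to_words(9999999))  # ['nine', 'million', 'nine', 'hundred', ...]

If a routine like this were stuck re-reading its first output word because of corrupted state, the result would be exactly a "nine... nine... nine..." loop, regardless of whether the underlying number was 900, 9000, or anything else starting with a nine.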
(Yes, I know that some background development material mentions tens of thousands of years, but as long as it doesn't appear in the game proper, I don't consider it canon. It's quite clear that the voice recording was intended to sound like a broken record, rather than just pronouncing a large number full of nines.)
It wouldn't make any kind of sense for the time to have been tens of thousands of years. No technology lasts that long, nor does any living being, no matter how deeply "in suspension" they might be. And the bed, bedsheets and furniture would have turned to dust by that time. (And the potatoes from the "bring your daughter to work" day wouldn't be there after tens of thousands of years.)
The facility was overgrown with vegetation. The facility had to be so deteriorated that it was crumbling and allowed vegetation to invade it. I don't think a few years would be enough for that, which is why 900 days (about 2.4 years) might be too little. My personal choice of the actual time passed is 9000 days (plus whatever the rest of the number is, which we don't hear). That's about 25 years. That ought to be enough for the condition we see the facility in. (Although, admittedly, the potatoes contradict even this. But perhaps we can safely ignore the potatoes. Or perhaps they were preserved by irradiation or something.)
On a completely different tangent, I have a radical hypothesis: The universe we see in the Portal games is not actually the same universe as we see in the Half-Life games. They are completely separate universes.
There is a Black Mesa in the Portal universe, but it's a different one from what we see in the Half-Life games. Also there is an Aperture Science in the Half-Life universe, but it's not the same as the one we see in the Portal games.
Why do I think this? Well, consider that in Portal 2 it's established that both Aperture and Black Mesa competed over a governmental contract, and the government chose Black Mesa.
However, that makes no sense if this were a single universe. Aperture had working, energy-efficient, light-weight, portable and even weaponizable teleportation technology, and had had it for decades. Black Mesa, on the other hand, was barely beginning to experiment with teleportation, using hangar-sized, highly experimental machines that could barely open a teleport to some other universe, and that were so unreliable that they effectively caused the end of the world when they went out of control. And this was when the events of Half-Life happened (which is quite strongly implied to be well after the government contract was awarded, meaning Black Mesa's technology was even more primitive at the time of the contract, while Aperture had already had working teleportation technology for years, even decades.)
It would make more sense that the Black Mesa of the Portal universe (ie. the Black Mesa we have never seen) was even more advanced than Aperture Science was, which is why the government chose the former. Conversely in the Half-Life universe Aperture was less advanced than Black Mesa (and significantly less advanced than the Aperture we see in the Portal games.)
Note that the Perpetual Testing Initiative establishes that there are multiple parallel universes, many of them with their own versions of Aperture Science and even Cave Johnson (who in some of those universes even had a different assistant, a guy named Greg.) The Half-Life series is depicting the Black Mesa of one of those parallel universes.