Consequentialism is the view that the appropriateness of an action is determined solely by its consequences and not by any additional moral demands. Consequentialism comes in many flavors, starting with the classic utilitarianism of Jeremy Bentham (1748–1832) and John Stuart Mill (1806–1873). Scalar consequentialism can be thought of as the version that judges an action not as absolutely appropriate or inappropriate, but only relative to other actions. The appropriateness of actions can be expressed with a scalar (a decimal number), and we can compare actions along a continuum from minimal to maximal value. Stijn van Gorkum argued at the Annual Estonian Philosophy Conference that this type of relative ethical calculus is no less tenable than traditional consequentialist accounts that claim that actions are definitely either right or wrong.
The game of chess is an interesting test bed for comparing various flavors of consequentialism, as well as other ethical approaches. A chess player has a clear goal: to win the game by checkmating the opponent’s king, or failing that, to achieve a draw in which neither side can win. Mathematically, the number of possible chess positions is finite, and so an omniscient chess player would know, for any position, whether it is won, lost, or drawn, given best play. In practice, however, unlike tic-tac-toe, chess is so combinatorially complex that it is beyond the resources of even the best humans or computers to know, in any given position, what outcome to expect if neither side makes a mistake. All of this makes chess an interesting model of life, especially for studying various approaches to decision making.
Scalar consequentialism seems to be the outlook of the typical chess computer. Its artificial intelligence appraises a position on the chess board by computing a weighted sum of features, where the weights express the expert opinion of grandmasters as to how to appraise advantages in material, pawn structure, command of the center, and so on. Each position gets a decimal score, which can be interpreted as how many pawns, and fractions of a pawn, Black is ahead of White, or vice versa. The chess program analyzes all variations, as many moves ahead as it can, and selects the move that leads to the highest score, supposing that its opponent plays its best as well.
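The mechanics can be sketched in a few lines. The following is a minimal illustration, not Stockfish’s actual algorithm: a toy game tree with hypothetical leaf scores, searched by negamax, where each side is assumed to pick the move whose scalar value is highest.

```python
# A sketch of scalar-consequentialist move choice: every position is reduced
# to one number, and the engine picks the move with the best minimax value.
# The tree and leaf scores below are hypothetical illustrations.

def negamax(position, depth, children, evaluate):
    """Best achievable score from `position` for the side to move,
    looking `depth` plies ahead."""
    moves = children.get(position, [])
    if depth == 0 or not moves:
        return evaluate(position)
    # Each side maximizes its own score, so we negate the opponent's best reply.
    return max(-negamax(m, depth - 1, children, evaluate) for m in moves)

def best_move(position, depth, children, evaluate):
    """The move leading to the highest score, assuming best play by both sides."""
    return max(children[position],
               key=lambda m: -negamax(m, depth - 1, children, evaluate))

# Toy game tree; leaf scores are in "pawns" from the side to move at that leaf.
children = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
scores = {"a1": -0.5, "a2": 3.0, "b1": 0.7}
evaluate = lambda p: scores.get(p, 0.0)

print(best_move("root", 2, children, evaluate))  # prints "b"
```

A real engine replaces the toy tree with legal move generation and the leaf scores with a weighted feature evaluation, but the principle is the same: pick the single highest number.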
I have lost some brutal games to my chess computer, the free app Chess Free by AI Factory Limited. But what surprises me is that at times I win. I understand that the merciless chess computer punishes me for my blunders. (I play it at its strongest level, rated 2100, albeit “casual,” which I presume is still “merciless.”) Yet how can I possibly win against an opponent who sees many moves further ahead, and who evaluates positions with much finer assessments? This is a philosophical question that we can study experimentally by analyzing our games with the more powerful Stockfish program, which is available online at Chess.com. Stockfish is an open-source chess engine, which means that we can inspect its code, uncover the rationale for every move, and also play against the engine.
The reasons that a chess computer loses can be taken as fundamental critiques of scalar consequentialism. Here are some plausible explanations that I have gathered from my games. The overall problem that all chess players face is that we typically have no direct way of approaching our goal, which is to checkmate our opponent’s king. What makes chess interesting is that there are all kinds of intermediate goals that can secure us a winning advantage. If we win a rook or a queen, then subsequently every exchange of pieces favors us because it increases our relative advantage. If our pawn breaks through our opponent’s pawns, then it threatens to reach the end of the board and become a queen. Our task is thus to size up our relative advantages. Humans have a rule of thumb that a pawn is worth 1 point, a knight 3 points, a bishop 3 points, a rook 5 points, and a queen 9 points. But Stockfish developers have fine-tuned this assessment with input from champion players and feedback from millions of games. If a knight is given weight 3, then in the middle game, a pawn is worth 0.72, a bishop 3.07, a rook 4.67, and a queen 9.23; and in the end game, a pawn is worth 0.91, a bishop 3.04, a rook 4.54, and a queen 9.07. A computer chess program factors in all other kinds of advantages as well. However, in the end, these calculations can miss the point, which is to checkmate one’s opponent’s king.
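To make the numbers concrete, here is a small sketch of how such a weighted material count might be computed, using the middlegame and endgame values quoted above; the position in the example is hypothetical.

```python
# Material balance with the fine-tuned piece values quoted in the text
# (knight fixed at 3; other values differ between middlegame and endgame).
MIDDLEGAME = {"pawn": 0.72, "knight": 3.0, "bishop": 3.07, "rook": 4.67, "queen": 9.23}
ENDGAME    = {"pawn": 0.91, "knight": 3.0, "bishop": 3.04, "rook": 4.54, "queen": 9.07}

def material_balance(white, black, weights):
    """Material score in pawns, positive if White is ahead.
    `white` and `black` map piece name -> count."""
    total = lambda side: sum(weights[p] * n for p, n in side.items())
    return total(white) - total(black)

# Hypothetical position: White is up the exchange (rook for bishop) but down a pawn.
white = {"pawn": 6, "rook": 2, "knight": 1, "queen": 1}
black = {"pawn": 7, "rook": 1, "bishop": 1, "knight": 1, "queen": 1}

print(round(material_balance(white, black, MIDDLEGAME), 2))  # 0.88
print(round(material_balance(white, black, ENDGAME), 2))     # 0.59
```

Note that the very same imbalance is worth 0.88 pawns in the middlegame but only 0.59 in the endgame, which is exactly the kind of fine distinction a human rule of thumb glosses over.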
The inherent problem of scalar consequentialism is that it’s greedy: it’s too smart for its own good. It gets into trouble because it keeps pushing for the biggest advantage. It’s like a driver who will take risks to get home ten seconds earlier. It may reject an easy win for some purported brilliancy that ranks a few ticks better but actually leaves its king wide open. As the adage goes, “The perfect is the enemy of the good.”
Now, its assessments could be adjusted to include a measure of safety. However, this just shifts its greed into a different dimension: it will play too safely in situations where it needn’t.
The underlying problem is that it treats all positions as static, independent, and equal by nature. If I sacrifice a piece, then it will calculate to the extent of its horizon, say 12 moves for White and 12 moves for Black, and if it can hold on to the piece for that long, then it will gladly accept it, not realizing that, in the bigger picture, it is lost. Chess programmers know this failure mode as the horizon effect.
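The failure can be demonstrated with a toy search. In the hypothetical tree below, grabbing a sacrificed piece looks best at a one-ply horizon, while a deeper search sees the refutation and declines; all positions and scores are made up for illustration.

```python
# Toy illustration of the horizon effect: with a shallow horizon the engine
# keeps the piece it "won", not seeing the loss that lies beyond its search depth.

def search(tree, scores, node, depth):
    """Negamax value of `node` for the side to move, looking `depth` plies ahead."""
    moves = tree.get(node, [])
    if depth == 0 or not moves:
        return scores[node]
    return max(-search(tree, scores, m, depth - 1) for m in moves)

def best_move(tree, scores, node, depth):
    return max(tree[node], key=lambda m: -search(tree, scores, m, depth - 1))

# "grab" takes the sacrificed piece but is refuted two plies later;
# "decline" keeps a quiet, equal game. Scores are from the side to move.
tree = {"root": ["grab", "decline"], "grab": ["refute"], "refute": ["lost"]}
scores = {"root": 0.0, "grab": -3.0, "decline": 0.0, "refute": 3.0, "lost": 9.0}

print(best_move(tree, scores, "root", 1))  # prints "grab"    (shallow horizon)
print(best_move(tree, scores, "root", 3))  # prints "decline" (sees the refutation)
```

At depth 1 the engine happily evaluates itself a piece up; only at depth 3 does the loss come inside its horizon.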
What I and other humans do is to make, execute, and sometimes abandon strategic plans by which we might improve our position in some stable way. Making plans is probably the simplest way to grow as a chess player. In doing so, we qualitatively distinguish between static positions, which punctuate the flow of the game, and dynamic positions, where new possibilities open up. A plan does not have to be the “best” by any measure. It simply has to extend an advantage and cement it. In our plans, we envisage beginning, middle, and end positions. And we have our hand on a throttle of complexity. As we achieve our plan, we want to cement our advantage, and keep the situation stable and the options simple. If we’re getting into trouble, then we might want to upset the board, if not knock it over. 🙂 As plan makers, we seek to maintain the initiative, which means that we’d rather push our opponent around than be pushed around by them. If we feel that we’re winning, then we don’t want to get into a game that is more complicated than we can handle. Whereas for a computer, it’s all the same because each position stands independently.
And so, even in cases where scalar consequentialism is objectively right, it doesn’t consider which moves are best for me, given my actual capabilities. My mental limitations make me incapable of executing certain winning strategies. By analogy, a particular ascent up a mountain may be optimal for the perfect climber, but not for one with physical limitations or unusual abilities.
Maybe it’s just me, but looking over my games with the computer, I realize that I believe in a God of chess. The idea is that if I play soundly, and my opponent plays rashly, then the game itself will always assure me of resources to thwart my opponent and increase my advantage. In other words, I will be rewarded for abiding by the moral demands of chess, such as developing my pieces, commanding the center, and castling to keep my king safe. There is a way in which this makes sense mathematically. As a rule of thumb, three pieces are needed for a successful attack, so that we can keep generating new threats. Each additional piece increases our options exponentially, if not more. Similarly with other resources. I suppose this is what Nimzowitsch meant by his rather mystical idea of “overprotection.”
In summary, I don’t have to calculate which variation is best. Instead, I can simply make a variety of plans that are good. I typically consider just a few options and calculate just a few moves ahead. But I keep rethinking my move order to see which combination keeps my opponent from upsetting my plans, and I keep looking out for new plans. Most importantly, unless I’m desperate, I don’t rely on tricks or inconclusive calculations. Instead, I believe that if I play soundly, if my pieces are cohesive and well positioned, then I don’t have to be able to calculate everything because in the zone of uncertainty, the God of chess will guide me through. Is this a fruitful metaphor for life?
Stijn van Gorkum did not think that any particular implementation of scalar consequentialism would shed light on his argument. He supposed that any weakness of scalar consequentialism could be trivially folded into the best scalar consequentialism. But my victories against my computer suggest to me, as I have discussed above, that there may be inherent problems with any scalar consequentialist program such as Stockfish, notwithstanding its ability to demolish Magnus Carlsen, the human world champion. My real interest, though, is simply to show how qualitative ideas in ethical decision making might be gleaned, isolated, and analyzed by studying how and why humans and chess computers make the moves they do, and whether that leads them to win or lose. The success of an ethical theory may be checked by how well it does at chess.
Great sources of top-level games by humans are the news posts at Chess.com. As for computers, see The Top Chess Engine Championship, and archived games by Stockfish and other top engines. I also offer my own games for consideration. 🙂
Here is my game above: 1. e4 e5 2. Nf3 d6 3. d4 exd4 4. Nxd4 Nf6 5. Nc3 Be7 6. Bc4 O-O 7. Be3 Nxe4 8. Nxe4 d5 9. Bd3 dxe4 10. Bxe4 f5 11. Bd3 f4 12. Qh5 h6 13. O-O-O fxe3 14. Bc4+ Kh8 15. Nf5 Bd7 16. Nxh6. Black resigns.
Andrius Kulikauskas is Lecturer in the Department of Philosophy and Cultural Studies at Vilnius Gediminas Technical University in Lithuania.