Political science, because it is interested in politics, has to be concerned with what is happening in the broader world. However, I’m afraid to say that, by and large, it tends to be a lagging rather than a leading indicator. It aspires towards being a science—in the sense of having some predictive capacities—but in practice, we political scientists tend to be much better at explaining what has happened than at predicting what is likely to happen in the future. Hence we are always trying to catch up with what is happening in the world at the moment. [...]
On the one hand, we have people in Communist China, like Jack Ma, suggesting that we may not need markets anymore; we may be at the point where planning is actually going to work because we’ve got machine learning. Machine learning is going to provide us with the sophisticated means to achieve what the planners were trying to achieve and where they failed. On the other hand, we’ve got the Silicon Valley model, which is trying to figure out ways to use machine learning techniques to turn raw information into patterned data that can then be turned towards a variety of commercial purposes, with the same kind of enthusiasm that people like Kantorovich had. This sudden, ‘Oh my God, we have the mathematics to turn all of these complicated miseries of human life into a set of engineering problems that can be optimised, isn’t that wonderful?’ sounds very familiar if you’ve read Spufford’s book. [...]
What commentators like Harari don’t get is the way in which these systems are not only incapable of grasping the messiness of actual human social systems, but can actually exacerbate the flaws of central planning. In authoritarian countries, China in particular, you have feedback loops between the categories that people in the central committees are using to try and understand the world, and the actual world they are trying to explain. We know how politics works in these systems. Very often, if you’re not implementing the thought of the beloved chairman, your superiors will decide that there’s something wrong with you and that you’re obviously a problematic political element who needs to be eliminated. So the categories you use are likely to reflect the ideas of your superiors, even if you know that they’re wrong. [...]
If you look at economics textbooks, they typically assume that we have complete information, that we understand everything about the environment we are in, that we can map out ad infinitum what strategies other actors are going to play against us, and that we have no bandwidth limits on our ability to process information. Simon says this is nonsense. We know human beings simply can’t do that. We are flawed. Our individual capacity to understand the world is limited, and so what we tend to do in ordinary life, he says, is go for good-seeming solutions that are obvious to us rather than for optimal ones. This means that a lot of the actual processes of cognition, or computation, that we carry out have to be offloaded onto other social systems rather than our individual brains. If we want to think about markets, in Simon’s sense, we should think about how they work and don’t work as massive systems of distributed computation.
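Simon’s contrast between optimising and satisficing can be made concrete with a small sketch. This is an illustrative toy, not anything from Simon or the interview: `aspiration` stands in for his “aspiration level”, and the payoff function is an arbitrary assumption.

```python
def optimize(options, value):
    """Textbook agent: evaluate every option exhaustively, return the best."""
    return max(options, key=value)

def satisfice(options, value, aspiration):
    """Simon-style agent: stop at the first option that is 'good enough'.

    Searching is costly and bandwidth is limited, so the agent accepts
    the first option whose value clears its aspiration level, rather
    than scanning the whole choice set for the maximum.
    """
    for opt in options:
        if value(opt) >= aspiration:
            return opt
    # Nothing clears the bar: fall back to the best option seen.
    return max(options, key=value)

payoffs = [3, 7, 5, 9, 2]          # hypothetical payoffs, in encounter order
value = lambda x: x

best = optimize(payoffs, value)     # 9: requires inspecting every option
good_enough = satisfice(payoffs, value, aspiration=6)  # 7: stops early
```

The satisficer settles for 7 after two evaluations, while the optimiser must inspect all five options to find 9; the gap between the two is the computational burden that, on Simon’s account, gets offloaded onto social systems like markets.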