Monday, December 19, 2005

HUA + Utilitarianism = Optimizationalism

In true philosophical form, I continue with my paper and create a huge word-mass that hopefully looks impressive. It will probably require a lot of explanation (and probably still misses a number of key areas, since the philosophical process is never really 'done'), but it really boils down to what I mentioned in the previous section: utilitarianism is compatible with HUA.


As Raph Koster writes in his book, A Theory of Fun in Game Design, people are pattern-driven beings, and it shows in the way we approach our games: we strive to optimize our play by learning the pattern of the game. Over time, a simple pattern like Tic-Tac-Toe becomes boring, much like one of Wolfram's Class II computations: a skilled player can force a cat's game, or work their way around one, almost every time. A more complex game like chess, however, has exponentially more patterns, and optimal play there serves two functions: it increases the time that my king stays alive and decreases the time that my opponent's king stays alive. Although Koster expresses this idea in terms of gaming, this process of optimization is highly adaptable to HUA and utilitarianism, especially if we are interacting in a large-scale MMO game!
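The "simple patterns get boring" point can be made concrete with Wolfram's elementary cellular automata. The sketch below is my own illustration (not from Koster or Wolfram): it runs two rules from a single live cell and counts update steps until the row repeats a state. A Class II rule like 108 locks into a repeating pattern almost immediately, while Class IV rule 110 keeps producing novel configurations for far longer.

```python
def step(cells, rule):
    """Apply an elementary CA rule number to a row of cells (periodic edges)."""
    n = len(cells)
    return tuple(
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def steps_until_repeat(rule, width=16, limit=500):
    """Run the CA from a single live cell until a state repeats (or limit)."""
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    seen = {cells}
    for t in range(1, limit + 1):
        cells = step(cells, rule)
        if cells in seen:
            return t
        seen.add(cells)
    return limit

print(steps_until_repeat(108))  # Class II: a lone cell is a fixed point, repeats at step 1
print(steps_until_repeat(110))  # Class IV: many steps before anything recurs
```

In Koster's terms, rule 108 is the game you "solve" and abandon; rule 110 keeps generating pattern worth learning.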

When united with HUA, utilitarianism takes on a different form, as mentioned earlier. Instead of simply acting towards the maximum overall benefit of individuals, expanding the consideration to include the preservation of benefactors transforms the theory by adding a principle of optimal calculation: calculations that produce the maximum benefit and least harm for the greatest number of individuals concerned must themselves be optimized – facilitating the benefactor so that it can continue to produce maximum social utility for as long as possible.
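As a toy illustration of this benefactor-preserving principle (the renewable-resource framing and all numbers here are my own invention, not from the text): greedily maximizing each individual act's utility can destroy the source of that utility, while a policy tuned to preserve the benefactor yields more total benefit over the whole process.

```python
def run_policy(harvest_fraction, resource=100.0, growth=1.2, rounds=10):
    """Total utility from repeatedly harvesting a renewable benefactor."""
    total = 0.0
    for _ in range(rounds):
        take = resource * harvest_fraction
        total += take                           # utility of this single act
        resource = (resource - take) * growth   # the benefactor regrows
    return total

greedy = run_policy(1.0)    # act-optimal: take everything each round
sustain = run_policy(0.25)  # process-optimal: preserve the benefactor
print(greedy, sustain)      # the sustaining policy wins over the full run
```

The greedy policy maximizes every individual calculation and exhausts the benefactor after one round; the sustaining policy is what the added principle of optimal calculation selects.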

Utilitarianism is concerned with specific acts and their consequences. The form of optimization being proposed here focuses on processes – not just one step in the proverbial cellular automaton, but the rule itself and the process by which it operates. Since a process is simply a series of update steps, this method of optimization is not really a replacement for utilitarianism, but a larger framework of which the principle of utility is one element and in which the tenets of utilitarianism play a large role.

There are several considerations when acting in this fashion. First, it is important to try and assess the impact on as many levels as possible. Who will it affect? What will be affected? Will the effects for these things be positive or negative? To what degree? It is vital to act towards the greatest net benefit: to facilitate and optimize as many beneficial calculations that will create a positive overall impact, while terminating, deflecting, or otherwise redirecting harmful calculations that could have a detrimental impact.

Is it possible to fully know the results of a situation prior to acting? No. Rucker and Wolfram both agree that a sufficiently complex computation may be irreducible, and therefore unpredictable: the only way to truly find the result is to run the computation to that point, because no shortcut exists that can compress or accelerate the calculation. Utilitarians (and optimizationalists) would be comfortable with this objection, because their goal isn't necessarily to predict the future with 100% certainty prior to acting, but simply to act in a manner that is expected to produce the maximum net benefit.
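One way to see what Rucker and Wolfram mean is to contrast a reducible automaton with an apparently irreducible one. Rule 90 happens to have a known shortcut: starting from a single cell, its state at step t is row t of Pascal's triangle mod 2, computable directly without simulating steps 1 through t-1. No comparable formula is known for rule 110, so the only general route to its step-t state is to run all t updates. The sketch below is my own illustration of that contrast.

```python
from math import comb

def simulate(rule, t, width):
    """Run an elementary CA for t steps from a single live cell (periodic edges)."""
    cells = tuple(1 if i == width // 2 else 0 for i in range(width))
    for _ in range(t):
        cells = tuple(
            (rule >> (cells[(i - 1) % width] * 4 + cells[i] * 2 + cells[(i + 1) % width])) & 1
            for i in range(width)
        )
    return cells

def rule90_shortcut(t, width):
    """Rule 90's state at step t, computed directly: live iff C(t, k) is odd."""
    c = width // 2
    return tuple(
        comb(t, (i - c + t) // 2) % 2
        if abs(i - c) <= t and (i - c + t) % 2 == 0 else 0
        for i in range(width)
    )

t, width = 20, 64
print(simulate(90, t, width) == rule90_shortcut(t, width))  # the shortcut agrees
```

For rule 90 the two routes agree, so prediction is cheap; for rule 110 we have no `rule110_shortcut` to write, which is exactly the irreducibility being described.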

The nature of unpredictable computations means a multitude of circumstances can affect a given decision, so optimizationalists wouldn't focus on one cookie-cutter choice to govern right action. Instead, they would gauge their response based on the specific situation, with the right choice being the one that produces the maximum net benefit for that situation – not just for the moment, but for the long term as well. Different situations might necessitate different responses. This doesn't make the theory inconsistent, because we are responding to the unpredictable way our computational reality unfolds! Our moral response cannot itself be a repeating Class II computation; it must be a dynamic Class IV thought process that makes the effort to project the best possible long-term outcome, even though it cannot do so with certainty.

Lastly, even though the calculation of the most optimal action has to include the 'big picture', we must make sure to include ourselves in the decision as an equally weighted factor whenever we are involved. It's common to want to abstract ourselves out of the situation so as not to appear biased, but excluding oneself from the decision is an inverse form of the same bias. Every factor in the calculation needs to be included; otherwise the projected outcome, and any decisions based on that projection, may be skewed.

Optimizationalism is an ethical system that is the natural product of the marriage of utilitarianism and universal automatism. Since HUA centers on the idea that everything is a computation, moral decisions made under this worldview must also be some kind of calculation. In order to live in harmony with our fellow calculations (PC and/or NPC), we need to act in a manner that yields the maximum social utility and facilitates the optimal circumstantial conditions for maximizing that beneficial state.

