Monday, December 19, 2005

End of my paper

Here's the conclusion from my paper:

Hard universal automatism may seem a cold worldview to some, since its core premise claims that everything is a computation, but that does not mean HUA believers are predisposed to act coldheartedly, prone to solipsism and egoism. In fact, it is much the opposite: it is the interest in Class IV computations that drives them to see the beauty in the world and the people in it, and to act as facilitators and optimizers in the larger MMO picture in order to generate the maximum social and computational utility from their actions. We are blessed with the ability to interact with and influence the world, and it is in everyone's best interest that our actions serve as the input to create the best possible output state that we can produce.


I'm not sure I'm 100% happy with what I've written. It feels like there's more to be explained. Originally my paper had ballooned to the size of a small book, and I eventually trashed it all and started anew to focus it, but in retrospect, I wish I had made it longer and fleshed out my ideas more.

Well, I guess that's the benefit of having a blog. Even though I've submitted it for grading, the paper doesn't have to end, and I can keep on thinking here.

HUA + Utilitarianism = Optimizationalism

In true philosophical form, I continue my paper and create a huge word-mass that hopefully looks impressive and will probably require a lot of explanation (and probably still misses a number of key areas, since the philosophical process is never really 'done'), but it really boils down to what I mentioned in the previous section about utilitarianism being compatible with HUA.

Enjoy:

As Raph Koster writes in his book, A Theory of Fun for Game Design, people are pattern-driven beings, and it shows in the way we approach our games: we strive to optimize our play by learning the pattern of the game, and over time a simple pattern like Tic-Tac-Toe becomes boring… similar to Wolfram’s Class II computation. A skilled player can force a cat’s game, or work their way around one, almost every time. A more complex game like chess, however, has exponentially more patterns, and optimal gameplay serves two functions: it increases the time that my king stays alive, and it decreases the time that my opponent’s king stays alive. Although Koster expresses this idea in terms of gaming, this process of optimization is highly adaptable to HUA and utilitarianism… especially if we are interacting in a large-scale MMO game!

When united with HUA, utilitarianism takes on a different form, as mentioned earlier. Instead of simply acting towards the maximum overall benefit of individuals, expanding the consideration to include the preservation of benefactors transforms the theory by adding a principle of optimal calculation: the calculations which produce the maximum benefit and least harm for the greatest number of individuals concerned must themselves be optimized – facilitating the benefactor so that it can continue to produce maximum social utility for as long as possible.

Utilitarianism is concerned with specific acts and their consequences. The form of optimization being proposed here focuses on processes – not just one step in the proverbial cellular automaton, but the rule itself and the process by which it operates. Since a process is made up of individual update steps, this method of optimization is not really a replacement for utilitarianism but a larger framework, of which the principle of utility is an element and in which the tenets of utilitarianism play a large role.

There are several considerations when acting in this fashion. First, it is important to try to assess the impact on as many levels as possible. Who will it affect? What will be affected? Will the effects be positive or negative? To what degree? It is vital to act towards the greatest net benefit: to facilitate and optimize as many beneficial calculations as possible that will create a positive overall impact, while terminating, deflecting, or otherwise redirecting harmful calculations that could have a detrimental impact.

Is it possible to fully know the results of a situation prior to acting? No. Rucker and Wolfram both agree that a sufficiently complex computation may be irreducible, and therefore unpredictable: the only way to truly find the result is to run the computation to that point, since there is no shortcut that can compress the calculation. Utilitarians (and optimizationalists) would be comfortable with this objection, because their goal isn’t to predict the future with 100% certainty prior to acting, but simply to act in a manner that is expected to produce the maximum net benefit.
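The idea of irreducibility can be made concrete with a toy contrast (my own illustration – neither rule is named above): the simple cellular automaton rule 90 is reducible, because a closed-form shortcut (Pascal's triangle mod 2) jumps straight to any step, while for a rule like Wolfram's rule 30 no such shortcut is known, so the only way to learn step t is to run all t steps:

```python
from math import comb

def rule90_simulated(t):
    """Run rule 90 (each new cell = left XOR right) step by step
    from a single seed; return the set of 'on' cell positions at time t."""
    on = {0}
    for _ in range(t):
        nxt = set()
        # Only cells adjacent to an 'on' cell can change, so check those.
        for p in {q + d for q in on for d in (-1, 0, 1)}:
            if ((p - 1) in on) != ((p + 1) in on):   # XOR of neighbors
                nxt.add(p)
        on = nxt
    return on

def rule90_shortcut(t):
    """Jump straight to time t: the cell at offset d is on iff C(t, (t+d)/2)
    is odd. This closed form is what makes rule 90 *reducible*."""
    return {d for d in range(-t, t + 1, 2) if comb(t, (t + d) // 2) % 2 == 1}

# The shortcut agrees with the step-by-step run, but skips the steps entirely.
assert rule90_simulated(64) == rule90_shortcut(64)
# For a rule like rule 30, no comparable formula is known: the only way
# to find the state at step t is to compute steps 1 through t.
```

The point is exactly the one made above: for an irreducible computation there is no formula playing the role of rule90_shortcut, so prediction collapses into simulation.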

The nature of unpredictable computations means that a multitude of circumstances can affect a given decision, so optimizationalists wouldn’t focus on one cookie-cutter choice to govern right action. Instead, they would gauge their response based on the specific situation, with the right choice being the one that will produce maximum net benefit for that situation – not just for the moment, but for the long term as well. Different situations might necessitate different responses; this doesn’t make the theory inconsistent, because we are responding to the unpredictable way in which our computational reality is unfolding! Our moral response cannot be a Class II computation that simply repeats itself; it must be a dynamic Class IV thought process that makes the effort to project the best possible long-term outcome, even though it is not possible to do so with certainty.

Lastly, even though the calculation of which action may be the most optimal has to include the ‘big picture’, we must be sure to include ourselves equally in the decision as a factor if we are involved. It’s common to want to abstract ourselves out of the situation so as not to appear biased, but excluding oneself from the decision is an inverse form of the same bias. Every factor in the calculation needs to be included; otherwise the projected outcome, and the subsequent decisions based upon it, may be skewed.

Optimizationalism is an ethical system that is the natural product of the marriage of utilitarianism and universal automatism. Essentially, since HUA is centered around the idea that everything is computation, moral decisions made under this worldview would also need to be some kind of calculation. In order to live in harmony with our fellow calculations (PC and/or NPC), we need to act in a manner that yields the maximum social utility and facilitates the optimum circumstantial conditions to maximize the beneficial state.

Uniting Utilitarianism and Universal Automatism: Mill Meets Wolfram

Here's the next section of my term paper, in which I suggest that Mill's utilitarianism is compatible with universal automatism if we shift our focus from being centered on the individual to being centered on preserving beneficial Class IV computations.

Enjoy:

Towards the end of Rucker’s The Lifebox, the Seashell, and the Soul, he reveals, “The meaning of life is beauty and love.” I couldn’t agree more. Appreciating the beauty and richness of life and fostering compassion and lovingkindness are central to infusing our lives with meaning and purpose.

However, people being as we are, we may share similar goals yet differ in our opinions about the best way to achieve them – at times because we tend to want things to go our way, and the other options don’t always benefit us. We’ve got to reach beyond ourselves a bit more. There’s no room for self-centeredness in love. It’s about everyone, and when we act in a manner that yields the maximum benefit and minimum harm to the greatest number of people, we produce the greatest good for all. This principle is central to the ethical theory of utilitarianism, and it can be adapted to work for universal automatism as well.

Utilitarianism provides a moral calculus for resolving ethical dilemmas – another kind of calculation – predicated on the Greatest Happiness Principle: actions are right in proportion as they tend to promote happiness, and wrong as they tend to produce the reverse. Mill’s theory of utilitarianism holds that attaining pleasure and freedom from pain are the only two things desirable as ends, and that all of our motivations are just permutations of this basic principle. (Mill)

However, from the worldview of HUA, where everything is a calculation, how can this principle best be expressed? What place does happiness have as a calculation, and how is unhappiness to be expressed as one? Are pleasure and pain really the two pillars upon which to rest our moral decisions?

I believe that HUA broadens the scope of utilitarianism beyond the benefit of PCs and NPCs to the benefit of the calculations which comprise them, plus the ones which act upon them. Restricting our decisions strictly to individuals fails here, because there are calculations in operation which act upon us that are not themselves individuals. Take nature, for instance. Damaging the natural world carries repercussions that certainly affect individuals, but rather than focusing solely upon individuals, fostering the larger calculation benefits all.

What do I mean by fostering calculations? Well, Wolfram describes four classes of computation, which Rucker covers in the Lifebox text. Class I computations die out or settle into a fixed state. Class II computations fall into an endlessly repeating loop. Class III computations appear totally random, and Class IV computations exhibit a kind of purposeful randomness – localized structure amid the noise. Based upon this, the phenomenon of our lives appears to have many Class IV qualities about it.

However, it’s not necessarily the case that there is only one Grand Ultimate Calculation; rather, there are infinitely many calculations which comprise everything. Additionally, it’s not always the case that an observed computation will stay in its class. A Class I computation can burst into life when stimulated by an outside input. In similar fashion, a Class II computation can be thrown out of its loop, a Class III computation can be stirred into something purposeful, and a Class IV computation can oscillate between states. All of them can end, though… and it’s this consideration which must be made: the perpetuation of beneficial calculations is key – essentially, acting to preserve that which is beneficial. It seems like wordplay, but there is a difference between the benefit and the benefactor.

Sunday, December 18, 2005

The PC/NPC Dilemma: A Refutation of Solipsism & Egoism

Here's another portion of my term paper, written in a somewhat casual style, in the spirit of my blog. Probably not the best approach for an academic paper, but hopefully it's somewhat easier on the eyes for the reader. Some philosophy is insanely difficult to read.

Anyway, read on:

For all of its merits, philosophy cannot provide concrete proof that ‘you’ exist. This ‘you’ that I refer to is better stated as the ‘other’ – everyone that is ‘not-me’. Empirical observation on my part might reveal others who seem to look and behave similarly to myself on various levels, but because I cannot ever fully experience the ‘other’, I can never have full proof that anyone other than myself exists. This position in philosophy is referred to as solipsism.

This is rather problematic from an ethical standpoint, even for HUA. If I am the only one that exists, then everyone else is just a non-player character, or NPC, in this MMO universe: computations of sufficient sophistication to interact with me to some degree, but not necessarily as sophisticated as myself. It would seem that solipsism could be a byproduct of adopting HUA.

For that matter, if everything is a computation, then it’s equally possible that what I perceive to be myself is just a highly sophisticated computation that is infinitely more gnarly (to borrow Rucker’s term) than the typical NPC – in other words, that I am just another NPC myself! But no, I find myself in the same place as Descartes: cogito ergo sum. It’s highly unlikely that I’m someone else’s NPC – well, unless there is some uber-being in the role of PC (player character) playing in this MUD.

Well, so at least I exist. However, another problem remains: if I am the only one who exists, then why should I act kindly towards others at all? Why not act purely out of self-interest, since everyone else is just an NPC? This doctrine of pure self-interest is the position of the egoist, and it would seem that if HUA leads to solipsism, then solipsism leads to egoism.

The egoist, in terms of MMOs, essentially becomes a sort of ‘grief player’, described by Chek Foo as “a player who derives his/her enjoyment not from playing the game, but from performing actions that detract from the enjoyment of the game by other players.” (Foo) The only stipulation in this case would be that it wouldn’t necessarily be griefing if I were the only player and there were no other players.

However, this still leads to problems. Acting purely out of self-interest, assuming that everyone else is an NPC, leads to the ways of the sociopath. Going this route, I could justify killing anyone I didn’t like, raping any woman I found attractive, and taking anything I desired… all because none of those other people exist, and the only duty I have as an egoist is to myself.

Is this the way for a HUA believer to live? Possibly. Some terrorists, cults, and serial killers/rapists/thieves might buy into this and justify their actions in such a manner…perhaps not to this degree, but in a similar fashion.

However, even if the HUA-solipsist-egoist were correct, it wouldn’t be practical to act in such a fashion, because the theory also holds that the other NPCs are sophisticated enough to emulate my own behavior to the point where I might consider them other people. That sophistication includes the ability to fight back and/or seek recourse for any harmful actions I might engage in… the result of which could be very bad for me. Therefore it is in my best interest to act as if all of the NPCs around me were real people, and probably to consider them individuals like myself – resulting in the abandonment of solipsism and egoism (at least overtly).


More coming soon.

Saturday, December 17, 2005

Course Paper Preview

The following is part of the introduction to my term paper for the course, titled Ethical Considerations of Living as a Hard Universal Automatist:

In adopting any worldview, we integrate a number of fundamental assumptions about the world into the way in which we perceive the ‘big picture’ and our place within it. Our worldview becomes the focus of our belief system – our ethical philosophy, if you will – which is reflected in our behaviors, desires, and motivations. The degree of moderation with which we cling to our worldview is also a factor in our lives, distinguishing the casual from the serious (and, in some cases, the believer from the fanatic), and creating part of the gray area in which our society collectively determines what is typically acceptable and what is generally intolerable.

The worldview of universal automatism, as espoused by Stephen Wolfram, holds that “it is possible to view every process that occurs in nature or elsewhere as a computation.” (Rucker) Wolfram’s words are carefully chosen; however, if we take his definition of universal automatism and move it from the sphere of possibility into the sphere of actuality, we get: “every process that occurs in nature or elsewhere is a computation.” This shift in emphasis draws a line between soft and hard universal automatism, in which a hard universal automatist might consider themselves and the entire universe to be a sort of infinitely large-scale MUD (or MMO) in operation.

Adopting the hard view of universal automatism described above entails a shift in the way we perceive ethical behavior. This paper will focus on some of the ethical aspects of this MUD-styled version of universal automatism (occasionally using an informal tone and MUD terminology for purposes of analogy), paying attention to the considerations which might affect the way we view our lives, and will suggest a modification of Mill’s utilitarian ethics, centered around the concept of optimization, which might best fit the worldview of hard universal automatism, or HUA.


More to come shortly.