I have been thinking about altruism, and how it plays out within organizations. In particular, I’m thinking about the question that sometimes arises when considering altruism: can altruism be selfless, or is it simply a form of selfishness, wherein the individual thinks well of themself for giving and is thus actually being selfish by being giving? Put differently: people question altruism’s emotional component as if enjoying the altruistic act somehow makes the act immoral.
The question seems interesting only because we place moral weight upon selfish versus selfless feelings, and we want to frame the discussion in those morally weighted terms, rather than out of an abstract intellectual interest in understanding the nature of the altruistic impulse. The framing is also narrow because it disallows both things being true: it is possible both to experience pleasure at thinking well of yourself and to give to others from a genuine desire to help them. The thinking in this area should be disentangled, with the moral component removed, in order to adequately understand the concept of altruism; likewise, it should be disentangled in order to thoroughly examine the field’s hidden moral component in the form of the questions it asks.
A functioning community encourages its individuals to be somewhat altruistic toward one another. This is part of being an individual in a community, at its root. In being raised in such a community, and encouraged to develop this behavior, one is encouraged to associate that behavior with pleasure of some sort (the behavior is rewarded). So each member of the community is trained to experience pleasure at performing altruistic acts. Even the question of whether altruism is selfish or selfless, then, seems rather absurd: the value component of altruism is imposed from an outside system, encouraged by that system, and deliberately instilled in the psychological makeup of its individuals in order to cause the community to thrive and survive. Considering altruism as an individual concept thus ignores entirely the fact that it is both an artificial construct and outwardly induced, rather than necessarily being intrinsic to the human psychological makeup.
In organizations, when we speak about emotional intelligence, I believe that what we are really speaking about is a set of emotional traits that includes some degree of fellow-feeling, which may be interpreted to mean altruism. If you consider that fellow-feeling necessitates acting positively toward someone else, then fellow-feeling must be viewed as altruism. So in organizations, just as in other communities, the culture must function to encourage altruistic behavior. A system or culture that lacks this feature is not sustainable over the long term, simply because the cohesiveness and resiliency of the group must surely be determined to some extent by the number and quality of its altruistic members. That is not to argue that an organization consisting solely of selfish individuals could not exist, but that such an organization (I imagine) would seem utterly foreign, and would not necessarily even be navigable, to someone with a functioning set of altruistic impulses.
A problem with accepting the discussion of altruism as having both an intrinsic moral component and an intrinsic motivational component is that the moral component is imposed from outside (Puritans, maybe?) and has driven the discussion in a perverse direction, while the motivational component is confounded both by category errors (biology versus psychology) and by weakness in causal attribution, along with confusion about the biology involved. In particular, when one asks questions about the emotional or sensory aspects of moral judgments, and whether they are necessarily attached to those judgments, one ignores that, firstly, the reward system has been conditioned by the intentional instruction of the community. Secondly, an individual who did not possess such conditioning would likely be unrecognizable as human; at least, within the range of what the majority of society would consider normal, we would not recognize such an individual well enough to predict their behavior as we would other human minds.
I have been thinking about altruism alongside the biological and AI concept of Emergence*, wherein unintelligent components of a system, each applying simple algorithms, collectively evince what appears to be quite rational and logical thought. When we look at something like Emergence in the animal kingdom and say, “This is something that is able to make decisions without any conscious thought,” and we clearly see that in nature, why would we assume that human thought is of any different nature? Why would we assume that human thought is of greater complexity than that which is carried out, seemingly without consciousness, in the animal kingdom? If we fail to consider the biological systems, we are asking questions which do not move the discussion in any meaningful direction. We need to understand ourselves both in terms of biology and in terms of psychology in order to sufficiently understand the motivational components of meta-ethics. If we attempt to explicate these questions without reaching out to adjacent fields, we end up where we are now, failing to honestly consider all aspects of the problem.
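To make the Emergence idea concrete, here is the smallest demo I know of: Conway’s Game of Life. Each cell follows one trivial local rule, yet the “glider” pattern appears to travel purposefully across the grid. (This is just a standard toy illustration of emergence, not a claim about biology or minds.)

```python
from collections import Counter

def step(live):
    """One tick of Conway's Game of Life over a set of live (row, col) cells."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is live next tick if it has exactly 3 live neighbours,
    # or 2 live neighbours and is already live. That's the whole rule.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider. No cell "knows" about movement; each only
# counts its neighbours. Yet after 4 ticks the whole pattern has
# shifted one cell diagonally, as if it were walking.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)
```

The apparent “behavior” of the glider exists only at the level of the pattern, never at the level of any individual cell, which is exactly the point about unintelligent components evincing seemingly directed action.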
To me, though, the larger problem with disentangling the biological and psychological is that of causation: does the altruistic act cause the feeling of pleasure? We would think so, because we believe that we act based upon motivations and that we understand those motivations, despite experimental evidence indicating that the mind simply makes up plausible-sounding stories to explain the world.
In order to disentangle the biology, though, we need a discrete chain of causation in which the biological component doesn’t actually begin the process; otherwise the altruistic act suffers from the same lack of standing it would have had if it arose from a selfish impulse. This isn’t possible, however, because of the way the system functions: it is often quite impossible to disentangle the direction of causation between feelings and biology. For example, feelings of nausea may be caused by a stomach ailment or may be the result of psychological distress; when both conditions are present, whatever the individual offers as explanation, the reality is that we are trusting an interpretive system to make sense of systems to which the individual doesn’t pay adequate attention and of which they may not have adequate understanding. Additionally, we seem to be arguing from insufficient evidence when we consider individual entities and how they feel, and then attempt to generalize from a population making statements about how they feel. All of those entities are undergoing the same requirement to interpret their own biological systems and provide explanations for what they are experiencing; simultaneously, we are trusting that those entities can express themselves coherently and have the same access to language with which to express a particular set or range of emotions, thoughts, and feelings.
The problem becomes infinitely more complex when one considers that these actions and emotions are usually treated as discrete events, clearly strung together, when we know that the biological system contains feedback loops which are themselves sending signals and requesting that actions be taken, all of which come together in a gestalt decision made by the biological collective rather than a decision made by an individual in isolation. And, yes: I did just hint that somewhere there’s an argument that the individual cells of your brain are analogous to a colony of termites. You’re welcome.
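The termite-colony hint can be sketched in a few lines. This is a deliberately crude toy, not a model of neurons: every agent, its bias, and the threshold are invented for illustration. Each agent casts an independent, noisy vote; no single agent knows or controls the outcome, yet the aggregate produces a near-certain “decision.”

```python
import random

def collective_decision(n_agents, bias, threshold=0.5):
    # Each "cell" independently emits a noisy yes/no signal: yes with
    # probability `bias`. The "decision" is nothing more than the
    # fraction of yes-signals crossing a threshold - it exists only
    # at the level of the collective, not in any individual agent.
    votes = sum(random.random() < bias for _ in range(n_agents))
    return votes / n_agents > threshold

random.seed(42)  # reproducible noise for the example
# A slight per-agent tendency becomes an almost deterministic
# collective choice once a thousand noisy signals are pooled.
decides_yes = collective_decision(1000, bias=0.9)
decides_no = collective_decision(1000, bias=0.1)
```

The point of the sketch is the category shift: asking “which agent decided?” is the wrong question, just as asking whether the altruistic feeling or the act came first may be the wrong question for a feedback-laden collective.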
Anyway, this is what’s been rambling through my mind today, distilled down from the ideas which sleeted through the universe, landed in my head, and which I spat out into the phone (thank you, TTS!) for later. I imagine there are some problems in there with the logic. Please point them out; I’m happy to discuss.
*Emergence has its own epistemic and ontological problems, of course; it would seem that these discussions should be broadened from the narrow field of artificial intelligence into the larger field of systems thinking, and applied to systems and organizations as well.