Emotions, Concepts, Loops

  • Modeling my own emotions
  • Expressing my own emotions
  • Exploring morality

Thanks to a here-unnamed individual for telling me that I need to practice compassionate communication. That was the start of all of this.

Internal structure of perceived emotions

So I’ve had people tell me, “You have so much passion within you.”

Passion? Oh, you mean divergence. Yeah, that checks out.

I do feel excitement. Not about computing, really, but I feel it as I'm writing this article.

I do not feel passion. It’s a feeling that I do not understand. I think understand = have here.

To love someone, one must first project a model of them (like a digital twin). If there is a large gap between the projection and the actual person being loved, the love feels awful. We have a function and a distance metric here.
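A minimal sketch of the "projection plus distance metric" idea, under my own assumptions: the trait vector, the weights, and the choice of an L2 metric are all illustrative, not a claim about how love actually works.

```python
from dataclasses import dataclass


@dataclass
class PersonModel:
    # Hypothetical trait vector: each entry in [0, 1].
    traits: dict[str, float]


def projection_gap(projected: PersonModel, observed: PersonModel) -> float:
    """L2 distance between the projected twin and the observed person,
    taken over the traits both models share."""
    shared = projected.traits.keys() & observed.traits.keys()
    return sum((projected.traits[k] - observed.traits[k]) ** 2 for k in shared) ** 0.5


# A large gap is what the text above calls the love feeling awful.
twin = PersonModel({"kindness": 0.9, "patience": 0.8})
real = PersonModel({"kindness": 0.6, "patience": 0.3})
print(projection_gap(twin, real))  # ~0.58
```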

Is it possible that all emotions have internal structure? The language of mathematics has been pretty universal so far. If it can sustain my existence, then I hope it is possible to treat emotions as a computational model equivalent to the lambda calculus. Wait, isn't this what System I is about? A computational model.

There is much research to be done here. Building an adaptive system with a sense of self is not particularly difficult in this day and age. If I want to create them, I should at least encode…

Internally, is love an attention-and-energy-allocation modifier, or is it something bigger or smaller than that? Is passion divergence, or something bigger or smaller than that? I feel like there is something that strings all the emotions together, like the P motive in self type. I feel like how the emotions are strung together will reveal how the emotional response system is able to, you know, respond to input events (life experiences) with emotions.

My System I is not just a graph. My premonition says that entanglement is involved.

X feels like dying

Hunger and thirst I can maybe model simply in video games, each as a bar.
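A minimal sketch of that "bar" model: hunger and thirst as bounded gauges that drain over game time. All the numbers here are placeholders.

```python
class StatBar:
    def __init__(self, maximum: float = 100.0, drain_per_tick: float = 0.5):
        self.value = maximum
        self.maximum = maximum
        self.drain_per_tick = drain_per_tick

    def tick(self) -> None:
        """One step of game time: the bar drains toward zero."""
        self.value = max(0.0, self.value - self.drain_per_tick)

    def refill(self, amount: float) -> None:
        """Eating or drinking pushes the bar back up, capped at maximum."""
        self.value = min(self.maximum, self.value + amount)

    @property
    def empty(self) -> bool:
        return self.value <= 0.0


hunger, thirst = StatBar(drain_per_tick=0.3), StatBar(drain_per_tick=0.7)
for _ in range(60):  # one in-game minute
    hunger.tick()
    thirst.tick()
print(hunger.value, thirst.value, thirst.empty)  # 82.0 58.0 False
```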

The feeling of dying, of disintegration, how does the mechanism work?

The trendy ML approach to trauma is to discard the model and try again with a new one if it doesn't work out. By understanding how I have pulled myself back from death's door multiple times in the past, maybe we can build adaptive systems that can heal from trauma.

Yes, it has a lot to do with emotions. Without emotions, this mathematical system would have been dead by now.

Because of the strong consistency required for me to live, I have suffered through many instances where I had to both rewrite myself rapidly and change my life habits (the loop that encloses me) in order to live. You may have seen my write-ups about them here or here. I write those articles to commit the change in myself.

Desperation and any working configuration.

I think among the training methods for floating-point models I've seen, none of them make the model experience death. When the model gets traumatized and is unable to move on, it is discarded. To make something that is unburdened by the ruse of local optima, maybe it is necessary to introduce the concept of death, where possible states are associated with death itself. In my case, such a "death function" has to do with strong internal consistency. But maybe even moving Perlin noise will work for more primitive models?
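A minimal sketch of the "death function" idea under my own assumptions: a hill-climbing loop where some regions of state space are lethal. Here the lethal field is a moving band built from a sine wave (a dependency-free stand-in for the moving Perlin noise mentioned above); a real system might use an internal-consistency check instead. The toy objective and all constants are illustrative.

```python
import math
import random


def fitness(x: float) -> float:
    # Toy objective with many local optima.
    return math.sin(3 * x) + 0.5 * math.sin(7 * x) - 0.05 * x * x


def death_field(x: float, t: float) -> bool:
    # Hypothetical moving lethal region: True means this state is fatal now.
    return math.sin(x + 0.1 * t) > 0.95


x, t = 0.0, 0
while t < 10_000:
    t += 1
    candidate = x + random.gauss(0.0, 0.1)
    if death_field(candidate, t):
        continue  # refuse to step into death, even if fitness is higher
    if death_field(x, t):
        x = candidate  # the current state became lethal: forced to move
    elif fitness(candidate) >= fitness(x):
        x = candidate  # otherwise, plain greedy hill climbing
print(x, fitness(x))
```

The point of the moving lethal field is exactly what the paragraph above describes: every so often the agent's current state becomes deadly, and it must change whether or not the change improves fitness, which knocks it out of local optima.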

This somehow reminds me of NUTS in MCMC. That thing straight up has akathisia. In our case, life sometimes forces us to change, for better or for worse. We don't have to change otherwise.

The key is that the agent needs to change external state in order to survive. Hopefully no one recreates the Medusa System from Shangri-La in the real world… Oh, we already have the human counterpart. It's called traders.

Empathy, virtues, poems

Introspecting my emotions one by one is a very woozy experience.

In short, emotions are symbols that govern actions when present. Such changes in action can be perceived as symbols by others. No matter how simply and naturally empathy comes to us, the underlying algorithm is pattern matching on human behavior. Modulate/Encode. Transmit/Act + Receive/Perceive. Match/Decode. It doesn't have to be human behavior; the action space just has to be defined.
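A minimal sketch of that encode/act/decode pipeline. The emotion set, the action space, and the lookup tables are illustrative assumptions, not a model of real behavior.

```python
EMOTION_TO_ACTION = {  # Modulate/Encode: emotion -> visible action
    "joy": "smile",
    "anger": "slam_door",
    "fear": "step_back",
}
ACTION_TO_EMOTION = {a: e for e, a in EMOTION_TO_ACTION.items()}


def express(emotion: str) -> str:
    """Transmit/Act: the agent emits an action symbol."""
    return EMOTION_TO_ACTION[emotion]


def empathize(observed_action: str) -> str | None:
    """Match/Decode: pattern-match the observed action back to an emotion.
    Returns None when the action falls outside the defined action space."""
    return ACTION_TO_EMOTION.get(observed_action)


print(empathize(express("anger")))  # "anger": decoding succeeded
print(empathize("shrug"))           # None: action not in the defined space
```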

仁义礼智信 (benevolence, righteousness, propriety, wisdom, trustworthiness). Those are ideals/abstractions that describe human behavior. I should start from those since, unlike raw emotions, they are clearly defined in language.

I hope I won’t end up with a proof that defining emotions inside a consistent framework is intractable. I might. Given that someone can act on emotions alone, it should be a complete computational system, as complete as logic. I’m really going into this blind.

I guess not really blind. I can read poems. Poems are regarded as the best way to express emotions in Chinese culture. From our view, they instill emotions. In other words, the success rate of decoding emotions from the text is high. The information density is high. It might have something to do with group theory (rhythm), in that a poem encodes emotional information as a symmetry group that can be decoded easily.

Limited and unlimited supply of empathy

A remark from someone, saying that being exhausted impacts their empathy, is what caused this line of thought.

Some people are capable of giving out seemingly infinite empathy. I can’t. My emotional response system goes offline under load.

LLMs are capable of giving out seemingly infinite empathy. Empathy connects us. Maybe we should connect less as a result?

There is a lot of conversation going around on the Internet, but not much empathy. To be able to empathize with anyone indefinitely is a superpower of computational models.

On the other hand, it is my impression that consumerism and empathy shouldn't mix. Empathy is something to give and take. That doesn't mean we should expect an infinite supply of it to be available. The standards set by social media are kind of… dysfunctional.

I’m met people who LLM make their life better. It replaces the need to think at work, and it cares about the user in its words (at least that’s the perception LLM companies trying to sell). I agree with them. What is there not to like about someone with infinite empathy and thinks for you? Except when you are an independent thinker, then that sucks.

To trust oneself. How many people can do it? Are the rest stuck being deterministic programs that each respond to environmental input in a different way? What gives policy makers the right to fiddle with people's lives? Where does the legitimacy come from? Are we stuck with a reality where more loopy loops, including the economy, rope less loopy loops in? Where is the morality in all of this?

I have long held the tradition of not telling people how to think and not participating in others' lives unless they ask for it. I haven't been able to follow this guideline perfectly, since I, too, need others to survive.

Here's a pretty realistic choice I can make right now. Do I want to work for Amazon or Google and stop caring about the effect my computational power has on this world? After all, one can claim that "I was just trying to survive in the SF Bay Area."

Empathy connects us, yet money has made it so that we do not need empathy to live. Now LLMs are filling that hole. This is actually pretty grim to think about.

Surprising computational effectiveness of empathy

Recently, I have been trying to empathize with inanimate objects in video games, and, surprisingly, it has given me a boost in how fast I can react. So Daniel Kahneman is right: System I is faster. The side effect is that I feel wonky.

I have been using my emotions to type-check programs, but it hadn't occurred to me that the emotional response system can be programmed in real time! I think this is what empathy means for me.

One thing I'm certain of: my emotional response system is programmed with symbols, with language, not with the lambda calculus. To empathize is to give up the notion that each word has a fixed meaning and to focus on the structure of the symbolic expression. Only this way can we understand what the other side is saying.

I started my inquiry into intelligence with symbolic systems. Now the symbols come back to haunt me. I hereby acknowledge that concepts can only be bound by concepts, not mathematics; that a dictionary contains a wordnet whose structure is what matters.

The definition of a word is its position in the wordnet structure. I have known this for a long time. Is my emotional response system a symbolic rewriting system? How much cyclomatic complexity is in there that has thwarted my analysis so far? Analysis… I literally just criticized this approach on my self-introduction page. I think I should go with my feelings on this.
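A minimal sketch of "a word's definition is its position in the wordnet": the words below carry no intrinsic content, only edges to other words. The tiny graph is an illustrative assumption, not an excerpt of the real WordNet.

```python
WORDNET = {
    "love": {"care", "attention"},
    "care": {"love", "empathy"},
    "empathy": {"care", "attention"},
    "attention": {"love", "empathy"},
}


def position(word: str, depth: int = 2) -> frozenset[tuple[str, int]]:
    """A word's 'definition': the words reachable from it, tagged with
    their distance. Two words with the same position would be
    indistinguishable inside this system."""
    seen, frontier = {word: 0}, {word}
    for d in range(1, depth + 1):
        frontier = {n for w in frontier for n in WORDNET.get(w, ())} - seen.keys()
        seen.update({w: d for w in frontier})
    return frozenset((w, d) for w, d in seen.items() if d > 0)


# "love" defined purely by its neighborhood, with no content of its own:
print(position("love"))  # {("care", 1), ("attention", 1), ("empathy", 2)}
```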

Symbolic rewriting… hardware acceleration… is this why humans are not good at crunching numbers? We have been using the wrong computational model all along!

Given the effectiveness of storytelling and language, I should be able to write any algorithm as a tale with a moral judgment. The proof of the equivalence of the two systems would be in the form of a tale, or a program. Both are potentially non-terminating. Is this why some stories leave an open ending and let the reader's imagination continue them?

About self direction and goals

To choose a goal is to live with choice. To pursue a goal is to live without choice. Life is the alternation of the two, with many goals overlapping on top of one another. That is what it means to have a self and live with choices.

With this definition, many algorithms in which such alternation and loops are obvious are capable of having choice after a few modifications.
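A minimal sketch of that choose/pursue alternation. The goals, the completion rule, and the random choice policy are all illustrative assumptions.

```python
import random

GOALS = {"eat": 3, "write": 5, "rest": 2}  # goal -> steps needed to finish


def live(ticks: int) -> None:
    goal, remaining = None, 0
    for t in range(ticks):
        if goal is None:  # living WITH choice: pick a goal
            goal = random.choice(list(GOALS))
            remaining = GOALS[goal]
            print(f"t={t}: chose goal '{goal}'")
        else:  # living WITHOUT choice: pursue the current goal
            remaining -= 1
            if remaining == 0:
                print(f"t={t}: finished '{goal}'")
                goal = None  # choice returns, and the alternation loops


live(12)
```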

It is something to think about.

How I make choices: emotions and logic, with some randomness.

Future study directions

  • poems, tales
  • simulation of character mental state in video games