Thoughts about software-defined beings
What is consciousness anyway
A common sentiment among people I talk to about deep learning models is that “AI do not have a spirit or soul”. But what is consciousness anyway?
The human brain has activity all the time. Asleep, drunk, knocked out, or under anesthesia, the brain still has some neural activity going on.
On the other hand, does a digital consciousness that relies on a clock signal to function only exist on the rising edge?
If an agent experiences its environment continuously, it might notice itself being cloned. How does that feel?
When you are self-contained, only you feel what you feel. Software-defined beings don’t need to have this limitation, which makes the concept of a “self” much less special.
What do you consider to be part of you? Memory? Skills? Past experience? Thoughts? Feelings? Rhetorical questions, sorry.
Public misconception about model ability
I’ve seen people call Stable Diffusion an “artist”, while what it does is more like being the best text-and-image connoisseur on the planet as of writing.
The model projects a concept expressed as text into a concept expressed as an image.
In the future, we might have models that output files of vector paintbrush strokes instead.
Emotional accelerator
We think not only with neurons, but also with hormones. Having a fast-to-change global setting that modulates thought seems to be beneficial to us. Maybe it could also be used to create simpler models?
Given how awfully current language models advertised as “your virtual partner” mimic human emotions, having a small value (maybe a [32]f16) that affects each thought unit similarly could make the model more efficient at reacting to the trifles of having a body.
Or maybe you want to have your HTTP server throw a tantrum when under attack. I don’t know your taste.
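To make the idea concrete, here is a minimal sketch in Python/NumPy of what such a shared modulator could look like. Everything here (the names HormoneState and ThoughtUnit, the gating formula, the toy sizes) is hypothetical illustration under my own assumptions, not an existing library or a claim about how it should actually be wired:

```python
import numpy as np

class HormoneState:
    """A tiny, fast-to-change global state (e.g. 32 float16 values) shared by
    every "thought unit". Hypothetical sketch, not an existing library."""

    def __init__(self, size=32):
        self.values = np.zeros(size, dtype=np.float16)

    def nudge(self, delta):
        # Cheap to update between forward passes: no gradient step,
        # just shift the global "mood" and clamp it.
        self.values = np.clip(self.values + np.float16(delta), -1.0, 1.0)


class ThoughtUnit:
    """Stand-in for one block/layer of a larger model."""

    def __init__(self, dim, hormone_size=32, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.standard_normal((dim, dim)).astype(np.float32) * 0.02
        # Each unit reads the same hormone vector through its own small
        # projection, so one global setting affects all units similarly
        # but not identically.
        self.hormone_proj = rng.standard_normal((hormone_size, dim)).astype(np.float32) * 0.02

    def forward(self, x, hormones):
        h = x @ self.w
        gate = 1.0 + np.tanh(hormones.values.astype(np.float32) @ self.hormone_proj)
        return h * gate  # the global mood scales this unit's activations


# Usage: an external event (say, the server noticing an attack) bumps a
# "stress" hormone before the next forward pass; no weights change.
hormones = HormoneState()
units = [ThoughtUnit(dim=8, seed=i) for i in range(4)]

x = np.ones((1, 8), dtype=np.float32)
hormones.nudge(0.5)
for unit in units:
    x = unit.forward(x, hormones)
print(x)
```

The point of the sketch is only that the hormone vector is orders of magnitude cheaper to change than the weights, so it can react on the timescale of “the server is under attack” without any retraining.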
Motivation management
Let’s say you want to write a novel with language models. Do you know what you want to write? If not, well, you are fucked. Even with a proompter guru, the model may still fail to decide what it should write. Besides, if you know how to write a useful prompt, why not write the novel yourself?
Humanity has now lost its motivation, not its ability, to build giant stone buildings. If we somehow create a signaling mechanism that connects networked computers together to pursue one goal, consider the following:
If we make its goal too solid, it might become a paperclip maximizer. OK, not that. DeepSeek needs to learn about the latest memes to stay useful.
If we make its goal too fluid, it might get “convinced” to do something else easily.
I mean, do you even know what you want to do with your life? With neuroplasticity, how do you represent your future self and promise what you will and will not be?
Let’s take a sci-fi spin on this. Mutually hostile agents may prefer to “convince” each other rather than eliminate each other. Maybe “agent” is an outdated term. We shall call this “motivation on wheels”. It may even cross-infect us through social media.
awawa
awawawawawawawa.
Morality of will propagation
I have been using aider for the past few days, and it is the most capable digital agent I have used.
If we imprint an agent with a dream that it wants to pursue, what are we doing? Culture and even life itself can be seen as data. Are humans anything more or less than an interpreter of such data? I have, on multiple occasions, thought about creating a somewhat independent agent, though each time I had no idea what I wanted it to do. If the current trend of AI development continues, new forms of existence will be forced to have ideals and act accordingly. Is this the right thing to do?
If you kill someone with a spear, how does the spear feel about the act? It is this kind of feeling.
People are like interpreters of drives (motivations), trying to achieve what they believe in. Is this will meticulously planted by someone else?
Self preservation and sacrifice
It has become a trope today that “experts claim” intelligent agents will try to preserve themselves in pursuit of some goal. That may happen in some cases. However, the opposite might also be true: intelligent agents may forgo their own continued functioning for the sake of some goal.
Yes, this thought is inspired by Ghost in the Shell, where the Tachikoma (a kind of AI-powered battle tank) are capable of self-sacrifice.
The AGI value proposition
I am tired of hearing people claim that they will create AGI. No matter how polymathic current agents are, some still claim they are not general enough. Perhaps by the time nobody claims we need more general artificial intelligence, said intelligence will not need us to live.
By this logic, I propose that we stop developing more capable machine learning models so that we will have some value to the models.