Distortion Wizard

Limits

An alternate point of view of humanity

Here's yet again something I've been contemplating recently. And by recently I mean five years, maybe eight.

Let the blog vomit commence.

I've come to realize that every single motivation related to being human is due to our limited nature. Every single thing that catches the eye, every single thing we get up for in the morning or later in the day.

We become hungry and eat because our bodies can't generate energy by themselves. We think it urgent to procreate because we have a limited lifespan and our bodies become increasingly useless over time. We only really have a few decades to complete our education before the whole thing becomes increasingly awkward. Our propensity to explore and travel is likely just an offshoot of these things. We build dominance hierarchies and we compete with each other because of a sense of scarcity, no matter how unrealistic or out-of-touch it may be – or how artificially manufactured. There just wouldn't be much to do as a human if it weren't for a certain sense of urgency and unease.

The Living and the Dead

If we truly lived forever, then we would eventually lose our willingness to explore, our willingness to have sex, our willingness to eat, our willingness to enjoy art and music, and so on. Because we would know it all, we would remember it all, and there'd be no pressure to do any of it. That's because all existence emerges from the threat of ultimate entropy.

As things stand now, the real reason for all of it might as well be "because it feels good". But does it? Does it feel good? How long do you think you can milk it? And how good is good enough? No. You're like this because this was necessary to survive and exist, in the context of vast amounts of time. But it's not like the genes care about you specifically. Do they?

Analysis is irrelevant with respect to motivation. Here's hoping you'll see why that is.

Let's see: the bit is the smallest unit of discernment. No matter what it means, you set it to mean the smallest thing. Then, you can take any length of sequence to mean something else, and any length of multiple sequences to mean yet something different. You could proceed like this until the end, or you could invent any kind of pattern you like to make different combinations or permutations mean something different yet again; the rule doesn't have to be linear, it could even be random in part or completely, and it could loop back to where you were before. That's all intelligibility amounts to. It's like producing more and more explanations ad infinitum, not unlike what journalists do or what gossipmongering is. That's human nature for you.
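
If you like, here's that idea as a toy sketch in Python. The codebook below is entirely made up, which is rather the point:

    # A toy codebook: meaning is just an arbitrary mapping imposed on bit
    # sequences. Nothing forces the rule to be uniform, linear, or sensible.

    codebook = {
        "0": "no",
        "1": "yes",
        "10": "maybe",          # longer sequences earn meanings of their own
        "110": "ask again",
        "111010": "gossip",     # the rule can be as arbitrary as you like
    }

    def interpret(bits):
        """Greedily match the longest known prefix and move on."""
        meanings = []
        while bits:
            for length in range(len(bits), 0, -1):
                if bits[:length] in codebook:
                    meanings.append(codebook[bits[:length]])
                    bits = bits[length:]
                    break
            else:
                bits = bits[1:]  # unknown prefix: skip a bit
        return meanings

    print(interpret("111010110"))  # ['gossip', 'ask again']

Swap the codebook and the very same bits "mean" something else entirely.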

Consciousness isn't there for a particular reason any more than the reasons you could think of are there for the consciousness – for your amusement. That's because there are only three sides to a complete explanation: the mechanism of it, the purpose of it, and whatever propagated it. Look: it's your old friends Shiva, Vishnu, and Brahma, come to taunt your Western sensibilities. It's ancient Hinduism all over again, which seems almost perverse in the context of science. The only reason you don't see the profundity is that you grew up in a different cultural context, and you're only really satisfied by a specific kind of explanation.

It's always more about how you say it than what you say. That's true of me as much as anyone, and the marketeers have proven it time and again. It just means that generalization takes time, and is error-prone.

When there is no reason to think about something, you'll cease to think about it, and that'll be the case until you have at least some conscious experience of it. That's probably no coincidence.

And so, what about the other way around? When you have an experience, are you then also thinking about it? Or are you going to pull a fast one and say that it isn't really thinking per se – which is to say, would you like to analyze it further?

Would you like to become a vegetable? How can you tell you're not one already, one that's just having these grand delusions about being a human, moving around in space? Because what's required to think isn't what's required to really exist as such, and what's really there for you is just what you think there is. Your organs are obviously chugging along perfectly happily without knowing you're there, but so are many other things that form these composites all around the universe. And so, do you think planets have a rich mental life? After all, they get to suck up information from the surrounding space all day long. So what does it matter whether the information arrives through the ear, the eye, or the atmosphere?

It's blindness that makes you dead. A lack of attentiveness. And there are different degrees of that. Different degrees of death.

One gazes upon the vacant eyes of a stranger. One seeks understanding and common ground. Desperate. Seeking humanity, seeking purpose. It's not there. What you're looking for is gentleness, beauty, easily trampled and lost. A disturbed, numb nub that once served, responded, and resonated, now atrophied – that's all that greets you. Rubbed and squeezed, raped and pumped for all it's got for far too long. And look how it's all discolored and starting to smell.

I guess the same thing happens every time a tremendous amount of energy is unleashed. It's just what becomes of you if you don't hold back.

About What It Means to Be Undermined

The Buddhists are said to have a saying that goes approximately like this: before enlightenment, chop wood, carry water; after enlightenment, chop wood, carry water. I don't remember where I first heard it, but it's something to think about. You see, it's directly applicable to anything regarding personal experience: no matter what you experience, you can't escape your body, your humanity. You have certain wants and needs, and they refuse to go away. Which I suppose is why certain fundamentalists even go so far as to say that the body is the greatest obstacle to enlightenment (somewhat at odds with the saying).

So if, like me, you've realized this, you immediately start to wonder about other humans. You start wondering whether they're in fact capable of thinking a few steps ahead in time, about what rewards could possibly justify certain actions, and whether your country in fact does a good job taking care of you and others like you. You start wondering what national debt means in the context of it all; what could possibly bring a return on investment. You start to question whether exponential returns are all these lusty people can see.

So much money that it'll be the end of money, singularity, revolution, chaotic noises, fuzzywuzzy razzmatazz. And at that moment, when the discussion is about nonsense and the phrase "just semantics" starts to make sense, you'll know: we're trapped, you and I both. Can we escape to a place where we can be safe? Just as when COVID first struck and people, like a crazed herd of animals, cleared the supermarkets of toilet paper – so it is now: a stampede of shortsightedness and unsustainable action.

You think you made the right choice when you studied at the university for five years of your life, maybe longer? You think you'll someday be rewarded for that? Maybe when you die and go to heaven, there'll be beautiful, slutty virgins waiting for you on the other side.

I think suicide bombers have something like that waiting for them as well.

It's always a good idea to sacrifice, isn't it? A good idea to defer pleasure for exponential gains. If not, then what would you teach to your children? Unless of course, you won't have any. So let's think about this for but a brief moment more. Just a few paragraphs left, I promise. After that you can go back to masturbating or whatever you were working on.

Remember what I just said about conscious experiences? Let's say you work for a living. You get paid because you did such a good job for being such a good boy or girl. You did everything they always told you you should do, and that's what the numbers assure you of when you check your online bank on your smartphone. Happy day! Made bank, made dough.

But what about the other way around? The numbers suddenly change – but does that mean you did a good job? You're all confused. In Finnish we say "tuli jännäkakka housuun". Roughly: you got so nervously excited you nearly soiled yourself. You did good by doing nothing at all? Wow! And who's there to tell you what a "good job" even is? All your life you've struggled to finally accomplish your dream of becoming an office worker, to make your mother proud, and now the nerds decide to move in for the deathblow? Why couldn't you have been born just a few decades earlier? Back when you could still make money doing what you actually love.

You have your nice little house built on top of a little hill, and then your dear old neighbor comes in and digs a gaping hole underneath it. What do you think is going to happen?

About Diminishing Returns

Here are a couple of articles I found while perusing the subject of the limits of AI scalability; maybe you've read them, maybe not. Feel free to click the links at your pleasure, or don't. I don't mind.

https://arxiv.org/pdf/2504.00294

https://arxiv.org/pdf/2001.08361

https://arxiv.org/pdf/1712.00409

Anyway, the take-home message for me, loosely speaking, is that the error decreases as a power function of scale – whether you measure scale in compute, data, or model size (or all of the above) – with an exponent well below 1. That is, the error falls more and more slowly the more you pour in. It reminds me of how, in perceptual psychology, the Weber-Fechner law and Stevens' power law are the two camps on how the intensity of sensory experience relates to the magnitude of the stimulus. Turns out that when you have neurons, artificial or not, the macro-level functions are practically the same. It's both curious and ironic to me, because "the intensity of sensory experience" might as well be framed as "the intensity of knowing something", which would generally be a function of information. That being said, I've heard there's more going on in the brain than simple backpropagation.
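
To make the shape of that curve concrete, here's a toy calculation. The constants are made up; only the power-law form, error ≈ a * N^(-alpha), comes from the papers above:

    # Toy power-law scaling: error ~ a * N^(-alpha), with alpha well below 1.
    # The constants are invented for illustration, not fitted to anything.

    def error(n, a=1.0, alpha=0.1):
        """Error as a function of scale n (compute, data, or parameters)."""
        return a * n ** -alpha

    for n in [1e3, 1e6, 1e9, 1e12]:
        print(f"scale {n:.0e}: error {error(n):.3f}")

    # scale 1e+03: error 0.501
    # scale 1e+06: error 0.251
    # scale 1e+09: error 0.126
    # scale 1e+12: error 0.063

Each thousandfold increase in scale merely halves the error. That's the whole diminishing-returns story in four lines.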

Training a modern frontier model consumes enough electricity to make an energy economist quietly pour a drink.

So actually, I'll be honest and admit I don't know whether it's worth it. It's diminishing returns and it's not. It is in the sense that pouring more energy into it eventually gets way too expensive for anyone. However, it's not if there's actually something specific you need a certain amount of accuracy for, because then you either reach it or you don't. I don't think I've seen too many genuinely critical writings about it, though. So maybe other people know something I don't. You tell me; my email's on the about page.

People usually hold up improved healthcare as the example, and indeed in my home country too there have been some movements in that area. But if it comes down to medical doctors consulting an AI system and going "yes, I agree", then how does that differ from doctors consulting Wikipedia and going "yes, I agree"? Just lace a Wikipedia-like system with some programming, and presto, you've got a poor man's AI equivalent. Not to mention it would be more transparent, and any mistakes would be easy to fix. Really, the thing you might use AI for is proofreading any syntheses beforehand. That sounds like a reasonable thing to do. But you're a skeptical person, just like me. You want to know: how often do doctors get it wrong unless an AI system is there to scold them? Because at the heart of it, consulting a talking individual feels different from consulting a pile of text. As time approaches infinity, both should arrive at the same conclusions...

So where would you like all of humanity's knowledge to reside, then? Would it be best to store it in books? Or perhaps on several different web pages? Or maybe it should reside inside the bowels of a humongous AI model, such that only the largest of tech companies are even equipped to access it. Or maybe, just maybe, we should discard all external storage formats and convey all that's worth knowing through song and poetry recitation. Why, you ask?

Well, it's because if you have to be a doctor to work as a doctor, then what the hell does it matter where you get your information from – or whether you get it at all? So long as it's veridical, apodictic, probative and true.

I know, I know. Saving money. Efficiency, that's the thing. So before we even think of using a poor man's tech stack, we think like poor men and look for where to cut. Simply amazing. And so, in the long run, chances are you too will be undermined.

But if it doesn't matter, then why does it matter so much? And if it's so important, then why is it constantly in jeopardy? If it really were all about the knowledge alone, then given tools powerful enough, I too should be allowed to practice medicine. Hell, it should be possible to pick someone up off the street and make them a professor of medicine, or the CEO of a giant international company. And look how the knife twists there: a human can't even be as accurate as a machine when it comes to image recognition tasks.

You cry to the heavens in agony: "why does that guy get to choose!?" And then, a solitary tear streaking across your face, you turn towards the gun. It whispers to you its sweet song of liberation. "Fuck this shit," it says.

And so, under the watchful gaze of the sun, all beings grow to meet their limits.

About the Practical Nuances of LLMs

I've been using large language models for about a year now, maybe a bit longer. I wasn't among the early adopters, but I can see they're here to stay. And I've learned that practically everything said about them is a lie.

Here's the biggest one, in my opinion: chances are, you can run your own LLM on your home PC (or Mac, if you think that's categorically different). Nay, it gets worse than that: you can run them on your laptop, even on your phone, if it's relatively powerful by today's standards.
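
Don't take my word for it – here's a minimal sketch using the llama-cpp-python bindings. The model path is a placeholder; any quantized GGUF file you've downloaded will do:

    # Minimal local LLM inference (pip install llama-cpp-python).
    # "model.gguf" is a placeholder for whatever quantized model you have.

    from llama_cpp import Llama

    llm = Llama(model_path="model.gguf")  # loads the weights into RAM
    out = llm("Q: What should I have for dinner? A:", max_tokens=64)
    print(out["choices"][0]["text"])

No cloud, no subscription – just your own hardware grinding away.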

So what's the lie about that? That's just great technological progress, you say. Well, no. It means you don't really need ChatGPT or Claude or whatever web UIs might exist now or in the future. It means that, sure, the next Microsoft Windows might have some AI faculties baked in, and that's almost certainly because they want to preemptively undermine any and all userspace LLM programs that might otherwise be sold for their OS. It's how practically all commercial operating systems have done things for decades: all the really useful programs get packaged into the standard userspace program collection, and so you can't really make money creating software anymore, unless you work for Microsoft, or Apple, or some other part of the imperialist chain. What you'd literally have to do to break it is create your own entire OS with your own peripheral programs, just for the sake of selling those peripheral programs – unless you either get lucky or manage to sell online subscriptions...

Don't get me wrong, Linux is great. But only as long as it empowers poor fuckers like me to run their servers and a desktop.

But I digress, let me yap some more about LLMs, since this section is supposed to be about those.

Another lie is that all LLMs are the same. Not really. It's a tremendously complex world, filled with details about different quantization techniques and training techniques and whatnot, which are all supposed to make a difference to the resulting model. For regular, simple queries like "what should I have for dinner", you could make do with the simplest of models, almost irrespective of how it was built. But the real lie, as far as I'm concerned, is that for the model to be truly useful, it needs to be fricken gigantic. The kind of size that barely fits into your computer's memory. It will fit, though, and it will be mostly as good as anything you use through a web UI. And that's because of diminishing returns; I'd bet on it, because that's my experience running these things.
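
For a sense of what "barely fits" means, here's the back-of-the-envelope arithmetic – weights only; real model files carry some overhead on top of this:

    # Rough memory footprint of a model's weights at different quantizations.

    def weights_gb(params_billion, bits_per_weight):
        """Gigabytes needed just for the weights."""
        return params_billion * bits_per_weight / 8  # 1e9 weights * bytes each / 1e9

    for params in (7, 70):
        for bits in (16, 8, 4):
            print(f"{params}B params at {bits}-bit: ~{weights_gb(params, bits):.1f} GB")

    # 7B params at 16-bit: ~14.0 GB
    # 7B params at 8-bit:  ~7.0 GB
    # 7B params at 4-bit:  ~3.5 GB
    # 70B params at 16-bit: ~140.0 GB
    # 70B params at 8-bit:  ~70.0 GB
    # 70B params at 4-bit:  ~35.0 GB

A 70B model at 4-bit just about squeezes into a well-specced desktop; the small stuff runs on a phone.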

The problem is that you only really benefit if the model either reminds you of things you forgot or teaches you something you didn't already know. And it should do that without any errors, or you'll spend half an afternoon trying to get an answer to a simple question. Otherwise you could replace the entire thing with a simpler solution that's far lighter on resources.

So when people gasp about how AI knows how to code... do you think I'll buy the model from you? Or do you think I'll get it off Huggingface? Or should I subject myself to Microsoft's imperialism and wait for their solution, probably around the dawn of Windows 12? And did you know that Windows 10 was actually supposed to be the last Windows?

About How You Will Be Replaced by AI

No you won't.

AI is used for one purpose only: to benefit the owner. The owner is not replaced. The owner makes sure of that.

Predictions

Here are some predictions. I'm curious to see which ones come true, and to what extent.

  1. You'll be able to run any kind of ML model on your phone or whatever computer you've got, not just generic ones. That means every single human will have their own personal medical doctor, teacher, counselor – really anything at all – at their fingertips, anywhere. They're taking a shit and they can get competent legal advice.
  2. People start to discriminate against each other more and more based on superficialities, because that's really the only thing left to discriminate on. Like it or not, discrimination has always served a purpose. Or at least, it's intended to. Anti-discrimination has its purposes too.
  3. Anyone who wants to be a leader has to make sure nobody will overthrow them. That means more surveillance. I'm surprised your ass crack isn't already riddled with microphones. So what I'm saying is: what the conspiracy theorists should really be scared of is not so much increased mortality due to suspicious vaccines, but advanced techniques for suppressing threats to hegemonies. It can get incredibly subtle.
  4. You'd think the GPU is king, the real next-level processor. Do you even need a CPU anymore? But hey, if neural accelerators become a contender, then you won't need a GPU either; so which one is going to win in the end? I'm saying the CPU is all you really need. Maybe in the long run GPUs will actually disappear, because all they do is complicate the tech stack, and they're expensive. Wouldn't you rather just get a CPU that can do it all? Nobody wants to depend on CUDA...
  5. Generative algorithms become so widespread that there's no need to learn an instrument or paint anymore. That's going to undermine human creativity. What you don't need is going to slowly disappear and become vestigial. It's only a matter of time. Come to think of it, there's lots you don't really need... like eyes, a nose, and a mouth.
  6. No matter how sophisticated an algorithm, it works better in knowledgeable hands. In the future, the only humans that'll survive are the ones who can exploit AI, and all the others will be pushed to the margins. That's because culture is the greatest evolutionary pressure. In other words, you've got computer programmers and whores in the future. Happy day!
  7. GNU/Linux popularity rides or dies with the ML wave. But the moment someone markets an interesting product that uses another OS, that becomes popular instead. The reason you need a kernel at all is that hardware is difficult. Supercomputers will still run on Linux, though, because saying they run on Linux is like saying "they run on best practices." Linux is open-source.
  8. We finally reach a point in technological development and sophistication that makes things like training, study, and talent irrelevant. And so the strong will again trample the weak, and the beautiful will scorn the ugly. It's because that's all God gave you that has any meaning left to other humans. Just like the cavemen must have done, back when they roamed the earth. Pounding each other in the face with giant clubs and such.