+++
title = "Are the LLMs worth it?"
author = ["Michał Sapka"]
date = 2024-06-30T20:06:00+02:00
categories = ["blog"]
draft = false
weight = 2002
abstract = "What is the cost and what is the value here?"
+++

Recently, half of my waking time seems to be spent actively ignoring the current AI boom.
Every investor, every product, even some open source projects - all are part of the biggest craze I've ever witnessed.
My skepticism about all this _crap_ doesn't come from being against the idea behind it.
I love AI, but LLMs are different.

How do you measure whether something is worth it?
I guess it comes down to something akin to a simple inequality:

```
c <= v * N
```

Where:

- `c` - cost
- `v` - value
- `N` - your personal acceptance factor

So, a product is worth the cost if it provides enough value, weighted by how much you are willing to accept.
What is the cost?
It's hard to tell.
What is value?
It's even harder to tell.
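
To make the comparison concrete, here's a minimal sketch of that check in Python (the function name and types are mine, purely illustrative):

```python
def is_worth_it(cost: float, value: float, acceptance: float) -> bool:
    """A product is worth it when its value, scaled by your
    personal acceptance factor, covers its cost."""
    return value * acceptance >= cost
```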

When it comes to LLMs, the cost is _huge_.
It may very well be the most costly tech we will see in our lives.
Yet it costs very little when we limit ourselves to _monetary value_.
It's a few bucks here, a few bucks there.
Anyone into subscriptions most likely already has bigger sinkholes.

But that approach is severely shortsighted, reducing the cost to its bare minimum.
The actual cost of the technology reaches much further.

Firstly, the development.
VCs all around the globe can't throw their money fast enough.
It seems that just adding "AI" to a shampoo gives it a significant chance of a hefty Series A.
So much of the current world economy is based on the idea that one day all that investment will pay off.
Just look at the stock value of Nvidia!
It seems that just yesterday they were known primarily for overpriced graphics cards for gamers.
But they got lucky: their products powered the last two big crazes - first crypto, now LLMs.

The second aspect of this part of the life cycle is the manpower.
Just how many engineers are now tasked with integrating every imaginable system with ChatGPT?
And how many product managers are forced to come up with an idea of what that integration is supposed to do?
Those products are, almost always, made _worse_ in the process.
Even something as simple as Expensify became unusable when they turned their product into a chatbot.

And then comes the running cost that will be paid by our children.
The environmental impact is unimaginable.
We're producing huge amounts of air pollution, using water and rare minerals like there's no tomorrow.
For the last century or so we abused our planet, but then we decided that enough is enough.
Time to think of survival and become _green_.
All of this is now thrown out the window.
No survival of the human race will stop Altman from becoming a multi-billionaire!

Then there are the social costs.
We're just starting to see them, but it's already dreadful.
LLM companies are treating all human creation as a free buffet, ignoring every norm we had before.
The web as we grew to know and love it may soon come to an end.
Sometimes even I have a hard time justifying the existence of this site.
I have a lot of fun and do something here almost daily.
But in reality I'm making the _assholes_ richer, as they will harvest everything here and put it into their huge data models.

And, of course, people are starting to lose their jobs.
Not that many yet, as it takes a crazy enough CEO, but it's starting.
And it will only get worse, as more and more products _promise_ automation.
Most likely the layoffs will happen before the promise is fulfilled, but that's beside the point.

The costs are huge, but what are the benefits?
I've tried to read what all the proponents say, and the responses I've found differ very little:
some small auto-generation here and there.
Nothing grand, a few dozen minutes of manual labor saved each time - unless we're talking about image or multimedia generation.
You may even get what you want, assuming you've already got some experience in prompting.
Mind you, when an LLM works, it works like magic.
Most of the time, however, it doesn't.
You get a result, you try to convince the machine that something is wrong, you get another result, you find something else wrong, and so on.
Up until it's ready for manual fixes.

Looking at this, we've got a huge `c` and a very small `v`.
If you say that this is worth it, your `N` needs to be huge.
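
Plugging made-up magnitudes into the earlier `is_worth_it` sketch (the numbers are illustrative, not measurements) shows the shape of the problem:

```python
# Huge cost, tiny value: the check only passes once the
# acceptance factor N becomes absurdly large.
print(is_worth_it(cost=1_000_000, value=10, acceptance=1))        # False
print(is_worth_it(cost=1_000_000, value=10, acceptance=100_000))  # True
```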

But there's this notion that we're just at the start.
We've got the first version, and there will be huge breakthroughs soon.
Either the cost needs to decrease, or the value needs to skyrocket.
Both are possible.
There's enough money being thrown around for there to be more people working on those problems than there are scientists working on curing cancer.
Will they achieve anything of value?
We can _guess_, but guessing will not change the fact that the current [bullshit machines](https://link.springer.com/article/10.1007/s10676-024-09775-5) are **not** worth it.

But let's be realistic here: betting on a _scientific breakthrough_ is not something a sane mind should do.
Especially not a data-driven one!
Events like this rarely happen, and the unveiling of ChatGPT proved that we've already witnessed one, just a few short years ago.
The next one may take years, or decades.
We will be paying the cost until the LLM craze ends, and perhaps it will not even destroy us before then.

Personally, I am sure that an AI Winter is inevitable.
And it will be a winter like we've never seen before, as never before has so much money been thrown into the furnace.
There are _great_ uses of AI - just think of image recognition, or weather forecasting.
All of this may be lost, as one fucking Altman stole all the air from the room, just to burn us all in the process.
I am afraid that the winter will stop significant work not only on LLMs, but on Artificial Intelligence in general.

Note that I ignore all the _risks_ here and just look at the technology.
But the costs are even higher once we count the things it enables - disinformation, for one.