+++
title = "AI radio was straight out of a nightmare"
author = ["Michał Sapka"]
date = 2024-10-31T22:09:00+01:00
categories = ["blog"]
draft = false
weight = 2001
image_dir = "blog/images"
image_max_width = 600
Abstract = "OFF Radio is no longer AI Radio"
Listening = "[podcast] 2.5 Admins 219: Spooky Stories"
Listening_Url = "https://2.5admins.com/2-5-admins-219/"
+++

We, at Crys Jurnal, are happy to report... no, I'm not doing anything that scary for Halloween.

A few days ago, I wrote [about AI radio](/blog/2024/ai-radio/).
Well, we can now talk about it in the past tense, as they gave up.
The response was so negative that they are no longer doing it.
This is the happy part.

But they are not the only ones doing such "experiments".
LLMs may have proven themselves to be unreliable at almost anything, but this won't stop evil people from using them everywhere.
We can only oppose it.

In a completely unrelated story...

Recently, at my day job, I was tasked with converting some Scala code to Ruby.
I tried to do it manually, to actually understand what the hell I was committing.
But on two occasions, I gave up and asked ChatGPT to rewrite a method 1:1.
It did it poorly, but after some back and forth, accompanied by cursing, it worked.
I've been told that even for Scala devs, the original code was convoluted.
As a result, the Ruby code was convoluted too.
It looked like they had hired a Java guy to write Ruby.
It works, technically it's correct... but it's not Ruby.
Therefore, I added a comment:

> Warning. This method was converted from Scala code by LLM

I may have played with the devil, but this allowed me to feel better about it.
I even had to explain myself during patch review.
One thing I didn't do was normalize its usage.
But, in the end, I noticed that it didn't save me any time.
I still needed to refactor it, understand what the original code did, and test it.
All it did was add uncertainty.
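
To give you an idea of what I mean by "a Java guy writing Ruby", here is a contrived sketch. This is not the actual code from work, just the flavor of it:

```ruby
# What the LLM-style conversion looked like (made-up example):
# explicit index, mutable accumulator, explicit return.
def sum_of_even_squares(numbers)
  result = 0
  i = 0
  while i < numbers.length
    n = numbers[i]
    if n % 2 == 0
      result = result + n * n
    end
    i = i + 1
  end
  return result
end

# What a Ruby dev would actually write:
def sum_of_even_squares(numbers)
  numbers.select(&:even?).sum { |n| n * n }
end

sum_of_even_squares([1, 2, 3, 4]) # => 20 in both versions
```

Both are technically correct; only one of them reads like Ruby.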

So, in my book, one of the things LLMs can't do reliably is help with coding.
This was the first time I tried to use Altman to help me at work, and it was a failure.
Just like AI Radio.
This is the sad part.