September 24

On Monday, 23 September 2024 at 08:46:30 UTC, aberba wrote:

> You would be surprised how much original code and code modifications LLMs can output. I wouldn't be too quick to dismiss them as mere translation tools.
>
> For example, take a look at the intro video on the Zed homepage to see what can be achieved with AI-assisted coding (https://zed.dev/)

I've seen that. My point is: while AI can sometimes really look great doing something, people should always keep in mind that it's just complex math intended to generate specific patterns. It's not intelligent, it doesn't really understand any context, nor does it understand anything it outputs. Image generation is a great example: there are a lot of nice images made by AI, but there are also tons of garbage produced, with wrong limbs, distorted faces, etc. General-purpose ChatGPT answering with lots of text that means barely anything, or swapping topics, is another great example. And while you can sometimes be fine with some small mistakes in an image, coding has no room for that.

So, my personal opinion: AI can be great at generating repetitive or well-defined code to do some of the typing instead of a human, but it still needs a good programmer to ensure all results are correct.

September 26

On Tuesday, 24 September 2024 at 07:23:26 UTC, Vladimir Marchevsky wrote:

> On Monday, 23 September 2024 at 08:46:30 UTC, aberba wrote:
>
>> You would be surprised how much original code and code modifications LLMs can output. I wouldn't be too quick to dismiss them as mere translation tools.
>>
>> For example, take a look at the intro video on the Zed homepage to see what can be achieved with AI-assisted coding (https://zed.dev/)
>
> I've seen that. My point is: while AI can sometimes really look great doing something, people should always keep in mind that it's just complex math intended to generate specific patterns. It's not intelligent

If somebody implemented intelligence as an algorithm, what form would you expect it to take other than "complex math generating specific patterns"?

> it doesn't really understand any context, nor does it understand anything it outputs.

You can disprove this to yourself by just talking to it. Have a chat, have it explain what it was going for. Doesn't always work reliably, but the claim that there's no understanding there is easily disproven.

> Image generation is a great example: there are a lot of nice images made by AI, but there are also tons of garbage produced, with wrong limbs, distorted faces, etc.

It should be noted that the text models used by image generators are, by current-year standards, absolutely tiny. Like, GPT-2 tier. It does not surprise me that they don't understand things, and it says nothing about the chat models, which can be a hundred times bigger or more.

> General-purpose ChatGPT answering with lots of text that means barely anything, or swapping topics, is another great example. And while you can sometimes be fine with some small mistakes in an image, coding has no room for that.

As usual - make sure you're using GPT-4 not 3.5!

The question isn't "does it make mistakes", the question is "does it make more mistakes than I do?" And in my experience, Sonnet makes fewer. His code compiles a lot more reliably than mine does!

> So, my personal opinion: AI can be great at generating repetitive or well-defined code to do some of the typing instead of a human, but it still needs a good programmer to ensure all results are correct.

Well, that's the case anyways.

September 26

On Tuesday, 24 September 2024 at 05:30:32 UTC, Imperatorn wrote:

> On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:
>
>> tl;dr: D port of powerline-shell, a beautiful command prompt with (among others) git status https://github.com/FeepingCreature/powerline-d
>>
>> [...]
>
> Nice, did you use ImportC also?

Yep! ImportC is responsible for the libgit2 bindings, so I can avoid calling the git client manually every time I want some info about the current directory. After I got the concept, it was smooth sailing; the DMD version differences are a bit awkward, but that's normal for a fairly new feature.
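For anyone curious, here's roughly what that looks like: you drop a plain C file (say, `git2c.c`, containing nothing but `#include <git2.h>`) next to your D sources, and ImportC turns it into an importable module. A minimal sketch, with file and symbol layout that is illustrative rather than the actual powerline-d code:

```d
import git2c; // the plain C file above, compiled by ImportC into a D module

void main()
{
    // libgit2 must be initialised before any other call.
    git_libgit2_init();
    scope (exit) git_libgit2_shutdown();

    // Open the repository containing the current directory.
    git_repository* repo;
    if (git_repository_open_ext(&repo, ".", 0, null) != 0)
        return; // not inside a git repo
    scope (exit) git_repository_free(repo);

    // Ask for HEAD and print the short branch name, e.g. "master".
    git_reference* head;
    if (git_repository_head(&head, repo) == 0)
    {
        scope (exit) git_reference_free(head);
        import core.stdc.stdio : printf;
        printf("branch: %s\n", git_reference_shorthand(head));
    }
}
```

Build with something like `dmd app.d git2c.c -L-lgit2` (exact include paths depend on where libgit2 is installed), and you get in-process access to the repository state instead of shelling out to the git client.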

September 26

On Thursday, 26 September 2024 at 06:58:28 UTC, FeepingCreature wrote:

> If somebody implemented intelligence as an algorithm, what form would you expect it to take other than "complex math generating specific patterns"?

"If"s are the matter of sci-fi. We are talking about usage of existing software.

> You can disprove this to yourself by just talking to it. Have a chat, have it explain what it was going for. Doesn't always work reliably, but the claim that there's no understanding there is easily disproven.

"Doesn't work reliably" is an actual proof of it having no understanding. It's patterns just sometimes match common logic (or you imagine the reasons behind it). If some kid sometimes called a specific animal a cat and other times called the same animal a dog, you'll be sure that kid has no idea what cats and dogs actually are.

> It should be noted that the text models used by image generators are, by current-year standards, absolutely tiny. Like, GPT-2 tier. It does not surprise me that they don't understand things, and it says nothing about the chat models, which can be a hundred times bigger or more.

An extremely advanced waxwork is still a waxwork, even when it looks like real food or a real person. It doesn't matter how many patterns you put into the model, it still just randomly mimics those patterns.

> The question isn't "does it make mistakes", the question is "does it make more mistakes than I do?" And in my experience, Sonnet makes fewer. His code compiles a lot more reliably than mine does!

I'll leave that one without a response, I think...

September 26

On Thursday, 26 September 2024 at 06:58:28 UTC, FeepingCreature wrote:

> You can disprove this to yourself by just talking to it. Have a chat, have it explain what it was going for. Doesn't always work reliably, but the claim that there's no understanding there is easily disproven.

I don't know whether images can be used here, so here is just a text quote of ChatGPT 4o:

> • Human brain weights about 1400 grams. Hamster brain weights about 1.4 grams. How many hamsters are needed to create one human brain?
> • To determine how many hamsters' brains would equal the weight of one human brain, you can divide the weight of a human brain by the weight of a hamster brain: [weight calculations here]. So, it would take 1,000 hamsters' brains to match the weight of one human brain.

It's completely obvious that there is no understanding of context at all. It just matches a simple weight-calculation pattern and completely ignores the clear context of the actual question, which asks for something else. You can imagine the effect of such an "off-topic" answer somewhere deep in complex generated code. I wouldn't trust this to write any code that isn't triple-checked by real programmers afterwards...

September 27

On Thursday, 26 September 2024 at 06:58:28 UTC, FeepingCreature wrote:

> If somebody implemented intelligence as an algorithm, what form would you expect it to take other than "complex math generating specific patterns"?

An approximation of the halting problem + code generation + value-heavy assessment of data.

An NN is a constant-time machine; no outside influence holistically communicates how it should be spending its time or lets it change its previous output. If you ask "is 2+2=4" and the first piece of output is 'n' by a roll of the dice, it has already decided to say "no" and will vomit out 5 paragraphs defending it.

A general intelligence will at least be able to do everything a human can, and humans have backspace, humans can take notes when they want to, humans can spend a full lifetime on one problem or declare something impossible in a second.
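To make the "already decided to say no" point concrete, here is a toy sketch in D of just the loop shape (the "model" is a stand-in function, nothing like a real network): each token is sampled once, appended, and never revised, so an unlucky first token commits everything that follows.

```d
// Toy autoregressive loop: NOT a real model. The point is only the shape:
// tokens are emitted one at a time, conditioned on what was already said,
// and there is no backspace to revise an earlier choice.
import std.stdio : writeln;
import std.random : uniform;

string fakeNextToken(const string[] context)
{
    // Stand-in for a neural net; a real one returns a probability
    // distribution over possible next tokens given the context.
    if (context.length == 0)
        return uniform(0, 2) == 0 ? "Yes," : "No,"; // the unlucky dice roll
    return context[0] == "No,"
        ? "and here are five paragraphs defending that..."
        : "2 + 2 is indeed 4.";
}

void main()
{
    string[] output;
    foreach (_; 0 .. 2)
        output ~= fakeNextToken(output); // earlier tokens are frozen forever
    writeln(output);
}
```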

October 01

On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:

> Now there's a few rewrites, one in Rust, one in Go, but I mean - we're a D shop, and I'll be damned if I make a critical workflow dependent on a Rust tool. But hey, the source isn't that big - only 2116 lines spread over 43 files. 61KB of source code. That comfortably fits in the context window of Claude 3.5 Sonnet.

Obvious question. Can it riir?

October 14

On Tuesday, 1 October 2024 at 05:59:30 UTC, Kagamin wrote:

> On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:
>
>> Now there's a few rewrites, one in Rust, one in Go, but I mean - we're a D shop, and I'll be damned if I make a critical workflow dependent on a Rust tool. But hey, the source isn't that big - only 2116 lines spread over 43 files. 61KB of source code. That comfortably fits in the context window of Claude 3.5 Sonnet.
>
> Obvious question. Can it riir?

Lol, probably. I think the more visible typing and structure there is, the more it struggles. It has problems with D imports too. Humans can go back and add something at the top of the file - an LLM has to keep typing, top to bottom.

Honestly to be optimal for LLMs, function parameters should be at the bottom.

> "Doesn't work reliably" is actual proof that it has no understanding. Its patterns just sometimes match common logic (or you imagine the reasons behind them).

Honestly, this applies exactly to humans. We are just better at noticing when our patterns break and correcting ourselves.

Since I started using LLMs, I've frequently noticed myself saying blatantly wrong things and going "heh, stochastic parrot moment". It's very clear to me that my language generation is just a next-token predictor with a bunch of sanity checks. Sometimes those checks take a bit too long to kick in and I misspeak.

They'll get there.

> If some kid sometimes called a specific animal a cat and other times called the same animal a dog, you'd be sure that kid has no idea what cats and dogs actually are.

No? You'd say it's confused about cats and dogs. It has some model, or else it would call them a toaster or water with equal frequency.
