July 10

On Wednesday, 9 July 2025 at 14:04:25 UTC, Lance Bachmeier wrote:

> [..]

I guess it's just a big gray area, because nowadays it's not _really_ regulated by the law, so it can go anywhere from here. But to stay on topic, I must note how LLMs don't write good D code 😅 They usually get something super basic done well, but most often I find that the LLM output code needs to be fixed before it can even compile. The other day DeepSeek tried to gaslight me into believing there's no `zip` in `std`, and, once corrected, output utter uncompilable rubbish :)

Usually no such problem with C++ or C#.
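For the record, `zip` does exist in Phobos, in `std.range`. A minimal sketch (variable names here are illustrative):

```d
import std.range : zip;
import std.stdio : writefln;

void main()
{
    auto nums  = [1, 2, 3];
    auto names = ["one", "two", "three"];

    // zip pairs elements up lock-step; foreach destructures each tuple.
    foreach (n, s; zip(nums, names))
        writefln("%s = %s", n, s);
}
```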

July 10

On Thursday, 10 July 2025 at 12:55:42 UTC, GrimMaple wrote:

> On Wednesday, 9 July 2025 at 14:04:25 UTC, Lance Bachmeier wrote:
> > [..]
>
> I guess it's just a big gray area, because nowadays it's not _really_ regulated by the law, so it can go anywhere from here. But to stay on topic, I must note how LLMs don't write good D code 😅 They usually get something super basic done well, but most often I find that the LLM output code needs to be fixed before it can even compile. The other day DeepSeek tried to gaslight me into believing there's no `zip` in `std`, and, once corrected, output utter uncompilable rubbish :)
>
> Usually no such problem with C++ or C#.

I tried getting ChatGPT to write D last year and it was a frustrating experience. I subscribe to the plus version now*, but haven't tried it with D code. Given some other frustrating experiences with it, I don't have high hopes unless there is some kind of new breakthrough in the technology**.

*4o certainly gives better answers than the base model, but I can't trust it
**It's like they spent all their time grinding for the final exam: they know the textbook backwards and forwards and can do all the problem sets perfectly, but they don't really comprehend what they're doing and are just regurgitating what they read. So when they try to do something not in the textbook or the problem sets, it can be like trying to fit a square peg into a round hole.

July 10
> On Thursday, 10 July 2025 at 12:55:42 UTC, GrimMaple wrote:
[...]
> > I guess it's just a big gray area, because nowadays it's not _really_ regulated by the law, so it can go anywhere from here. But to stay on topic, I must note how LLMs don't write good D code 😅 They usually get something super basic done well, but most often I find that the LLM output code needs to be fixed before it can even compile. The other day DeepSeek tried to gaslight me into believing there's no `zip` in `std`, and, once corrected, output utter uncompilable rubbish :)
> > 
> > Usually no such problem with C++ or C#.

That's because the training data probably included a lot more C++/C# code than D code.

The problem with today's LLMs is that they are essentially interpolation functions, if rather sophisticated ones.  If something was in their training data, you're likely to get something useful out of them.  Otherwise, it's anybody's guess what comes out, probably total rubbish.  LLMs do not actually have a model of comprehension; they only have a model of which output is probabilistically more likely given the input.  There are no semantics attached anywhere.


T

-- 
Study gravitation, it's a field with a lot of potential.
July 10

On Thursday, 10 July 2025 at 14:06:06 UTC, jmh530 wrote:

> I tried getting ChatGPT to write D last year and it was a frustrating experience. I subscribe to the plus version now*, but haven't tried it with D code. Given some other frustrating experiences with it, I don't have high hopes unless there is some kind of new breakthrough in the technology**.

I've had good luck with GitHub Copilot Pro, which I can use for free as an academic. LLMs have become a lot more useful in the last year.

I've had great luck with Copilot on my D code. That's not because I can give it a prompt and then compile and use the 3000 lines of code it spits out. I can't even do that with Javascript. It has detailed knowledge of the compiler internals, the standard library, the spec, and third-party libraries on Github. I use it to write documentation, to write small helper functions, to write templates, do reviews of my code, and that sort of thing. I find the code reviews very helpful.

Until recently, I never understood how to write my own ranges that could be used like this: `foreach(index, value; myrange)`. It's a great example of using LLMs the right way: it wrote a short block of code that had a bug, so I had no choice but to fully understand everything in the block in order to fix it. Another thing is interacting with DLLs on Windows.
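For anyone curious, one way to make `foreach (index, value; myrange)` work on a user-defined type is an `opApply` overload taking a two-parameter delegate. A minimal sketch (the `MyRange` struct and its field are illustrative, not the code from the thread):

```d
import std.stdio : writefln;

// Illustrative container; the opApply overload below is what lets
// foreach (index, value; r) iterate it with an index.
struct MyRange
{
    int[] data;

    int opApply(scope int delegate(size_t index, ref int value) dg)
    {
        foreach (i, ref v; data)
        {
            // A nonzero return from the delegate means the foreach
            // body hit a break/return, so we must propagate it.
            if (auto result = dg(i, v))
                return result;
        }
        return 0;
    }
}

void main()
{
    auto r = MyRange([10, 20, 30]);
    foreach (index, value; r)
        writefln("%s: %s", index, value);
}
```

Alternatively, for types that are already input ranges, `std.range.enumerate` gives the same index/value pairing without any extra code.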

July 10

On Thursday, 10 July 2025 at 16:13:18 UTC, Lance Bachmeier wrote:

> [snip]
>
> Until recently, I never understood how to write my own ranges that could be used like this: `foreach(index, value; myrange)`. It is a great example of using LLMs the right way. It wrote a short block of code. I then had no choice but to fully understand everything in the block so I could fix the bug. Another thing is interacting with DLLs on Windows.

Yeah, those are good use cases. How to implement something like `opApply` or deal with DLLs on Windows is probably going to be in the training set (or close enough that it can be helpful). I'm not saying never use it, but I would say that it's been mostly a waste of time trying to get it to give me the right answer after it was wrong initially.

July 10

On Thursday, 10 July 2025 at 16:13:18 UTC, Lance Bachmeier wrote:

> Until recently, I never understood how to write my own ranges that could be used like this: `foreach(index, value; myrange)`. It is a great example of using LLMs the right way. It wrote a short block of code. I then had no choice but to fully understand everything in the block so I could fix the bug. Another thing is interacting with DLLs on Windows.

I wonder if it stole my answer from the forum
