jmh530 | Posted in reply to GrimMaple
> On Thursday, 10 July 2025 at 12:55:42 UTC, GrimMaple wrote:
>> On Wednesday, 9 July 2025 at 14:04:25 UTC, Lance Bachmeier wrote:
>> [..]
> I guess it's just a big gray area, because nowadays it's not really regulated by the law, so it can go anywhere from here. But to stay on topic, I must note how LLMs don't write good D code 😅 They usually get something super basic done well, but most often I find that the LLM output code needs to be fixed before it can even compile. The other day DeepSeek tried to gaslight me into believing there's no zip in std, and, once corrected, output utter uncompilable rubbish :)
> Usually no such problem with C++ or C#.
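For what it's worth, std.zip has been in Phobos for a long time. Here's a minimal sketch, written from memory of the API and not tested, just to show the kind of thing the model was denying exists:

```d
import std.zip;
import std.file : write;

void main()
{
    // Build a zip archive containing a single text file.
    auto archive = new ZipArchive();

    auto member = new ArchiveMember();
    member.name = "hello.txt";
    member.expandedData = cast(ubyte[]) "Hello from std.zip".dup;
    member.compressionMethod = CompressionMethod.deflate;

    archive.addMember(member);

    // build() serializes the archive to a buffer we can write to disk.
    write("hello.zip", archive.build());
}
```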
I tried getting ChatGPT to write D last year and it was a frustrating experience. I subscribe to the Plus version now*, but haven't tried it with D code. Given some other frustrating experiences with it, I don't have high hopes unless there is some kind of new breakthrough in the technology**.
*4o certainly gives better answers than the base model, but I can't trust it.
**It's like they spent all their time grinding for the final exam: they know the textbook backwards and forwards and can do all the problem sets perfectly, but they don't really comprehend what they're doing and are just regurgitating what they read. So when they try to do something that isn't in the textbook or the problem sets, it can be like trying to fit a square peg into a round hole.