opensource AI agents that aren't useless with D

October 16

The "zed" free trail is able to write a small d program by check if it runs and it will compile... but its a free trail and it auto ran formatting and its a fucking bloated ide, that makes my fans spin, that is half broken scaling. Etc.

So it's probably time to start poking at the open source tooling, ripping out the stupidity, and giving it the D docs directly.

Who's doing what? What handles D to some degree?

October 16

On Thursday, 16 October 2025 at 20:19:56 UTC, monkyyy wrote:

> So it's probably time to start poking at the open source tooling, ripping out the stupidity, and giving it the D docs directly.
>
> Who's doing what? What handles D to some degree?

Hey monkyyy.

I've made a D port of a Python project that takes a whole GitHub repo and turns it into one huge formatted document, so that AI agents can use it effectively [1].

I think a similar approach is possible with DDoc: generate the docs for the project and share them directly in the context/system prompt.

I've seen a similar idea done by a Rust dev [2], [3].
And as agentic CLI tools are becoming more popular, in theory no bloated IDE will be required.

References:

  1. https://github.com/cyrusmsk/rendergit-d
  2. https://github.com/cortesi/ruskel
  3. https://github.com/cortesi/dox
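
A minimal sketch of that idea in plain D (not the actual rendergit-d code; the file layout and markdown-style output are just assumptions): walk a source tree and concatenate every .d file into one document that can be dropped into a model's context.

    // Hypothetical sketch, not rendergit-d itself: dump every .d file under a
    // directory into a single markdown-style document for an AI context window.
    import std.algorithm : endsWith;
    import std.file : SpanMode, dirEntries, readText;
    import std.stdio : File, writeln;

    void main(string[] args)
    {
        immutable root = args.length > 1 ? args[1] : ".";
        auto doc = File("context.md", "w");
        foreach (entry; dirEntries(root, SpanMode.depth))
        {
            if (!entry.isFile || !entry.name.endsWith(".d"))
                continue;
            doc.writeln("## ", entry.name);   // one section per source file
            doc.writeln("```d");
            doc.write(readText(entry.name));
            doc.writeln("```");
        }
        writeln("wrote context.md");
    }
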
October 16

On Thursday, 16 October 2025 at 21:02:15 UTC, Sergey wrote:

> I've made a D port of a Python project that takes a whole GitHub repo and turns it into one huge formatted document, so that AI agents can use it effectively [1].
>
> I think a similar approach is possible with DDoc: generate the docs for the project and share them directly in the context/system prompt.

Why HTML? Markdown exists.
And you're not answering which AI models you're using.

October 17

On Thursday, 16 October 2025 at 20:19:56 UTC, monkyyy wrote:

> So it's probably time to start poking at the open source tooling, ripping out the stupidity, and giving it the D docs directly.
>
> Who's doing what? What handles D to some degree?

I want to create a custom GPT by uploading the D spec and a few other resources and see if that is any good, but I haven't had the time. My initial enthusiasm for ChatGPT 5 has waned a bit over the past few weeks, since I'm getting similar kinds of mistakes to the ones I had on GPT 4 with certain types of questions.

The weird thing is that it's so good at a lot of things but still dumb at others. Still gotta be careful.

October 17

On Friday, 17 October 2025 at 00:36:21 UTC, jmh530 wrote:

> The weird thing is that it's so good at a lot of things but still dumb at others. Still gotta be careful.

That will be true 30 years from now. I'm a big believer in their usefulness, but I don't expect they'll ever stop surprising us. That's a good thing, because it forces us to understand what's going on.

October 17

On Thursday, 16 October 2025 at 20:19:56 UTC, monkyyy wrote:

> So it's probably time to start poking at the open source tooling, ripping out the stupidity, and giving it the D docs directly.
>
> Who's doing what? What handles D to some degree?

I know it's not an agent, but this is how I use AI.

I am using llama.cpp locally, and I run:

Qwen3-Coder-30B-A3B-Instruct-Q5_K_M
gpt-oss-20b-Q5_K_M.gguf
gpt-oss-20b-Q4_K_M.gguf
Qwen3-30B-A3B-Thinking-2507-Q6_K.gguf

For my uses, the Qwen models have been the best for D. I aim to try most models released on Hugging Face.

But most models do not know D well. It's easy for them to go off on a tangent and start saying the compiler is too old or the library is wrong.

It would be really nice to get a model fine-tuned on the D language and library. Generating the documentation in plain-text form to aid training would help.

I use Code::Blocks for editing and project management, and Fossil for source control.

My workflow is to prepare the query in a Code::Blocks window and then paste it into the llama.cpp HTML query page. If I have a D module that works and I know it doesn't need modifying, I generate simple documentation (comments, function signatures) for that module and paste that instead of the source, to reduce context use.

It takes seconds to copy the result, paste it into Code::Blocks, and hit build.

I have generated projects with more than 3k lines of code over multiple modules. Even if it's simple code, the speed at which the AI can generate it beats typing.
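
A minimal sketch of scripting that copy/paste round trip, assuming a local llama-server exposing its OpenAI-compatible chat endpoint on the default port 8080 and libcurl being available; the model name and prompt are placeholders, not necessarily the setup above.

    // Hypothetical sketch: send one prompt to a local llama.cpp server
    // (llama-server) through its OpenAI-compatible chat endpoint and print
    // the reply. Endpoint, port and model name are assumptions.
    import std.json : JSONValue, parseJSON;
    import std.net.curl : HTTP, post;
    import std.stdio : writeln;

    string askLocalModel(string prompt)
    {
        auto http = HTTP();
        http.addRequestHeader("Content-Type", "application/json");

        JSONValue req = [
            "model": JSONValue("Qwen3-Coder-30B-A3B-Instruct"),  // placeholder name
            "messages": JSONValue([
                JSONValue(["role": JSONValue("user"), "content": JSONValue(prompt)])
            ])
        ];

        auto raw = post("http://127.0.0.1:8080/v1/chat/completions", req.toString(), http);
        return parseJSON(raw)["choices"][0]["message"]["content"].str;
    }

    void main()
    {
        writeln(askLocalModel("Write a D function that reverses a string."));
    }
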

Regards,
Mark.

October 17

On Thursday, 16 October 2025 at 23:01:14 UTC, monkyyy wrote:

> Why HTML? Markdown exists.
> And you're not answering which AI models you're using.

You can use any model with this approach.
It helps you add the source code of your dependencies to the context and ask questions with that info.
Of the big ones, I've tried Gemini.

For local, open source models I've only played with Llama 3, but better models are available now. I've heard many good things about Qwen.

They are popping up every week, but most of the inference engines will be able to run most of them.

October 17

On Thursday, 16 October 2025 at 20:19:56 UTC, monkyyy wrote:

> So it's probably time to start poking at the open source tooling, ripping out the stupidity, and giving it the D docs directly.
>
> Who's doing what? What handles D to some degree?

I should have mentioned in my previous reply that one of the code blobs I generated is the core of an agent written in D. It has a WebSocket interface to the HTML front end and an OpenAI-style interface to llama.cpp. It has tool-calling support and tracks conversations. I've kind of put it on the back burner, as I can't get tool calling working the way that I want.

All the code was generated by AI models.

Mark.

October 17

On Friday, 17 October 2025 at 07:53:58 UTC, Mark Davies wrote:

> I should have mentioned in my previous reply that one of the code blobs I generated is the core of an agent written in D. It has a WebSocket interface to the HTML front end and an OpenAI-style interface to llama.cpp. It has tool-calling support and tracks conversations.
>
> All the code was generated by AI models.

Sounds nice.

How do you handle async tasks for agents?

October 17

On Friday, 17 October 2025 at 08:38:11 UTC, Serg Gini wrote:

> Sounds nice.
>
> How do you handle async tasks for agents?

The WebSocket, web interface, and OpenAI interface run as separate threads, with message passing handled by an internal mail system. Tool calls are intercepted by the OpenAI interface, and external tasks are spawned as shell commands via pipe processes. There is a sub-channel in the WebSocket messaging system for passing task/internal status messages to and from the front end, while text from the AI gets sent plain. This isn't fully complete, as I was trying to resolve issues with tool calling before putting more effort into this area.
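
A minimal sketch of that shell-command step, i.e. running one tool call through a pipe and capturing its output for the conversation; the function name and the example command are placeholders, not the agent's actual code.

    // Hypothetical sketch: execute a shell command requested by the model,
    // capture stdout (with stderr merged into it), and return the text so it
    // can be appended to the conversation as the tool result.
    import std.array : appender;
    import std.process : Redirect, pipeShell, wait;
    import std.stdio : writeln;

    string runToolCommand(string cmd)
    {
        auto pipes = pipeShell(cmd, Redirect.stdout | Redirect.stderrToStdout);
        scope (exit) wait(pipes.pid);

        auto output = appender!string();
        foreach (line; pipes.stdout.byLine)
        {
            output.put(line);
            output.put('\n');
        }
        return output.data;
    }

    void main()
    {
        // e.g. the model asked for a directory listing as a tool call
        writeln(runToolCommand("ls -la"));
    }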

My plan was to write an internal, built-in editor for the AI that could read and write internal buffers to reduce context use: load a file into a buffer, scan and modify it, then compile and run, capturing the output in another buffer. This would allow context pruning while maintaining the AI's understanding of the task.

I can get it to use the shell interface quite well: I have had it scan a directory and analyse the files to describe what it thinks they do, and I can get it to use wget to retrieve information (Bitcoin price, exchange rates, etc.).

But I just can't get it to use the editor interface; it gets totally confused. There was an issue with Qwen models and tool calling when using the OpenAI streaming interface, but they say they have fixed that. It doesn't seem to have helped in my case, though. I have put the code to one side until I can resolve this issue, as that is the crux of what I wanted to do.
