powerline-d (I got an AI to port powerline-shell to D)
September 04

tl;dr: D port of powerline-shell, a beautiful command prompt with (among others) git status https://github.com/FeepingCreature/powerline-d

What's powerline-shell?

Has this happened to you? You're using git, and you enter some command only to get a weird error about branches. About fifteen minutes later, you finally realize you were in the middle of a rebase.

Wouldn't it be cool if your bash prompt automatically showed the status of the git repository in the current folder? Whether it's clean, whether you forgot to add any files, what branch you're on, etc.?

Enter powerline-shell. It's a Python tool that is executed on every prompt render and adds a bunch of information, such as host, git status, and a whole bunch of other configurable widgets. I consider it plainly indispensable for commandline git use.

Anyway, a few days ago I noticed that powerline-shell was slow. Like, observable-slowdown-every-time-I-hit-return slow. It's kind of unavoidable: it's a Python project, so it has to load the interpreter every time it starts.

Now there are a few rewrites, one in Rust, one in Go, but I mean - we're a D shop, and I'll be damned if I make a critical workflow dependent on a Rust tool. But hey, the source isn't that big - only 2116 lines spread over 43 files. 61KB of source code. That comfortably fits in the context window of Claude 3.5 Sonnet. So I thought - what's to it? Let's just throw it in there and see how it handles things.

powerline-d

And what do you know: three hours, a long dialog, and seven bucks later, we have https://github.com/FeepingCreature/powerline-d . Anyone who says large language models aren't really intelligent now has to argue that programming doesn't require intelligence. At an estimate, this is 99.9% the AI's work; all I had to do was provide some design ideas and fix some syntax errors. And it definitely runs a lot faster than powerline-shell.

How much faster?

$ time powerline-shell
...
real    0m0,266s

$ time powerline-d --shell bash
...
real    0m0,072s

I'd say that's pretty meaningful. A lot of the speedup comes from me asking the AI to get rid of manual process calls to ps and git in favor of walking /proc or linking libgit2 directly, but even before that it was about twice as fast, just from skipping the Python interpreter setup/teardown.
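
For the curious, here's a rough sketch of the /proc approach (my illustration, not the actual powerline-d source): read /proc/<pid>/stat to get each process's parent pid and command name, and walk up from the current process instead of spawning `ps`.

```d
// Rough illustration (not the actual powerline-d code): resolving the parent
// process chain by reading /proc/<pid>/stat instead of shelling out to `ps`.
import std.array : split;
import std.conv : to;
import std.file : readText;
import std.format : format;
import std.stdio : writeln;
import std.string : indexOf, lastIndexOf;

// Reads /proc/<pid>/stat and returns the parent pid and command name.
void parentOf(int pid, out int ppid, out string comm)
{
    auto stat = readText(format("/proc/%d/stat", pid));
    // The command name is wrapped in parentheses and may itself contain
    // spaces, so find the last ')' rather than naively splitting the line.
    auto open = stat.indexOf('(');
    auto close = stat.lastIndexOf(')');
    comm = stat[open + 1 .. close];
    // After ") " comes the state character, then the parent pid.
    auto fields = stat[close + 2 .. $].split;
    ppid = fields[1].to!int;
}

void main()
{
    import std.process : thisProcessID;
    // Walk up from the current process and print the ancestry, ps-free.
    int pid = thisProcessID;
    while (pid > 1)
    {
        int ppid;
        string comm;
        parentOf(pid, ppid, comm);
        writeln(pid, "\t", comm);
        pid = ppid;
    }
}
```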

I'm using it everywhere now instead of powerline-shell, and it works fine, which is to say indistinguishable from the original except the weird noticeable delay on hitting return is gone. I'd say it's a win.

A warning

I do have to say this: I've tested the parts of this that I personally use, and even there I found some problems. Widgets like hg support, however, are totally untested, because I don't personally need them. It's in there, and it builds, but I have no clue what'll happen if someone tries to use it. Bug reports welcome!

September 04

On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:

> source code. That comfortably fits in the context window of Claude 3.5 Sonnet. So I thought - what's to it? Let's just throw it in there and see how it handles things.

So awesome! Thanks for sharing not only the code but also your experience.
Recently I asked on Discord who uses LLMs, and it turned out that most of the D community prefers to do development in the "old school" style...

Model provider      Answers
Google              1
OpenAI              1
Anthropic           1
MS/GitHub           0
Local               0
X/Twitter           1
I'm not using LLMs  8
September 04

On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:

> tl;dr: D port of powerline-shell, a beautiful command prompt with (among others) git status https://github.com/FeepingCreature/powerline-d
>
> What's powerline-shell?
>
> [snip]

Your experience might make for a good blog post.

September 04

On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:

> Anyone who says large language models aren't really intelligent now has to argue that programming doesn't require intelligence.

In case that really needs some arguing, I would say translation is not programming.

September 04
On Wednesday, 4 September 2024 at 17:02:55 UTC, Vladimir Marchevsky wrote:
> On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:
>> Anyone who says large language models aren't *really* intelligent now has to argue that programming doesn't require intelligence.
>
> In case that really needs some arguing, I would say translation is not programming.

The other day I was watching a video of a programmer (Jason Turner, from the C++ world) writing a raycaster[1]. From what it seems, he didn't know the math behind line-segment intersection, so he asked ChatGPT to come up with an answer based on the attributes/properties he had already written, and the AI generated a version based on them.

There was a problem with one sign in an expression and the raycasting was a mess, but the programmer couldn't fix it because he was just copying and pasting and, admittedly, didn't know the math. He was only able to fix it when someone in chat pointed out the sign problem.
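
For context, the usual segment-intersection test boils down to comparing the signs of a couple of 2D cross products, so a single flipped sign really does ruin everything. A rough sketch (in D, just to illustrate the math; the video itself used Python):

```d
// Illustrative only: proper segment-intersection test via 2D cross-product
// signs, ignoring collinear/degenerate cases. Flip one sign and it breaks.
struct Vec { double x, y; }

// Cross product of (a - o) and (b - o); its sign gives the turn direction.
double cross(Vec o, Vec a, Vec b)
{
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// True if segments p1-p2 and q1-q2 properly cross each other.
bool intersects(Vec p1, Vec p2, Vec q1, Vec q2)
{
    double d1 = cross(q1, q2, p1);
    double d2 = cross(q1, q2, p2);
    double d3 = cross(p1, p2, q1);
    double d4 = cross(p1, p2, q2);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}

void main()
{
    import std.stdio : writeln;
    writeln(intersects(Vec(0, 0), Vec(4, 4), Vec(0, 4), Vec(4, 0))); // true
    writeln(intersects(Vec(0, 0), Vec(1, 1), Vec(2, 2), Vec(3, 3))); // false
}
```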

I think the state of all that was sad; I mean, will people not use their brains anymore? But on the other hand there is something going on, since the AI was able to generate an algorithm based on specs given beforehand in this specific language (Python), and I saw other videos with other languages too.

What I mean by all this is that we are at the beginning of this trend and I can't imagine the outcome. I don't know, for example, whether the case in this topic is a good or a bad thing yet, but I keep wondering what the new programmers of the future will face.

Finally, I didn't want to derail the topic, but the subject was already raised by the original poster.

[1] - (https://yewtu.be/watch?v=0lSqedQau6w) you can change yewtu.be for the google one with ads if you wish.

Matheus.
September 04

On Wednesday, 4 September 2024 at 17:02:55 UTC, Vladimir Marchevsky wrote:

> On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:
>
>> Anyone who says large language models aren't really intelligent now has to argue that programming doesn't require intelligence.
>
> In case that really needs some arguing, I would say translation is not programming.

There are semantic differences between any two languages. Things like implicit conversions, integral promotion, order of evaluation, etc. still need a good old human behind the screen.
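
To give one concrete example of the kind of mismatch (just an illustration, nothing to do with powerline-d specifically): Python's // and % floor toward negative infinity, while D's / and % truncate toward zero, so a literal line-by-line translation of modulo arithmetic can silently change results.

```d
// Python vs. D integer semantics: a naive translation changes the answers.
void main()
{
    import std.stdio : writeln;
    writeln(-7 / 2);  // D prints -3; Python's -7 // 2 is -4
    writeln(-7 % 2);  // D prints -1; Python's -7 % 2 is 1
}
```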

September 05

On Wednesday, 4 September 2024 at 18:55:07 UTC, user1234 wrote:

> On Wednesday, 4 September 2024 at 17:02:55 UTC, Vladimir Marchevsky wrote:
>
>> In case that really needs some arguing, I would say translation is not programming.
>
> There are semantic differences between any two languages. Things like implicit conversions, integral promotion, order of evaluation, etc. still need a good old human behind the screen.

Yeah, the nice thing about translation is that the things LLMs have the most trouble with, i.e. high-level semantic design, class naming, abstraction, are already handled. But it's also not like the LLM isn't making its own choices. Recently, I got an LLM to automatically translate a program from asyncio-mqtt to paho-mqtt.

That's not a syntax rewrite, it's a full-on migration, async to threads. It still handled it fine. The boundary at the top of abstraction work in programming where LLMs have trouble is really not all that big.

The problem is that because current AIs are congenitally incapable of backspacing, high-level abstraction basically requires the AI to guess what it's gonna need from zero, and be right, because it can't change course. Defining an abstract class requires guessing what methods that class needs, because you can't backtrack. It isn't - can't be - good at that, and because of that, it isn't really trained in it either. But you can work around that with careful prompting and some iteration.

For instance, you can see that we turned the Python segment classes into functions. That's a thing that I suggested, but my effort literally amounted to saying:

> Honestly the more I think about it, the more I feel the segments should really just be functions, since they don't interact.

And later:

> Can you please refactor the segment logic so that the functions always return empty arrays for "don't include segments" and we never look at content.empty? Also remove unneeded parameters while we're at it.
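
The resulting shape looks roughly like this; a minimal sketch, not the actual powerline-d code, and the names are made up:

```d
// Illustrative sketch only: each segment is a plain function returning an
// array of rendered segments; an empty array means "don't show this segment".
import std.file : exists;

// Minimal rendered-segment type; the real one carries colors, separators, etc.
struct Segment
{
    string content;
    int fg;
    int bg;
}

// Hypothetical git segment: returns nothing when we're not in a repository.
Segment[] gitSegment()
{
    if (!exists(".git"))
        return [];
    return [Segment("main", 15, 2)];
}

// The prompt builder just concatenates whatever each segment function yields;
// nothing ever has to check content.empty.
Segment[] buildPrompt()
{
    Segment[] result;
    foreach (segmentFn; [&gitSegment /*, &hostSegment, &cwdSegment, ... */])
        result ~= segmentFn();
    return result;
}
```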

The LLM works a lot better when it's going "from file to file" making small to moderate incremental changes. (Like us, really.)

September 23

On Wednesday, 4 September 2024 at 17:02:55 UTC, Vladimir Marchevsky wrote:

> On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:
>
>> Anyone who says large language models aren't really intelligent now has to argue that programming doesn't require intelligence.
>
> In case that really needs some arguing, I would say translation is not programming.

You would be surprised how much original code and how many code modifications LLMs can output. I wouldn't be too quick to dismiss them as mere translation tools.

For example, take a look at the intro video on the Zed homepage to see what can be achieved with AI assisted coding (https://zed.dev/)

September 23

On Monday, 23 September 2024 at 08:46:30 UTC, aberba wrote:

> For example, take a look at the intro video on the Zed homepage to see what can be achieved with AI assisted coding (https://zed.dev/)

https://www.youtube.com/watch?v=-ONQvxqLXqE

Run real tests; most AI examples are faked.

September 24

On Wednesday, 4 September 2024 at 12:24:07 UTC, FeepingCreature wrote:

> tl;dr: D port of powerline-shell, a beautiful command prompt with (among others) git status https://github.com/FeepingCreature/powerline-d
>
> [...]

Nice, did you use ImportC also?
