February 12, 2022
On Saturday, 12 February 2022 at 15:54:29 UTC, Paulo Pinto wrote:
>
> They went on to create Eiffel, Delphi, .NET, Java, V8, GraalVM (nee Maxime), OCaml, Go and Dart.

Half of the ones you mentioned are interpreters. Bytecode, JITs, call it whatever you want, but not true compilers/transpilers that produce a binary. Dart doesn't compile fast (or why do I have that impression?) and Go is very fast but isn't extremely fast (and it was a very annoying and limited language the last time I checked).

Not to offend anything or anyone here, but these examples don't do it for me, and probably not for most people here.
February 12, 2022
On Saturday, 12 February 2022 at 19:06:55 UTC, rempas wrote:
> On Saturday, 12 February 2022 at 15:54:29 UTC, Paulo Pinto wrote:
>>
>> They went on to create Eiffel, Delphi, .NET, Java, V8, GraalVM (nee Maxime), OCaml, Go and Dart.
>
> Half of the ones you mentioned are interpreters. Bytecode, JITs, call it whatever you want, but not true compilers/transpilers that produce a binary. Dart doesn't compile fast (or why do I have that impression?) and Go is very fast but isn't extremely fast (and it was a very annoying and limited language the last time I checked).
>
> Not to offend anything or anyone here, but these examples don't do it for me, and probably not for most people here.

That only shows how little you know of them, and the available toolchains.

If you want to actually educate yourself about them, there is plenty of material available.
February 12, 2022
On Saturday, 12 February 2022 at 16:13:55 UTC, H. S. Teoh wrote:
>
> I use dmd for the code-compile-test cycle because of the fast turnaround. For small programs dmd is so fast it's almost like programming in a scripting language(!). For larger programs it's less so, but still impressively fast for compile times.
>
> Runtime performance of executables compiled by dmd, however, is a disappointment.  I consistently get 20%-40% runtime performance improvement by compiling with ldc/gdc, esp. for CPU-intensive programs.
>
> So my usual workflow is dmd for code-compile-test, ldc -O2 for release builds.
>
>
> T

If you get to the point where runtime becomes too slow for a specific task, then I don't think that 20%-40% will make such a big difference really. There may be cases where even the smallest performance boost makes the difference, but have there been many of those in your experience?

The funny thing is that I may be wrong and talking about things I don't have experience with, but I'm just reasoning from logic, so if I'm wrong please make sure to properly correct me and tell me about your experience on this topic. Thank you!
February 12, 2022
On Saturday, 12 February 2022 at 19:18:12 UTC, Paulo Pinto wrote:
>
> That only shows how little you know of them, and the available toolchains.
>
> If you want to actually educate yourself about them, there is plenty of material available.

You are right in what you are saying, but what in my comment made you say that? I suppose I was wrong about Dart, and Go has probably gotten better. But other than that, what was my mistake? I'm not saying that to be ironic, I really want to see your point of view. You understand that I can learn about 2-3 languages, but I cannot research every language you listed. Thank you!
February 12, 2022
On Sat, Feb 12, 2022 at 07:31:28PM +0000, rempas via Digitalmars-d wrote: [...]
> If you get to the point where runtime becomes too slow for a specific task, then I don't think that 20%-40% will make such a big difference really. There may be cases where even the smallest performance boost makes the difference, but have there been many of those in your experience?
[...]

20%-40% is a HUGE difference. Think about a 60fps 3D game where you have only 16ms to update the screen for the next frame. If your code takes ~13ms to update a frame when compiled with LDC -O2, then compiling with dmd will not even be an option because it would not be able to meet the framerate and the game will be jerky and unplayable.  If the difference is 2% or 3% then there may still be room for negotiation. 20%-40% is half an order of magnitude. There is no way you can compromise with that.

Also, for long-running CPU-intensive computations, which one would you rather have: your complex computation to finish in 2 days, which may just make the deadline, or ~4 days, which will definitely *not* meet the deadline?  Again, if the difference is 2% or 3% then you may still be able to work with it. 20%-40% is unacceptable.
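
To put rough numbers on the frame example, here is a back-of-the-envelope sketch (the 13ms figure and the 20%-40% gap are just the numbers above, not measurements):

```d
// Back-of-the-envelope check of the frame-budget argument above.
void main()
{
    import std.stdio : writefln;

    enum double budgetMs = 1000.0 / 60;   // ~16.7 ms per frame at 60 fps
    enum double ldcMs = 13.0;             // hypothetical frame time with ldc -O2

    foreach (slowdown; [1.20, 1.40])      // the quoted 20%-40% gap
        writefln("%.0f%% slower: %.1f ms (budget is %.1f ms)",
                 (slowdown - 1) * 100, ldcMs * slowdown, budgetMs);
    // 20% slower: 15.6 ms -> barely fits
    // 40% slower: 18.2 ms -> misses 60 fps entirely
}
```

At the low end of that range you are right at the edge of the budget, and at the high end you are clearly over it.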


T

-- 
Written on the window of a clothing store: No shirt, no shoes, no service.
February 12, 2022
On Saturday, 12 February 2022 at 20:22:44 UTC, H. S. Teoh wrote:
>
> 20%-40% is a HUGE difference. Think about a 60fps 3D game where you have only 16ms to update the screen for the next frame. If your code takes ~13ms to update a frame when compiled with LDC -O2, then compiling with dmd will not even be an option because it would not be able to meet the framerate and the game will be jerky and unplayable.  If the difference is 2% or 3% then there may still be room for negotiation. 20%-40% is half an order of magnitude. There is no way you can compromise with that.
>
> Also, for long-running CPU-intensive computations, which one would you rather have: your complex computation to finish in 2 days, which may just make the deadline, or ~4 days, which will definitely *not* meet the deadline?  Again, if the difference is 2% or 3% then you may still be able to work with it. 20%-40% is unacceptable.
>
>
> T

Game dev was what I was sure about and the first thing that comes in mind when we talk about runtime performance. The second example was a good one too! Thank you!
February 12, 2022

On Saturday, 12 February 2022 at 20:22:44 UTC, H. S. Teoh wrote:
> On Sat, Feb 12, 2022 at 07:31:28PM +0000, rempas via Digitalmars-d wrote: [...]
>> If you get to the point where runtime becomes too slow for a specific task, then I don't think that 20%-40% will make such a big difference really. There may be cases where even the smallest performance boost makes the difference, but have there been many of those in your experience?
> [...]
>
> 20%-40% is a HUGE difference. Think about a 60fps 3D game where you have only 16ms to update the screen for the next frame. If your code takes ~13ms to update a frame when compiled with LDC -O2, then compiling with dmd will not even be an option because it would not be able to meet the framerate and the game will be jerky and unplayable. If the difference is 2% or 3% then there may still be room for negotiation. 20%-40% is half an order of magnitude. There is no way you can compromise with that.
>
> Also, for long-running CPU-intensive computations, which one would you rather have: your complex computation to finish in 2 days, which may just make the deadline, or ~4 days, which will definitely not meet the deadline? Again, if the difference is 2% or 3% then you may still be able to work with it. 20%-40% is unacceptable.
>
>
> T

The thing with dmd isn't just the performance; it's also quite buggy when it starts optimizing.

Quite a few libraries have a gotcha due to dmd (especially -inline) that has to be worked around (the inliner can basically ignore language semantics, which can break NRVO, for example).
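
For anyone who hasn't hit it, here is a minimal sketch of the kind of NRVO-dependent code I mean (the names are made up for illustration, not taken from any particular library):

```d
// NRVO means make() is expected to construct its result directly in the
// caller's storage, with no intermediate copy. A type that disables
// copying makes the program depend on that, so an optimization pass that
// defeats NRVO (or sneaks in a copy) breaks code like this.
struct Big
{
    int[1024] data;       // large payload; copying it would also be costly
    @disable this(this);  // non-copyable: correctness relies on NRVO/moves
}

Big make(int seed)
{
    Big b;                // the named return value
    b.data[0] = seed;
    return b;             // expected to be built in place in the caller
}

void main()
{
    auto x = make(42);    // no copy should ever happen here
    assert(x.data[0] == 42);
}
```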

February 12, 2022
On 2/12/2022 2:00 AM, John Colvin wrote:
> I absolutely don’t want my executable defined by the order things happen to appear on the linker command line. I don’t want that incidentally and I don’t want to do it deliberately. The boat sailed on this long ago, I just want everything to be in the executable please with errors on duplicates, unless it’s dead code.

For better or worse, that's how linkers work.

Though you could write a tool to scan libraries for multiple definitions. Most of the work is already done for you in dmd's source code.
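
Something along these lines would do as a starting point (just a sketch: it shells out to GNU nm rather than reusing dmd's object file readers, and the output format is made up):

```d
// Sketch: report symbols defined in more than one of the given object
// files or static libraries, using `nm --defined-only -g`.
import std.array : split;
import std.process : execute;
import std.stdio : writefln;
import std.string : lineSplitter;

void main(string[] args)
{
    string[][string] definers;  // symbol name -> files that define it

    foreach (file; args[1 .. $])
    {
        auto r = execute(["nm", "--defined-only", "-g", file]);
        if (r.status != 0)
            continue;  // not something nm understands
        foreach (line; r.output.lineSplitter)
        {
            auto cols = line.split();  // "<address> <type> <name>"
            if (cols.length >= 3)
                definers[cols[$ - 1]] ~= file;
        }
    }

    foreach (sym, files; definers)
        if (files.length > 1)
            writefln("duplicate definition of %s in: %-(%s, %)", sym, files);
    }
```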
February 12, 2022
On 2/12/2022 9:20 AM, max haughton wrote:
> I'm specifically talking about the file that handles elf files, it's very messy and uses some absolutely enormous structs which are naturally very slow by virtue of their size.

The elf generator was written nearly 30 years ago, and has never been refactored properly to modernize it. It could sure use it, but I'm not so sure it would speed things up noticeably.

If you want to take a crack at it, feel free!
February 13, 2022
On Sunday, 13 February 2022 at 00:41:38 UTC, Walter Bright wrote:
> On 2/12/2022 9:20 AM, max haughton wrote:
>> I'm specifically talking about the file that handles elf files, it's very messy and uses some absolutely enormous structs which are naturally very slow by virtue of their size.
>
> The elf generator was written nearly 30 years ago, and has never been refactored properly to modernize it. It could sure use it, but I'm not so sure it would speed things up noticeably.
>
> If you want to take a crack at it, feel free!

It's on my list.

The reason why it's slow is that the structs are very large compared to a cache line, so the CPU has to pull in 64 bytes (optimistically, it might pull in several lines at once) but only uses about 10 of them in a given iteration.

There is an O(n^2) algorithm in there, but I'm not sure n gets particularly big in normal programs.
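
To illustrate the cache line point with a made-up example (this is just the general shape of the problem, not dmd's actual ELF code):

```d
// Iterating an array of large structs while only touching a small field
// per element wastes most of every cache line the CPU fetches.
struct Symbol           // hypothetical record, ~72 bytes: bigger than a 64-byte line
{
    long offset;
    long size;
    int type;
    int flags;
    char[48] name;
}

size_t countFunctions(const Symbol[] symbols)
{
    size_t n;
    foreach (ref s; symbols)
        if (s.type == 1)    // only 4 of the ~72 bytes are actually read here
            ++n;
    return n;
}
```

Splitting the hot fields out into their own dense array (or simply shrinking the structs) is the usual fix for that access pattern.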