May 30, 2015
On 30 May 2015 at 20:38, Shachar Shemesh via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> On 30/05/15 03:57, Steven Schveighoffer wrote:
>
>  I saw the slide from Liran that shows your compiler requirements :) I
>> can see why it's important to you.
>>
>
> Then you misunderstood Liran's slides.
>
> Our compile resources problem isn't with GDC. It's with DMD. Single object compilation requires more RAM than most developers machines have, resulting in a complicated "rsync to AWS, run script there, compile, fetch results" cycle that adds quite a while to the compilation time.
>
> Conversely, our problem with GDC is that IT !@$#%&?!@# PRODUCES ASSEMBLY THAT DOES NOT MATCH THE SOURCE.
>
>
Got any bug reports to back that up?  I should probably run the testsuite with optimisations turned on sometime.


May 30, 2015
On 31 May 2015 at 04:39, Shachar Shemesh via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
>>
>>
>> When he says Windows, he means MSVC, gcc backend will never support interfacing that ABI (at least I see no motivation as of writing).
>>
> I thought that's what MINGW was. A gcc backend that interfaces with the Windows ABI. Isn't it?

If your program is isolated, MinGW is fine. Great even!
But the Windows ecosystem is built around Microsoft's COFF formatted
libraries (as produced by Visual Studio), and most Windows libs that I
find myself working with are closed-source, or distributed as
pre-built binaries.
You can't do very large scale work in the Windows ecosystem without
interacting with the MS ecosystem, that is, COFF libs, and CV8/PDB
debuginfo.

Even if we could use MinGW, we ship an SDK ourselves, and customers
would demand COFF libs from us.
LLVM is (finally!) addressing this Microsoft/VisualC-centric nature of
the Windows dev environment... I just wish they'd hurry up! It's about
10 years overdue.
May 31, 2015
On 31/05/15 02:08, Manu via Digitalmars-d wrote:
> On 31 May 2015 at 04:39, Shachar Shemesh via Digitalmars-d
> <digitalmars-d@puremagic.com> wrote:
>> On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
>>>
>>>
>>> When he says Windows, he means MSVC, gcc backend will never support
>>> interfacing that ABI (at least I see no motivation as of writing).
>>>
>> I thought that's what MINGW was. A gcc backend that interfaces with the
>> Windows ABI. Isn't it?
>
> If your program is isolated, MinGW is fine. Great even!
> But the Windows ecosystem is built around Microsoft's COFF formatted
> libraries (as produced by Visual Studio), and most Windows libs that I
> find myself working with are closed-source, or distributed as
> pre-built binaries.
Again, sorry for my ignorance. I just always assumed that the main difference between mingw and cygwin is precisely that: that mingw executables are PE formatted, and can import PE DLLs (such as the Win32 DLLs themselves).

If that is not the case, what is the mingw format? How does it allow you to link in the Win32 DLLs if it does not support COFF?

Shachar
May 31, 2015
On 31 May 2015 at 17:59, Shachar Shemesh via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 31/05/15 02:08, Manu via Digitalmars-d wrote:
>>
>> On 31 May 2015 at 04:39, Shachar Shemesh via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>>>
>>> On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
>>>>
>>>>
>>>>
>>>> When he says Windows, he means MSVC, gcc backend will never support interfacing that ABI (at least I see no motivation as of writing).
>>>>
>>> I thought that's what MINGW was. A gcc backend that interfaces with the Windows ABI. Isn't it?
>>
>>
>> If your program is isolated, MinGW is fine. Great even!
>> But the Windows ecosystem is built around Microsoft's COFF formatted
>> libraries (as produced by Visual Studio), and most Windows libs that I
>> find myself working with are closed-source, or distributed as
>> pre-built binaries.
>
> Again, sorry for my ignorance. I just always assumed that the main difference between mingw and cygwin is precisely that: that mingw executables are PE formatted, and can import PE DLLs (such as the Win32 DLLs themselves).
>
> If that is not the case, what is the mingw format? How does it allow you to link in the Win32 DLLs if it does not support COFF?
>
> Shachar

I did once play with a COFF MinGW build, but I think the key issue I
had there was the C runtime. GCC-built code seems to produce intrinsic
calls into glibc, and those are incompatible with MSVCRT.
I'm pretty certain that GCC can't emit code to match the Win32
exception model, and there's still the debuginfo data to worry about
too.
May 31, 2015
On 31 May 2015 at 10:45, Manu via Digitalmars-d <digitalmars-d@puremagic.com> wrote:

> On 31 May 2015 at 17:59, Shachar Shemesh via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> > On 31/05/15 02:08, Manu via Digitalmars-d wrote:
> >>
> >> On 31 May 2015 at 04:39, Shachar Shemesh via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> >>>
> >>> On 30/05/15 11:00, Iain Buclaw via Digitalmars-d wrote:
> >>>>
> >>>>
> >>>>
> >>>> When he says Windows, he means MSVC, gcc backend will never support interfacing that ABI (at least I see no motivation as of writing).
> >>>>
> >>> I thought that's what MINGW was. A gcc backend that interfaces with the Windows ABI. Isn't it?
> >>
> >>
> >> If your program is isolated, MinGW is fine. Great even!
> >> But the Windows ecosystem is built around Microsoft's COFF formatted
> >> libraries (as produced by Visual Studio), and most Windows libs that I
> >> find myself working with are closed-source, or distributed as
> >> pre-built binaries.
> >
> > Again, sorry for my ignorance. I just always assumed that the main difference between mingw and cygwin is precisely that: that mingw executables are PE formatted, and can import PE DLLs (such as the Win32 DLLs themselves).
> >
> > If that is not the case, what is the mingw format? How does it allow you to link in the Win32 DLLs if it does not support COFF?
> >
> > Shachar
>
> I did once play with a coff mingw build, but I think the key issue I
> had there was the C runtime. GCC built code seems to produce intrinsic
> calls to glibc, and it is incompatible with MSVCRT.
> I'm pretty certain that GCC can't emit code to match the Win32
> exception model, and there's still the debuginfo data to worry about
> too.
>

Pretty much correct as far as I understand it.

- GCC uses DWARF to embed debug information into the program, rather than store it in a separate PDB.
- GCC uses SJLJ exceptions in C++ that work with its own libunwind model.
- GCC uses Itanium C++ mangling, so mixed MSVC/G++ is a no-go.
- GCC uses cdecl as the default calling convention (need to double check this is correct though).

That said, GCC does produce a COFF binary that is understood by the Windows platform (otherwise you wouldn't be able to run programs).  But interacting with Windows libraries is restricted to the lowest-level API, that being anything that was marked with stdcall, fastcall or cdecl.

MinGW is an entirely isolated runtime environment that fills the missing/incompatible gaps between the Windows and GNU/POSIX runtimes to allow GCC-built programs to run.


June 01, 2015
On 30/05/15 21:44, Iain Buclaw via Digitalmars-d wrote:

> Got any bug reports to back that up?  I should probably run the
> testsuite with optimisations turned on sometime.
>
>

The latest one (the one that stung my code) is http://bugzilla.gdcproject.org/show_bug.cgi?id=188. In general, the bugs opened by Liran are usually around that area, as he's the one who does the porting of our code to GDC.

Shachar
June 01, 2015
On 1 Jun 2015 09:25, "Shachar Shemesh via Digitalmars-d" <digitalmars-d@puremagic.com> wrote:
>
> On 30/05/15 21:44, Iain Buclaw via Digitalmars-d wrote:
>
>> Got any bug reports to back that up?  I should probably run the testsuite with optimisations turned on sometime.
>>
>>
>
> The latest one (the one that stung my code) is http://bugzilla.gdcproject.org/show_bug.cgi?id=188. In general, the bugs opened by Liran are usually around that area, as he's the one who does the porting of our code to GDC.
>
> Shachar

OK thanks, I'll try to mentally couple you two together.  I'm aware of the bugs Liran has filed.  There are just some 'very big things' going on which have me away from bug fixing currently.


June 01, 2015
On 5/30/15 2:38 PM, Shachar Shemesh wrote:
> On 30/05/15 03:57, Steven Schveighoffer wrote:

>> But I don't see how speed of compiler should sacrifice runtime
>> performance.
> Our plan was to compile with DMD during the development stage, and then
> switch to GDC for code intended for deployment. This plan simply cannot
> work if each time we try and make that switch, Liran has to spend two
> months, each time yanking a different developer from the work said
> developer needs to be doing, in order to figure out which line of source
> gets compiled incorrectly.

You're answering a question that was not asked. Obviously, compiler-generated code should match what the source says. That's way more important than speed of compilation or speed of execution.

So given that a compiler actually *works* (i.e. produces valid binaries), is speed of compilation better than speed of execution of the resulting binary? How much is too much? And there are thresholds for things that really make the difference between works and not works. For instance, a requirement for 30GB of memory is not feasible for most systems. If you have to have 30GB of memory to compile, then the effective result is that the compiler doesn't work. Similarly, if a compiler takes 2 weeks to output a binary, even if it's the fastest binary on the planet, that compiler doesn't work.

But if we are talking the difference between a compiler taking 10 minutes to produce a binary that is 20% faster than a compiler that takes 1 minute, what is the threshold of pain you are willing to accept? My preference is for the 10 minute compile time to get the fastest binary. If it's possible to switch the compiler into "fast mode" that gives me a slower binary, I might use that for development.

My original statement was obviously exaggerated, I would not put up with days-long compile times, I'd find another way to do development. But compile time is not as important to me as it is to others.

-Steve
June 02, 2015
On 01/06/15 18:40, Steven Schveighoffer wrote:
> On 5/30/15 2:38 PM, Shachar Shemesh wrote:
>
> So given that a compiler actually *works* (i.e. produces valid
> binaries), is speed of compilation better than speed of execution of the
> resulting binary?
There is no answer to that question.

During the development stage, there are many steps that have "compile" as a hard start/end barrier (i.e. you have to finish a task before the compile starts, and cannot continue it until the compile ends). During those stages, the difference between a 1 minute and a 10 minute compile is the difference between 10 bugs and 1 bug solved in a day. It is a huge difference, and one worth sacrificing any amount of run-time efficiency for, assuming this is a tradeoff you can later reverse.

Then again, when a release build is being prepared, the difference becomes moot. Even your "outrageous" figures become acceptable, so long as you can be sure that no bugs pop up in this build that did not exist in the non-optimized build.

Then again, please bear in mind that our product is somewhat atypical. Most actual products in the market are not CPU bound on algorithmic code. When that's the case, the optimization stage (beyond the most basic inlining stuff) will rarely give you a 20% overall speed increase. When your code performs a system call every 40 assembly instructions, there simply isn't enough room for the optimizer to work its magic.

One exception to the above rule is where it hurts: benchmarks typically do rely on algorithmic code to a large extent.

Shachar
June 02, 2015
On 2 June 2015 at 19:42, Shachar Shemesh via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> On 01/06/15 18:40, Steven Schveighoffer wrote:
>>
>> On 5/30/15 2:38 PM, Shachar Shemesh wrote:
>>
>> So given that a compiler actually *works* (i.e. produces valid binaries), is speed of compilation better than speed of execution of the resulting binary?
>
> There is no answer to that question.
>
> During development stage, there are many steps that have "compile" as a hard start/end barrier (i.e. - you have to finish a task before compile start, and cannot continue it until compile ends). During those stages, the difference between 1 and 10 minute compile is the difference between 1 and 10 bugs solved in a day. It is a huge difference, and one it is worth sacrificing any amount of run time efficiency to pay, assuming this is a tradeoff you can later make.
>
> Then again, when a release build is being prepared, the difference becomes moot. Even your "outrageous" figures become acceptable, so long as you can be sure that no bugs pop up in this build that did not exist in the non-optimized build.
>
> Then again, please bear in mind that our product is somewhat atypical. Most actual products in the market are not CPU bound on algorithmic code. When that's the case, the optimization stage (beyond the most basic inlining stuff) will rarely give you 20% overall speed increase. When your code performs a system call every 40 assembly instructions, there simply isn't enough room for the optimizer to work its magic.
>
> One exception to that above rule is where it hurts. Benchmarks, typically, do rely on algorithmic code to a large extent.
>
> Shachar

Quality of optimisation also translates directly into battery consumption. Even if the performance increase isn't significant to a user in terms of responsiveness, it has an effect on their battery life, which they do appreciate, even if they are unaware of it.