January 01, 2019
On Tue, Jan 01, 2019 at 11:57:26AM +0000, kinke via Digitalmars-d wrote:
> On Tuesday, 1 January 2019 at 09:44:40 UTC, Jacob Carlborg wrote:
> > On 2018-12-31 16:51, H. S. Teoh wrote:
> > > Note the order of magnitude difference in size, and that ldc2 achieves this by default, with no additional options needed.
> > > 
> > > How do you make dmd produce the same (or comparable) output?
> > 
> > For me I get comparable output with DMD and LDC
> 
> You guys are most likely comparing apples to oranges - H. S. using some distro-LDC preconfigured to link against shared druntime/Phobos, while LDC usually defaults to the static libs.

Aha!  I think you hit the nail on the head.  `ldd` revealed that the LDC binary has dynamic dependencies on libphobos2-ldc-shared.so and libdruntime-ldc-shared.so.  So that explains that.
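For anyone else trying to reproduce an apples-to-apples comparison, something along these lines should work (hedging a bit, since the exact flags depend on the compiler versions and how the distro packaged them):

```
# check what a binary actually links against
ldd ./myprog

# LDC: link druntime/Phobos statically (the usual upstream default)
ldc2 -link-defaultlib-shared=false myprog.d

# DMD: link against the shared libphobos2 instead of the static one
dmd -defaultlib=libphobos2.so myprog.d
```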

So that's one less reason to ditch dmd... but still, I'm tempted, because of overall better codegen with ldc, esp. with compute-intensive code.


T

-- 
What are you when you run out of Monet? Baroque.
January 01, 2019
On Tuesday, January 1, 2019 9:09:29 PM MST H. S. Teoh via Digitalmars-d wrote:
> On Tue, Jan 01, 2019 at 07:04:24PM -0700, Jonathan M Davis via Digitalmars-d wrote: [...]
>
> > Anyone using a package manager to install dmd is only ever going to end up with the version of Phobos that goes with that dmd. It would be highly abnormal for multiple versions of Phobos to be installed on a system.
>
> Wait, what?  Where did you get that idea from?  For at least the past half decade, Linux distros like Debian have made it possible to have multiple versions of the same shared library installed at the same time. It's pretty much impossible to manage a distro otherwise, since being unable to do so would mean that you cannot upgrade a shared library until ALL upstream code has been updated to use the new version.  Now granted, most of the time new library versions are ABI compatible with older versions, so you don't actually need to keep *every* version of the library around just because some executable somewhere needs it. And granted, splitting a library package into multiple simultaneous versions is only done when necessary.
>
> But the mechanisms for doing so have been in place since a long time ago, and any decent distro management would include setting up the proper mechanisms for multiple versions of Phobos to be installable simultaneously.  Otherwise you have the untenable situation that Phobos cannot be upgraded without breaking every D program currently installed on the system.  Of course, for this to work, the soname needs to be set properly and a sane versioning system (encoded in the soname) needs to be in place. Basically, *every* ABI incompatibility (and I do mean *every*, even those with no equivalent change in the source code) needs to be reflected by a soname change. Which is likely not being done with the current makefiles in git.  Which would explain your observations. But it is certainly *possible*, and often *necessary*, to install multiple versions of the same shared library simultaneously.

Sure, you could have separate packages for separate versions of Phobos, but from what I've seen, there's always one version of dmd with one version of Phobos, and distros don't provide multiple versions. Now, I haven't studied every distro there is, so maybe there is a distro out there that provides separate packages for old versions of Phobos, but in my experience, you're lucky if the distro has any package for dmd at all, let alone one that tries to provide a way to have multiple versions of Phobos installed.

And while yes, distros are set up in a way that you can have multiple versions of a library if multiple packages exist for them, and sometimes they do that, in the vast majority of cases, the solution is that all packages for the distro are built for a specific version of a library, and when that library is upgraded, all the packages that depend on it get rebuilt and need to be reinstalled. That's part of why it usually works so poorly to distribute closed source programs for Linux. Distros tend to be put together with the idea that all of the packages on the system are built for that system with whatever version of the libraries they're currently using, and having multiple versions of a library is the exception rather than the rule.

But regardless of what is typically done with libraries for stuff like C++ or python, unless distros are specifically providing packages for older versions of Phobos, then no one installing dmd and Phobos via a package manager is going to end up with multiple versions of Phobos installed. So, while it may be theoretically possible to have multiple versions of Phobos installed via a package manager, from what I've seen, that simply doesn't happen in practice, because the packages for dmd and Phobos aren't set up that way.

- Jonathan M Davis



January 02, 2019
On Wednesday, 2 January 2019 at 02:04:24 UTC, Jonathan M Davis wrote:
> I'm by no means against having shared libraries in general work, and I think that full dll support should exist for D on Windows, but aside from plugins, in the vast majority of cases, I think that using shared libraries is far more trouble than it's worth (especially with how difficult it is to maintain ABI compatibility with D libraries).

You obviously haven't used shared libraries that much then -- that is, shared libraries that link statically to the runtime. Having multiple instances of Phobos/druntime loaded at the same time in one process has its own can of worms. I'm not surprised at all that, in general, people don't even use shared libraries.

>> > The whole nonsense where you have to rebuild your program, because _anything_ changed in the dll is just ridiculous IMHO.
>>
>> What?! Where did you hear this nonsense from? I'm not surprised at the state of shared library support on Windows anymore.
>
> From working with dlls with C++. With dlls on Windows, your program links against a static library associated with the dynamic library, and if any of the symbols are changed, the addresses change, and your program will be unable to load the newer version of the library without being rebuilt against the new version of the static library. This is in stark contrast to *nix where the linking works in such a way that as long as the symbols still exist with the same ABI in the newly built library, they're found when the program loads, and it's not a problem. The addresses aren't hard-coded in the way that happens with dlls on Windows. dlls on Windows allow you to share code so long as the programs are all built against exactly the same version of the dll (and if they're not, then you need separate copies of the dll, and you get into dll hell), whereas with *nix, you can keep updating the shared library as much as you like without changing the executable as long as the API and ABI of the existing symbols don't change.
>
> - Jonathan M Davis


On Wednesday, 2 January 2019 at 04:09:29 UTC, H. S. Teoh wrote:
> On Tue, Jan 01, 2019 at 07:04:24PM -0700, Jonathan M Davis via Digitalmars-d wrote: [...]
>> From working with dlls with C++. With dlls on Windows, your program links against a static library associated with the dynamic library, and if any of the symbols are changed, the addresses change, and your program will be unable to load the newer version of the library without being rebuilt against the new version of the static library.
>
> Wow. That makes me glad I'm not programming on Windows...

Mother of god, these two comments. RIP D on Windows; there's clearly only one competent Windows user on the D development team. It's too bad most of his work is focused on maintaining the VS plugin, but from the looks of it, if he didn't, no one would. Everyone else is obviously incompetent.

If it worked the way you think it does (it doesn't), then every Windows update would literally break EVERY SINGLE executable file on the face of the earth. They would all need to be recompiled as the system DLLs are updated. You can only link to the system libraries dynamically, and virtually every executable uses them. Yet somehow Windows is able to maintain backwards compatibility with old executable files better than Linux does. The more worrying thing is that, even though you admit you barely use Windows, rather than spending the five minutes it would take to google this yourself, you keep spreading misinformation that might have been true back in the DOS days 40 years ago. God damn.
January 02, 2019
On Wednesday, 2 January 2019 at 21:04:25 UTC, Rubn wrote:
> Mother of god, these two comments. RIP D on Windows; there's clearly only one competent Windows user on the D development team. It's too bad most of his work is focused on maintaining the VS plugin, but from the looks of it, if he didn't, no one would.

Some of us would (Manu most notably). VS is simply that good.

Alex
January 02, 2019
On Sat, Dec 29, 2018 at 8:00 AM Adam D. Ruppe via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
> On Saturday, 29 December 2018 at 15:34:19 UTC, H. S. Teoh wrote:
> > Yeah no kidding, recently I rewrote a whole bunch of code to get *rid* of dependency on std.regex because it was too slow, and project compilation time improved from about 7+ seconds
>
> Ditto. (Basically). Rewriting the uri parser in cgi.d brought its build time from about 3 seconds down to 0.6. Which still feels slow, especially next to my minigui.d, which can do 0.3 since it broke off from Phobos entirely! (It was 2.5 seconds before).

I ran into this the other day: https://github.com/dlang/druntime/pull/2398#issuecomment-445690050

That is `max`; can you imagine a theoretically simpler function? Phobos is chaos!
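For context, a deliberately minimal generic max is just a few lines. This is only a sketch, not the actual Phobos implementation, which also has to handle mixed integer/floating-point types, differing signedness, user-defined types, and so on:

```d
// Minimal sketch of max -- not the real std.algorithm.comparison.max,
// which has to cope with mixed types, signedness, NaN, etc.
T max(T)(T a, T b)
{
    return a > b ? a : b;
}

unittest
{
    assert(max(2, 3) == 3);
    assert(max(2.5, 1.5) == 2.5);
}
```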
January 03, 2019
On Tue, 01 Jan 2019 19:04:24 -0700, Jonathan M Davis wrote:

> From working with dlls with C++. With dlls on Windows, your program links against a static library associated with the dynamic library, and if any of the symbols are changed, the addresses change, and your program will be unable to load the newer version of the library without being rebuilt against the new version of the static library.

That's not necessarily true; Windows supports "implicit linking" and "explicit linking"; for implicit linking you do need to statically link against an import library, but for explicit linking you don't even need to know the DLL's name until runtime.

With explicit linking you load the library by calling LoadLibrary/LoadLibraryEx, then call GetProcAddress with the name of your desired function to get the function pointer. If you watch the filesystem for the DLL to change, you could live-update by reloading the DLL (which you typically wouldn't do outside debugging or maybe if offering plugin support).
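For illustration, a minimal D sketch of that explicit-linking flow (the DLL name and the exported function pluginInit are hypothetical):

```d
// Explicit linking on Windows from D. "plugin.dll" and "pluginInit" are
// hypothetical names; error handling is kept minimal.
version (Windows):

import core.sys.windows.windows; // LoadLibraryA, GetProcAddress, FreeLibrary
import std.stdio;

alias PluginInitFn = extern (C) int function();

void main()
{
    // No import library involved; the DLL name is only needed here, at runtime.
    HMODULE lib = LoadLibraryA("plugin.dll");
    if (lib is null)
    {
        writeln("could not load plugin.dll");
        return;
    }
    scope (exit) FreeLibrary(lib);

    // Resolve the exported function by name and call through the pointer.
    auto fn = cast(PluginInitFn) GetProcAddress(lib, "pluginInit");
    if (fn is null)
    {
        writeln("pluginInit not found");
        return;
    }
    writeln("pluginInit returned ", fn());
}
```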

Most people just do implicit linking because it's less work. Any DLL can be loaded in both ways, though if there's a DllMain there may be problems if the library author doesn't support both methods; for implicit linking, DllMain is run before the program entry point, but for explicit linking it's called by LoadLibrary in the context of the thread that calls it.

--Ryan
January 03, 2019
On Thursday, January 3, 2019 4:52:56 AM MST rjframe via Digitalmars-d wrote:
> On Tue, 01 Jan 2019 19:04:24 -0700, Jonathan M Davis wrote:
> > From working with dlls with C++. With dlls on Windows, your program links against a static library associated with the dynamic library, and if any of the symbols are changed, the addresses change, and your program will be unable to load the newer version of the library without being rebuilt against the new version of the static library.
>
> That's not necessarily true; Windows supports "implicit linking" and "explicit linking"; for implicit linking you do need to statically link against an import library, but for explicit linking you don't even need to know the DLL's name until runtime.
>
> With explicit linking you load the library by calling LoadLibrary/ LoadLibraryEx, then call GetProcAddress with the name of your desired function to get the function pointer. If you watch the filesystem for the DLL to change, you could live-update by reloading the DLL (which you typically wouldn't do outside debugging or maybe if offering plugin support).
>
> Most people just do implicit linking because it's less work. Any DLL can be loaded in both ways, though if there's a DllMain there may be problems if the library author doesn't support both methods; for implicit linking, DllMain is run before the program entry point, but for explicit linking its called by LoadLibrary in the context of the thread that calls it.

*nix has the same distinction. It's a fundamentally different situation from linking your executable against the library. You're really dynamically loading rather than dynamically linking (though unfortunately, the terminology for the two is not particularly distinct, and they're often referred to the same way even though they're completely different). Loading libraries that way is what you do when you do stuff like plugins, because those aren't known when you build your program. But it makes a lot less sense as an alternative to linking your program against the library if you don't actually need to load the library like that. The COFF vs OMF mess makes it slightly more attractive on Windows (at least with D, where dmd uses OMF by default, unlike most of the C/C++ world at this point), because then it doesn't matter whether COFF or OMF was used (e.g. IIRC, Derelict is designed to be loaded that way for that reason), but in general, it's an unnecessarily complicated way to use a library. And if Windows' eccentricities make it more desirable than it is on *nix systems, then that's just yet another black mark against how Windows does dynamic linking IMHO.
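To make the distinction concrete, here is a minimal sketch of dynamic *loading* on *nix in D, i.e. the counterpart of LoadLibrary/GetProcAddress rather than of ordinary dynamic linking (libplugin.so and pluginInit are hypothetical names):

```d
// Dynamic loading on Posix via dlopen/dlsym. Library and symbol names are
// hypothetical; on some systems you may also need to link libdl (-L-ldl).
import core.sys.posix.dlfcn; // dlopen, dlsym, dlclose, dlerror, RTLD_LAZY
import std.stdio;
import std.string : fromStringz;

alias PluginInitFn = extern (C) int function();

void main()
{
    // Load the library at runtime -- nothing about it is known at link time.
    void* lib = dlopen("libplugin.so", RTLD_LAZY);
    if (lib is null)
    {
        writeln("dlopen failed: ", dlerror().fromStringz);
        return;
    }
    scope (exit) dlclose(lib);

    // Look the symbol up by name and cast it to the expected signature.
    auto fn = cast(PluginInitFn) dlsym(lib, "pluginInit");
    if (fn is null)
    {
        writeln("dlsym failed: ", dlerror().fromStringz);
        return;
    }
    writeln("pluginInit returned ", fn());
}
```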

- Jonathan M Davis



January 03, 2019
On Thu, Jan 3, 2019 at 5:50 AM Jonathan M Davis via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
>
> On Thursday, January 3, 2019 4:52:56 AM MST rjframe via Digitalmars-d wrote:
> > On Tue, 01 Jan 2019 19:04:24 -0700, Jonathan M Davis wrote:
> > > From working with dlls with C++. With dlls on Windows, your program
> > > links
> > > against a static library associated with the dynamic library, and if any
> > > of the symbols are changed, the addresses change, and your program will
> > > be unable to load the newer version of the library without being rebuilt
> > > against the new version of the static library.
> >
> > That's not necessarily true; Windows supports "implicit linking" and "explicit linking"; for implicit linking you do need to statically link against an import library, but for explicit linking you don't even need to know the DLL's name until runtime.
> >
> > With explicit linking you load the library by calling LoadLibrary/ LoadLibraryEx, then call GetProcAddress with the name of your desired function to get the function pointer. If you watch the filesystem for the DLL to change, you could live-update by reloading the DLL (which you typically wouldn't do outside debugging or maybe if offering plugin support).
> >
> > Most people just do implicit linking because it's less work. Any DLL can be loaded in both ways, though if there's a DllMain there may be problems if the library author doesn't support both methods; for implicit linking, DllMain is run before the program entry point, but for explicit linking its called by LoadLibrary in the context of the thread that calls it.
>
> *nix has the same distinction. It's a fundamentally different situation from linking your executable against the library. You're really dynamically loading rather than dynamically linking (though unfortunately, the terminology for the two is not particularly distinct, and they're often referred to the same way even though they're completely different). Loading libraries that way is what you do when you do stuff like plugins, because those aren't known when you build your program. But it makes a lot less sense as an alternative to linking your program against the library if you don't actually need to load the library like that. The COFF vs OMF mess on Windows makes it make slightly more sense on Windows (at least with D, where dmd uses OMF by default, unlike most of the C/C++ world at this point), because then it doesn't matter whether COFF or OMF was used (e.g. IIRC, Derelict is designed to be loaded that way for that reason), but in general, it's an unnecessarily complicated way to use a library. And if Windows' eccentricities make it more desirable than it is on *nix systems, then that's just yet another black mark against how Windows does dynamic linking IMHO.

Sorry, I don't think you know what you're talking about WRT Windows DLLs and import libs. Linking a Windows import lib is the same as `-lSharedLib.so`; it links(/generates) a small stub at entry that loads the DLL, and resolves the symbols in the import table to local function pointers. You certainly do NOT need to rebuild your exe if the DLL is updated, assuming no breaking changes to the ABI. The import lib includes little stubs for the import functions that call through the resolved pointer into the DLL. It's nothing more than a convenience, and it's also possible to *generate* an import lib from a .dll, which is effectively identical to linking against a .so.
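For what it's worth, a minimal sketch of what implicit linking looks like from D (file and symbol names are hypothetical, and the exact build commands and how the import library gets produced depend on the toolchain):

```d
// plugin.d -- built as a DLL, e.g. roughly: dmd -shared -ofplugin.dll plugin.d
// `export` puts pluginInit into the DLL's export table; the import library
// for the DLL only contains a small by-name stub for it.
export extern (C) int pluginInit()
{
    return 42;
}
```

```d
// app.d -- linked against the import library, e.g. roughly: dmd app.d plugin.lib
// The executable's import table records the DLL and symbol *names*; the loader
// resolves them at startup, so rebuilding plugin.dll with a compatible ABI does
// not require rebuilding the exe.
extern (C) int pluginInit();

void main()
{
    import std.stdio : writeln;
    writeln(pluginInit());
}
```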
January 03, 2019
On 01.01.19 07:21, Walter Bright wrote:
> On 12/31/2018 2:28 PM, Timon Gehr wrote:
>> Welcome to D, where 'enum' means 'const', 'const' means 'readonly', 'lazy' means 'by name', 'assert' means 'assume' and 'real' does not mean 'real' (in fact, I really like the 'ireal' and 'creal' keywords, pity they are being phased out). :)
> 
> D's "by name" are the template alias parameters.

I think alias parameters are not "by name"; they are something like "by symbol". ("By name" is a bit of a confusing name; the argument expression need not really have a "name".)

The standard PL jargon is this:

https://en.wikipedia.org/wiki/Evaluation_strategy#Call_by_name
https://en.wikipedia.org/wiki/Lazy_evaluation
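A tiny sketch of the difference, with hypothetical names: `lazy` re-evaluates the argument expression on each use (call by name, more or less), while an alias parameter binds to the symbol itself:

```d
import std.stdio;

int calls;
int sideEffect() { ++calls; return 1; }

// `lazy` is the call-by-name-ish feature: the argument expression is wrapped
// in a hidden delegate and re-evaluated every time `x` is read.
int twice(lazy int x) { return x + x; }

// An alias parameter binds to a symbol, not to an expression's value.
template NameOf(alias sym)
{
    enum NameOf = __traits(identifier, sym);
}

void main()
{
    writeln(twice(sideEffect())); // 2 -- sideEffect() ran twice
    writeln(calls);               // 2 -- "by name" evaluation
    writeln(NameOf!sideEffect);   // "sideEffect" -- the symbol itself
}
```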
January 03, 2019
On Thursday, January 3, 2019 12:19:59 PM MST Manu via Digitalmars-d wrote:
> On Thu, Jan 3, 2019 at 5:50 AM Jonathan M Davis via Digitalmars-d <digitalmars-d@puremagic.com> wrote:
> > On Thursday, January 3, 2019 4:52:56 AM MST rjframe via Digitalmars-d wrote:
> > > On Tue, 01 Jan 2019 19:04:24 -0700, Jonathan M Davis wrote:
> > > > From working with dlls with C++. With dlls on Windows, your program links against a static library associated with the dynamic library, and if any of the symbols are changed, the addresses change, and your program will be unable to load the newer version of the library without being rebuilt against the new version of the static library.
> > >
> > > That's not necessarily true; Windows supports "implicit linking" and "explicit linking"; for implicit linking you do need to statically link against an import library, but for explicit linking you don't even need to know the DLL's name until runtime.
> > >
> > > With explicit linking you load the library by calling LoadLibrary/LoadLibraryEx, then call GetProcAddress with the name of your desired function to get the function pointer. If you watch the filesystem for the DLL to change, you could live-update by reloading the DLL (which you typically wouldn't do outside debugging or maybe if offering plugin support).
> > >
> > > Most people just do implicit linking because it's less work. Any DLL can be loaded in both ways, though if there's a DllMain there may be problems if the library author doesn't support both methods; for implicit linking, DllMain is run before the program entry point, but for explicit linking it's called by LoadLibrary in the context of the thread that calls it.
> >
> > *nix has the same distinction. It's a fundamentally different situation from linking your executable against the library. You're really dynamically loading rather than dynamically linking (though unfortunately, the terminology for the two is not particularly distinct, and they're often referred to the same way even though they're completely different). Loading libraries that way is what you do when you do stuff like plugins, because those aren't known when you build your program. But it makes a lot less sense as an alternative to linking your program against the library if you don't actually need to load the library like that. The COFF vs OMF mess on Windows makes it make slightly more sense on Windows (at least with D, where dmd uses OMF by default, unlike most of the C/C++ world at this point), because then it doesn't matter whether COFF or OMF was used (e.g. IIRC, Derelict is designed to be loaded that way for that reason), but in general, it's an unnecessarily complicated way to use a library. And if Windows' eccentricities make it more desirable than it is on *nix systems, then that's just yet another black mark against how Windows does dynamic linking IMHO.
>
> Sorry, I don't think you know what you're talking about WRT Windows DLLs and import libs. Linking a Windows import lib is the same as `-lSharedLib.so`; it links(/generates) a small stub at entry that loads the DLL, and resolves the symbols in the import table to local function pointers. You certainly do NOT need to rebuild your exe if the DLL is updated, assuming no breaking changes to the ABI. The import lib includes little stubs for the import functions that call through the resolved pointer into the DLL. It's nothing more than a convenience, and it's also possible to *generate* an import lib from a .dll, which is effectively identical to linking against a .so.

From the last time I worked with Windows dlls, I remember quite distinctly that doing anything like adding a symbol to the library meant that it was incompatible with executables previously built with it (which is not true for shared libraries on *nix - they only break if the ABI for the existing symbols changes). So, if that's not the case, I don't know what we were doing wrong, but I have an extremely sour taste in my mouth from dealing with Windows dlls. It was my experience that Linux stuff generally just worked, whereas we kept having to deal with junk on Windows to make them work (e.g. adding special attributes to functions just so that they would be exported), and I absolutely hated it. I have nothing good to say about dlls on Windows. It's quite possible that some of it isn't as bad if you know more about it than the team I was working on did, but it was one part of why making our libraries cross-platform was not at all fun.

- Jonathan M Davis