January 26, 2016
On Tuesday, 26 January 2016 at 20:38:16 UTC, tsbockman wrote:
> It's not like you could just reallocate all the effort that goes into Phobos towards the compiler and stuff.

My impression is that the majority of the contributors to Phobos are capable D programmers.

DMD is now implemented in D rather than C++, so with some refactoring and documentation I think a lot more programmers would be qualified to hack on the compiler.


January 26, 2016
On Tuesday, 26 January 2016 at 20:43:29 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 26 January 2016 at 20:38:16 UTC, tsbockman wrote:
>> It's not like you could just reallocate all the effort that goes into Phobos towards the compiler and stuff.
>
> My impression is that the majority of the contributors to Phobos are capable D programmers.
>
> DMD is now implemented in D rather than C++, so with some refactoring and documentation I think a lot more programmers would be qualified to hack on the compiler.

The language in which DMD is written has never been the main barrier to working on it. The real issue is that DMD is a huge, poorly documented code base in which most of the various components are tightly coupled to everything else. This makes it both intimidating and time consuming to get started hacking on it.

In contrast, Phobos (although still very large) is fairly well documented, and has much less coupling between most of its components. Moreover, what coupling there is, is easily understood because the average (experienced) D programmer already knows his way around the standard library, from the outside.

Also, you skipped past the "uninterested" part - this is a volunteer project, remember?
January 26, 2016
On Tuesday, 26 January 2016 at 21:03:01 UTC, tsbockman wrote:
> Also, you skipped past the "uninterested" part - this is a volunteer project, remember?

I didn't think it was a relevant argument, as you can still write libraries for distribution. Keep in mind that the standard library has to be maintained, and APIs cannot easily be redesigned because of backwards compatibility.

Even though C/C++ have small standard libraries, they provide a depressing amount of low-quality functionality that one should avoid. But it is kept in for backwards compatibility, and sometimes even updated and extended...

That's not a good thing.

January 26, 2016
On Tuesday, 26 January 2016 at 21:47:41 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 26 January 2016 at 21:03:01 UTC, tsbockman wrote:
>> Also, you skipped past the "uninterested" part - this is a volunteer project, remember?
>
> I didn't think it was a relevant argument, as you can still write libraries for distribution. Keep in mind that the standard library has to be maintained, and APIs cannot easily be redesigned because of backwards compatibility.
>
> Even though C/C++ have small standard libraries, they provide a depressing amount of low-quality functionality that one should avoid. But it is kept in for backwards compatibility, and sometimes even updated and extended...
>
> That's not a good thing.

There are certainly disadvantages to the standard library model of distribution and maintenance.

On the other hand:

1) The prospect of getting something into the standard library is a huge motivator for (at least some) potential contributors.

Why? Because building a library that no one knows about/trusts is wasted effort. Getting something into `std` is among the most effective forms of marketing available, and requires little non-programming-related skill or effort on the part of the contributor.

2) Standard libraries don't enforce backwards compatibility (and high code quality in general) just for the sake of bureaucracy - they do so because these are highly desirable characteristics for basic infrastructure. People shouldn't have to rewrite their entire stack every 6 months just because someone thought of a better API for some low-level component.

Making it through D's formal review process typically raises code quality quite a lot, and the knowledge that backwards compatibility is a high priority makes outsiders much more likely to invest in actually using a library module.

In short, while you make some valid points, your analysis seems very lopsided; it completely glosses over all of the positives associated with the status quo.
January 26, 2016
On Tuesday, 26 January 2016 at 22:33:32 UTC, tsbockman wrote:
> 1) The prospect of getting something into the standard library is a huge motivator for (at least some) potential contributors.

I am not sure if that is the right motivation. Sounds like a recipe for bloat. Good libraries evolve from being used in real applications. Many applications.

> characteristics for basic infrastructure. People shouldn't have to rewrite their entire stack every 6 months just because someone thought of a better API for some low-level component.

Then don't use libraries from unreliable teams.

> Making it through D's formal review process typically raises code quality quite a lot, and the knowledge that backwards compatibility is a high priority makes outsiders much more likely to invest in actually using a library module.

Code quality is one thing, but if it has not been used in many applications, how can you then know if the abstraction is particularly useful?

There is nothing wrong with having a set of recommended libraries, e.g. a DSP library with FFT. But having things like FFT in the standard library is just crap. Even Apple does not do that; they have a separate library, Accelerate, for such things. There is no way you can have the same interface for FFT across platforms: the structure of the data is different, the accuracy is different, all for maximum performance.
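
To make the point concrete, here is a purely illustrative sketch (the module, struct, and function names are hypothetical, not real bindings): even the signature of a forward FFT ends up platform-specific, because each vendor library wants its own data layout.

```d
module dsp.fft; // hypothetical module

version (OSX)
{
    // Accelerate/vDSP style: split-complex layout with separate real and
    // imaginary arrays, and lengths restricted to powers of two.
    struct SplitComplex { float[] re; float[] im; }

    void forwardFFT(ref SplitComplex data)
    {
        assert((data.re.length & (data.re.length - 1)) == 0,
               "power-of-two length required");
        // would forward to the vendor routine here
    }
}
else
{
    // FFTW style: interleaved complex samples plus a reusable plan object.
    struct Complex { float re; float im; }
    struct Plan { size_t n; }

    Plan makePlan(size_t n) { return Plan(n); }

    void forwardFFT(Plan p, Complex[] data)
    {
        assert(data.length == p.n);
        // would execute the plan against the vendor routine here
    }
}
```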

In general the standard library should contain just the most basic things; even file system support is tricky for a system-level programming language. For instance, on some cloud platforms you don't get to read/write parts of a file. You do it as one big atomic write/read.

January 26, 2016
On Tuesday, 26 January 2016 at 22:48:23 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 26 January 2016 at 22:33:32 UTC, tsbockman wrote:
>> characteristics for basic infrastructure. People shouldn't have to rewrite their entire stack every 6 months just because someone thought of a better API for some low-level component.
>
> Then don't use libraries from unreliable teams.

Just using the standard library whenever possible is a simple and efficient way of accomplishing this - if the standard library actually has anything in it...

>> Making it through D's formal review process typically raises code quality quite a lot, and the knowledge that backwards compatibility is a high priority makes outsiders much more likely to invest in actually using a library module.
>
> Code quality is one thing, but if it has not been used in many applications, how can you then know if the abstraction is particularly useful?

This is why requiring modules to spend some time on DUB and/or in `std.experimental` before freezing the API is important.
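
As a concrete (and partly hypothetical - the package name and version in the comment are just placeholders) sketch of what that incubation looks like: a candidate module is first exercised as an ordinary DUB dependency, or under `std.experimental` as the allocator package was, before its API is frozen.

```d
// In a project's dub.sdl, the candidate library is pulled in like any
// other package (name and version are illustrative):
//
//     dependency "checkedint" version="~>0.5.0"
//
// Or it incubates under std.experimental, as the allocator package did:
import std.experimental.allocator;

void main()
{
    // Exercise the not-yet-frozen API in real code and report problems
    // before the module is promoted into std proper.
    int[] buf = theAllocator.makeArray!int(16);
    scope (exit) theAllocator.dispose(buf);

    buf[0] = 42;
    assert(buf[0] == 42);
}
```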

> In general the standard library should contain just the most basic things; even file system support is tricky for a system-level programming language. For instance, on some cloud platforms you don't get to read/write parts of a file. You do it as one big atomic write/read.

Targeting 100% generality with APIs is pretty much always a bad idea.

Standard library modules should endeavor to meet the needs of at least, say, 80% of potential users; they're not supposed to eliminate the need for specialized third-party libraries entirely. This is OK, because if you don't use a module, the compiler won't include it in the final executable.
January 26, 2016
On Tuesday, 26 January 2016 at 23:04:57 UTC, tsbockman wrote:
> This is why requiring modules to spend some time on DUB and/or in `std.experimental` before freezing the API is important.

Well, there aren't enough D applications written to ensure the usefulness of the API. Just take a look at widely adopted frameworks: they are "the one true thing" for a few years, with hundreds or thousands of applications using them. However, as applications grow, problems emerge. What happens? The entire framework is scrapped and replaced with something else.

> Targeting 100% generality with APIs is pretty much always a bad idea.

Making a system-level programming library based on what current PC operating systems offer is also a bad idea. On Apple systems you are supposed to no longer use paths, but to switch to URLs for files. OK, you can handle that by requiring URLs on all platforms, but what if you designed the API 10 years ago?

Operating systems change, hardware changes. System-level programming languages shouldn't provide an abstract machine the way the JVM does.

I'm not at all convinced that D isn't tied too heavily to x86/POSIX/Windows. It isn't obvious that future systems will bother with POSIX.


January 26, 2016
Or let me put it this way. If the standard library requires POSIX, then it isn't really a standard library, but a POSIX abstraction library...
January 27, 2016
On Tuesday, 26 January 2016 at 22:48:23 UTC, Ola Fosheim Grøstad wrote:
> On Tuesday, 26 January 2016 at 22:33:32 UTC, tsbockman wrote:
>> 1) The prospect of getting something into the standard library is a huge motivator for (at least some) potential contributors.
>
> I am not sure if that is the right motivation. Sounds like a recipe for bloat. Good libraries evolve from being used in real applications. Many applications.

sayeth a low-level guy (if I understand correctly), which will certainly give you a distinct perspective on what you would like to see in the standard library, and yet this may not be the right thing for the language as a whole.

fwiw, people that do use D on a serious scale have remarked that the richness of the standard library (even as it stands today) was a major advantage - in bioinformatics, at a London hedge fund, and I think AdRoll.
January 27, 2016
On Wednesday, 27 January 2016 at 06:17:44 UTC, Laeeth Isharc wrote:
> On Tuesday, 26 January 2016 at 22:48:23 UTC, Ola Fosheim Grøstad wrote:

>> I am not sure if that is the right motivation. Sounds like a recipe for bloat. Good libraries evolve from being used in real applications. Many applications.
>
> sayeth a low-level guy (if I understand correctly), which will certainly give you a distinct perspective on what you would like to see in the standard library, and yet this may not be the right thing for the language as a whole.

I am both low-level and high-level, but D's primary advantage is that it allows low-level programming.

> fwiw, people that do use D on a serious scale have remarked that the richness of the standard library (even as it stands today) was a major advantage - in bioinformatics, at a London hedge fund, and I think AdRoll.

Do you consider Angular to be low-level? It was used in hundreds if not thousands of applications, but was considered inadequate and scrapped in favour of Angular 2. This is the typical pattern for libraries and frameworks.