February 19, 2017
On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
> 4. I have heard good reports of D's metaprogramming capabilities (ironically enough, primarily from a thread on the Rust user group), and coming from a Common Lisp (and some Racket) background, I am deeply interested in this aspect. Are D macros as powerful as Lisp macros? Are they semantically similar (for instance, I found Rust's macros are quite similar to Racket's)?

I was a Scheme/Common Lisp user for quite a while before moving to D. Lisp macros are more powerful (there's not much you can't do with them), but on the other hand, unless you stick with simple use cases, it can be hard to get Lisp macros right. You've got the whole defmacro vs hygienic debate. Also, the rule of thumb is to avoid macros unless you can't do it with a function.

I was never into heavy metaprogramming with Scheme or Common Lisp. The simplicity of D's compile-time capabilities means I do more metaprogramming in D, and I actually push a lot of stuff from runtime to compile time, which I wouldn't have done with Common Lisp.
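For example, a little sketch like this (made up just for illustration) builds a lookup table with an ordinary function and then forces it to run at compile time:

import std.stdio;

// an ordinary function; nothing macro-like about it
int[] squares(int n)
{
    int[] result;
    foreach (i; 0 .. n)
        result ~= i * i;
    return result;
}

// `enum` forces the call to be evaluated at compile time (CTFE),
// so the table is baked into the binary
enum table = squares(10);

void main()
{
    writeln(table); // [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
}

In Common Lisp I would have reached for a macro or just done it at runtime; here it's the same code either way.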
February 20, 2017
On Sunday, 19 February 2017 at 15:22:50 UTC, bachmeier wrote:
> On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
>> 4. I have heard good reports of D's metaprogramming capabilities (ironically enough, primarily from a thread on the Rust user group), and coming from a Common Lisp (and some Racket) background, I am deeply interested in this aspect. Are D macros as powerful as Lisp macros? Are they semantically similar (for instance, I found Rust's macros are quite similar to Racket's)?
>
> I was a Scheme/Common Lisp user for quite a while before moving to D. Lisp macros are more powerful (there's not much you can't do with them), but on the other hand, unless you stick with simple use cases, it can be hard to get Lisp macros right. You've got the whole defmacro vs hygienic debate. Also, the rule of thumb is to avoid macros unless you can't do it with a function.

Agreed.

> I was never into heavy metaprogramming with Scheme or Common Lisp. The simplicity of D's compile-time capabilities means I do more metaprogramming in D, and I actually push a lot of stuff from runtime to compile time, which I wouldn't have done with Common Lisp.

That's very encouraging to hear! :-)

February 20, 2017
On Sunday, 19 February 2017 at 12:40:10 UTC, Guillaume Piolat wrote:
> On Saturday, 18 February 2017 at 20:15:55 UTC, timmyjose wrote:
>> My rudimentary knowledge of the D ecosystem tells me that there is a GC in D, but that can be turned off. Is this correct? Also, some threads online mention that if we do turn off GC, some of the core std libraries may not fully work. Is this presumption also correct?
>
> The topic is complex, there are a lot of mitigation techniques.
>
> A - for most real-time programs, you may want to keep the GC heap under 200kb. A combination of GC profiling, using value types, and manual memory management can get you there. @nogc also helps.
>
> B - some real-time threads don't like to be paused (audio). You can unregister them from the runtime, which means the GC won't stop them during a collection. On the other hand, such a thread won't be able to "own" collectable things.
>
> C - finally, you can either disable the runtime/GC altogether, or not link with it. This requires the most effort, but comes with a guarantee of having no GC in the whole application. In most cases it's _not worth it_.
>
> The hard part about GC is understanding reachability, but unless you are doing something very systems-y, this can be safely ignored.
>
> You will be just fine.
>
>> Secondly, how stable is the language and how fast is the pace of development on D?
>
> The language doesn't break nowadays; it's very stable apart from the dreaded regressions in the DMD backend.
>
> http://erdani.com/d/downloads.daily.png
>
>> 2. I am also curious as to what would be the best path for a complete beginner to D to learn it effectively?
>
> "Learning D" book seems fitting.
>
>> 3. Are there some small-scale Open Source projects that you would recommend to peruse to get a feel for and learn idiomatic D?
>
> I run https://p0nce.github.io/d-idioms/ which helps you get up to speed with the weird idiosyncrasies fast. But the above book is way better.

Thanks for your response! I actually started out with "Programming in D", but found it more oriented towards people new to programming. I am currently working through "Learning D", and it's been a real pleasure so far! Thanks for the recommendation.
February 20, 2017
On Sunday, 19 February 2017 at 03:17:08 UTC, Seb wrote:
> On Saturday, 18 February 2017 at 21:09:20 UTC, ag0aep6g wrote:
>>> 5. Supposing I devote the time and energy and get up to speed on D, would the core language team be welcoming if I feel like I can contribute?
>>
>> Absolutely. Anyone is welcome to contribute. D is very much a volunteer effort. Also don't hesitate to point out (or even fix) any stumbling blocks you may encounter when starting out.
>
> I can't add more to this than two pointers:
>
> https://wiki.dlang.org/Starting_as_a_Contributor
> https://wiki.dlang.org/Get_involved

Thanks, bookmarked! Hopefully I will be able to contribute some day.
February 20, 2017
On Sunday, 19 February 2017 at 12:31:51 UTC, ag0aep6g wrote:
> On 02/19/2017 12:51 PM, timmyjose wrote:
>> a). So the GC is part of the runtime even if we specify @nogc
>
> Yup. @nogc is per function, not per program. Other functions are allowed to use the GC.
>
>> b). Do we manually trigger the GC (like Java's System.gc(), even though
>> that's not guaranteed), or does it get triggered automatically when we
>> invoke some operations on heap allocated data and/or when the data go
>> out of scope?
>
> You can trigger a collection manually with GC.collect [1]. Otherwise, the GC can do a collection when you make a GC-managed allocation. If you don't make GC allocations, e.g. because you're in @nogc code and the compiler doesn't allow you to, then no GC collections will happen.
>
>> c). Does Rust have analogues of "new" and "delete", or does it use
>> something like smart pointers by default?
>
> D, not Rust, right?

Yes, indeed!

> D uses `new` for GC allocations. `new` returns a raw pointer or a dynamic array (pointer bundled with length for bounds checking).
>
> There is `delete`, but it's shunned/unfashionable. Maybe it's going to be deprecated, I don't know. You're supposed to let the GC manage deletion, or use `destroy` [2] and GC.free [3] if you have to do it manually.
>
> Of course, you can also call C functions like `malloc` and `free` and do manual memory management.

I actually tried out some of the sample programs on this topic in "Learning D", and it was quite smooth indeed. As ketmar mentioned in the other thread, maybe I could use this as a backup strategy till I get comfortable with idiomatic D.
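For the record, this is roughly the kind of toy snippet I played with (my own invention, not from the book), mixing GC allocation with the manual destroy/GC.free and malloc/free routes:

import core.memory : GC;
import core.stdc.stdlib : free, malloc;
import std.stdio;

class Widget
{
    ~this() { writeln("Widget destructor ran"); }
}

void main()
{
    // GC allocation; normally I'd just let the GC clean this up
    auto w = new Widget;

    // manual teardown, if you really want it: run the destructor, then free
    destroy(w);
    GC.free(cast(void*) w);

    // plain C-style manual memory management also works
    auto p = cast(int*) malloc(int.sizeof);
    *p = 42;
    writeln(*p);
    free(p);
}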

A few things I already like so far about D (just on chapter 2 of the book!):

1). T* x, y applies the pointer type to both variables (C's behaviour here has been a bane for me in the past).

2). The cast(T) syntax.

3). The module system appears pretty logical to me so far.

4). The creation of dynamic arrays is quite smooth and intuitive for me, and much easier than in C or C++.

5). I love array slices!

6). Properties!

7). The array initialisation syntax (especially for rectangular arrays) - much more logical than in C++/Java.

8). The use of %s for formatted output (just like Common Lisp's ~a). Very convenient.
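As a quick illustration of 5) and 8) together (just a throwaway snippet of mine, not from the book):

import std.stdio;

void main()
{
    int[] nums = [10, 20, 30, 40, 50];

    // 5) a slice is a view into the same data, no copying
    int[] middle = nums[1 .. 4];
    middle[0] = 99;    // visible through nums as well

    // 8) %s formats just about anything sensibly, much like ~a
    writefln("nums = %s, middle = %s", nums, middle);
    // prints: nums = [10, 99, 30, 40, 50], middle = [99, 30, 40]
}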


Things I don't like so much:

1). The std.range.iota function is quite nice to use, but the naming seems a bit bizarre.

2). The automatic conversion rules are nice for avoiding verbose code, but it looks like they might bite one just like in C++.

3). Not so much a fan of "auto", but it does have its uses, of course.

4). I'm still a bit confused by the order of dimensions in rectangular arrays:

Suppose I have a simple 2 x 3 array like so:

import std.stdio;
import std.range: iota;

void main() {
	// a 2 x 3 array
	int [3][2] arr;

	foreach (i; iota(0, 2)) {
		foreach(j; iota(0, 3)) {
			arr[i][j] = i+j;
		}
	}

	writefln("second element in first row = %s", arr[0][1]);
	writefln("third element in second row = %s", arr[1][2]);

	writeln(arr);
}

My confusion is this - the declaration of the array is arr [last-dimension]...[first-dimension], but the usage is arr[first-dimension]...[last-dimension]. Am I missing something here?


> Regarding smart pointers, I'm not up to speed. There's std.typecons.Unique [4], but I don't know how it compares to other languages.
>
>
> [1] https://dlang.org/phobos/core_memory.html#.GC.collect
> [2] https://dlang.org/phobos/object.html#.destroy
> [3] https://dlang.org/phobos/core_memory.html#.GC.free


February 20, 2017
timmyjose wrote:

> Suppose I have a simple 2 x 3 array like so:
> import std.stdio;
> import std.range: iota;
> void main() {
>     // a 2 x 3 array
>     int [3][2] arr;
>     foreach (i; iota(0, 2)) {
>         foreach(j; iota(0, 3)) {
>             arr[i][j] = i+j;
>         }
>     }
>     writefln("second element in first row = %s", arr[0][1]);
>     writefln("third element in second row = %s", arr[1][2]);
>     writeln(arr);
> }
> My confusion is this - the declaration of the array is arr [last-dimension]...[first-dimension], but the usage is arr[first-dimension]...[last-dimension]. Am I missing something here?

yes. it is quite easy to remember if you just read the declaration from left to right:
 int[3][2] arr
becomes:
 (int[3])[2]
i.e. "array of two (int[3]) items". no complicated decoding rules.

and then accessing it is logical too: first we index the array of two items, then the `(int[3])` array.

declaration may look "reversed", but after some time i found it straightforward to read. ;-)
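a tiny sketch (invented just for illustration) that you can compile (no main needed, just `dmd -c`) to check it:

int[3][2] arr;

// indexing once peels off the outer [2], leaving the element type int[3]
static assert(is(typeof(arr[0]) == int[3]));
// indexing twice gets you down to the int
static assert(is(typeof(arr[0][0]) == int));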
February 20, 2017
On Monday, 20 February 2017 at 14:44:41 UTC, timmyjose wrote:
> My confusion is this - the declaration of the array is arr [last-dimension]...[first-dimension], but the usage is arr[first-dimension]...[last-dimension]. Am I missing something here?

I've never understood how anyone could actually like C's weird, backward way of doing arrays. It never made a lick of sense to me.

D is beautifully consistent: each index "peels off" a layer. Suppose you had a function returning a function:

import std.stdio;

void function(string) foo() {
   return (string name) { writeln("hi, ", name); };
}

foo is a zero-arg function that returns a function that takes a string parameter.

How would you call the returned function?

foo("adam")()

or

foo()("adam")

?


Of course, the answer is the second form: the first level of () calls the function `foo`, which returns the function that takes the string parameter.



Arrays are the same thing.

int[2][3] arr;

is a 3-element array of 2-element arrays of int. So, how do you get to the int[2]? You peel away a level of []:

int[2] row = arr[0]; // that peels away the [3], leaving an int[2]

int a = row[0]; // peel away the last level, leaving just int



Beautifully consistent, even if you want pointers:

int[2]*[3] arrOfPointers;

arrOfPointers[0] // type int[2]*, aka "pointer to two-element array of int"



And once you realize that opIndex can be overloaded, it makes even more sense:


For a user-defined type, arr[1][0] gets rewritten to arr.opIndex(1).opIndex(0) - bringing us back to my first example, we almost literally have a function returning a function again. Of course it reads in the other direction from the declaration!
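Here's a tiny made-up sketch (the struct names are just for illustration) showing that chaining with a user-defined type:

import std.stdio;

struct Row
{
    int[3] data;
    // each opIndex call peels off one level, just like the built-in arrays
    ref int opIndex(size_t j) { return data[j]; }
}

struct Grid
{
    Row[2] rows;
    ref Row opIndex(size_t i) { return rows[i]; }
}

void main()
{
    Grid g;
    g[1][2] = 5;      // lowered to g.opIndex(1).opIndex(2) = 5
    writeln(g[1][2]); // 5
}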
February 20, 2017
On Sunday, 19 February 2017 at 12:45:49 UTC, ketmar wrote:
> timmyjose wrote:
>> a). So the GC is part of the runtime even if we specify @nogc
> yes. GC is basically just a set of functions and some supporting data structures, compiled into druntime. @nogc doesn't turn it off; it says that the compiler must ensure, at compile time, that *your* *code* doesn't allocate. i.e. @nogc code with GC allocations won't compile at all.
>
>
>> b). Do we manually trigger the GC (like Java's System.gc(), even though that's not guaranteed), or does it get triggered automatically when we invoke some operations on heap allocated data and/or when the data go out of scope?
> GC can be invoked *only* on allocation. as long as you don't allocate GC data, GC will not be called. of course, things like array/string concatenation (and closure creation) allocate, so you'd better be careful with your code if you want to avoid GC in some critical part. or you can call `GC.disable()` to switch off automatic collections (and `GC.enable()` later, of course ;-).
>
>
>> c). Does Rust have analogues of "new" and "delete", or does it use something like smart pointers by default?
> `new`. no `delete`, tho, as it is not necessary with GC. actually, there is a `delete` thingy, but it is deprecated, and you'd better not use it unless you *really* know what you're doing and why. i.e. don't prematurely optimize your code, especially without a good understanding of D's GC.
>
>
>> Fascinating reading about the various use cases that you and others have put D to. It does give me a lot more contextual understanding now. Thank you!
> you're welcome.
>
> as for me, i have been using D exclusively for *all* my programming tasks (including writing simple shell scripts ;-) for years. and i don't want to go back to C/C++ or switch to some [new] hyped language. i have 20+ years of programming experience, and i feel that D is the best language i ever used. don't get me wrong, tho: it doesn't mean that D is the best language on the planet. what i mean is that D has the best balance of features, warts, libs and so on *for* *me*. easy C interop allows me to use all the C libraries out there; C-like syntax allows me to port C code (i did a lot of C ports, including NanoVG, NanoSVG, Tremor Vorbis decoder, Opus decoder, etc.); great metaprogramming (for a C-like language) allows me to skip writing boilerplate code; and so on. ;-)
>
> also, the dmd compiler is easily hackable. even trying to compile gcc is a PITA, for example. and dmd+druntime+phobos takes ~1.5 minutes to build on my old i3.

Very interesting reading about your experiences! I hope that I'll soon be in a position to start churning out my own pet projects as well! :-) ... One thing I've observed so far (very, very early, of course) is that D appears to be a lot more intuitive than C++, or at least more aligned with the way I find things intuitive, especially with regard to arrays and slices. The only thing I have to kind of unlearn is the "default immutability" that I picked up from Rust - this confused me a bit at first when I saw how a slice can be spawned off into a brand new array upon appending data to it (in the book "Learning D", which I find very nice so far).

Just one question about the compilers though - I read on the Wiki that there are three main compiler distros - dmd, ldc, and gdc. I code primarily on a Mac, and I have installed both dmd and ldc. A lot of the flags appear to be similar, and for my small programs, compilation and execution speed appeared to be almost identical. However, the book suggested using dmd for development and probably ldc/gdc for releases. Is this really followed that much in practice, or should I prefer dmd?

One more thing I noticed when I looked into the executable file (using "nm -gU" on my Mac) is that I found two interesting symbols - _main and _Dmain. In Rust, for instance, the main function got turned into _main, so I couldn't use a main in the C code that I was trying to interop with from my Rust code. Does the same restriction apply here (I am still way too far from dabbling in interop in D as yet! :-))? I mean, suppose I write some sample code in C, and I have a local main function to test the code out locally, will I have to comment that out when invoking that library from D, or can I keep it as is?

February 20, 2017
timmyjose wrote:

> Very interesting reading about your experiences!
tnx. ;-)

> one thing I've observed is that so far (very very early of course) D appears to be a lot more intuitive than C++
yeah. i've almost finished writing my own nntp/email client (actually, i'm writing this post with it). something i'd wanted to do for almost a decade, but never dared to with C/C++. and i did it in a week with D. ;-)


> Just one question about the compilers though - I read on the Wiki that there are three main compiler distros - dmd, ldc, and gdc. I code primarily on a Mac, and I have installed both dmd and ldc. A lot of the flags appear to be similar, and for my small programs, compilation and execution speed appeared to be almost identical. However, the book suggested using dmd for development and probably ldc/gdc for releases. Is this really followed that much in practice, or should I prefer dmd?
i myself am using dmd for everything. usually i find that even without -O (it means "optimize code" for dmd) my apps are fast enough. even my Speccy emulator is perfectly fine without -O (mind you, it emulates the whole 8-bit machine: CPU, FDD, sound processor, etc., and has to keep a constant 50FPS to make the sound smooth). so, if you are doing heavy number crunching, for example, you may want to use ldc. otherwise, stick with dmd: i found that my code spends most of its time waiting for some i/o completion anyway. ;-)


> One more thing I noticed when I looked into the executable file (using "nm -gU" on my Mac) is that I found two interesting symbols - _main and _Dmain. In Rust, for instance, the main function got turned into _main, so I couldn't use a main in the C code that I was trying to interop with from my Rust code. Does the same restriction apply here (I am still way too far from dabbling in interop in D as yet! :-))? I mean, suppose I write some sample code in C, and I have a local main function to test the code out locally, will I have to comment that out when invoking that library from D, or can I keep it as is?
as your library will prolly not have "main()" (and will be built as a lib), there should be no problems. i.e. i never had any troubles with symbols with my .so and .a libs (all two of them ;-). i also wrote .so injection code (injecting a .so written in D into a running process), and had no problems with that either.
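something like this is all it takes (names are invented, just a sketch):

// clib.c -- built separately, e.g. `cc -c clib.c`; no main() here, so nothing clashes with D's
int add_ints(int a, int b) { return a + b; }

// app.d -- built with `dmd app.d clib.o`
import std.stdio;

extern(C) int add_ints(int a, int b); // declare the C symbol, no name mangling

void main()
{
    writeln(add_ints(2, 3)); // 5
}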
February 20, 2017
On Monday, 20 February 2017 at 14:52:43 UTC, ketmar wrote:
> timmyjose wrote:
>
>> Suppose I have a simple 2 x 3 array like so:
>> import std.stdio;
>> import std.range: iota;
>> void main() {
>>     // a 2 x 3 array
>>     int [3][2] arr;
>>     foreach (i; iota(0, 2)) {
>>         foreach(j; iota(0, 3)) {
>>             arr[i][j] = i+j;
>>         }
>>     }
>>     writefln("second element in first row = %s", arr[0][1]);
>>     writefln("third element in second row = %s", arr[1][2]);
>>     writeln(arr);
>> }
>> My confusion is this - the declaration of the array is arr [last-dimension]...[first-dimension], but the usage is arr[first-dimension]...[last-dimension]. Am I missing something here?
>
> yes. it is quite easy to remember if you just read the declaration from left to right:
>  int[3][2] arr
> becomes:
>  (int[3])[2]
> i.e. "array of two (int[3]) items". no complicated decoding rules.
>
> and then accessing it is logical too: first we index the array of two items, then the `(int[3])` array.
>
> declaration may look "reversed", but after some time i found it straightforward to read. ;-)

Hmmm... yes, that does help indeed. So we read that as "an array of 2 cells, where each cell is an array of 3 cells"? I'll have to get used to this, I suppose!