September 25, 2014
On Wed, Sep 24, 2014 at 08:55:23PM -0700, Walter Bright via Digitalmars-d wrote:
> On 9/24/2014 7:50 PM, Manu via Digitalmars-d wrote:
> >>I'm sorry, but this is awfully vague and contains nothing actionable.
> >The action I'd love to see would be "Yes, debugging is important, we should make it a high priority on the roadmap and encourage the language community to work with the tooling community to make sure the experience is polished" ;)
> 
> I make similar statements all the time. It doesn't result in action on anyone's part. I don't tell people what to do - they work on aspects of D that interest them.
> 
> Even people who ask me what to work on never follow my suggestions. They work on whatever floats their boat. It's my biggest challenge working on free software :-)

Yeah, this is characteristic of free software. If this were proprietary software like what I write at work, the PTBs would just set down items X, Y, Z as their mandate, and everyone would have to work on them, like it or not. With free software, however, if something isn't getting done, you just gotta get your hands dirty and do it yourself. Surprisingly, many times what comes out can be superior to the cruft churned out by "enterprise" programmers who were forced to write something they didn't really want to write.


[...]
> Note that I find gdb well nigh unusable even for C++ code, so to me an unusable debugger is pretty normal and I don't think much about it. :-) It doesn't impair my debugging sessions much.

printf debugging FTW! :-P


> I've also found that the more high level abstractions are used, the less useful a symbolic debugger is. Symbolic debuggers are only good for pedestrian, low level code that ironically is also what other methods are very good at, too.
[...]

I don't agree with that. I think symbolic debuggers should be improved so that they *can* become useful with high level abstractions. For example, if debuggers could be made to understand templates and compile-time constants, they could become much more useful than they are today in debugging high-level code.

For example, the idea of stepping through lines of code (i.e. individual statements) is a convenient simplification, but really, in modern programming languages there are multiple levels of semantics that could have a meaningful concept of "stepping forward/backward". You could step through individual expressions or subexpressions, step over function calls whose return values are passed to an enclosing function call, or step through individual arithmetic operations in a subexpression. Each of these levels of stepping could be useful in certain contexts, depending on what kind of bug you're trying to track down. Sometimes having statements as the stepping unit is too coarse-grained for certain debugging operations; sometimes it is too fine-grained for high levels of abstraction. Ideally, there should be a way for the debugger to dissect your code into its constituent parts at various levels of granularity, for example:

statement:	[main.d:123]	auto w = f(x,y/2,z) + z*2;
==>	variable allocation: [hoisted to beginning of function]
==>	evaluate expression: f(x,y/2,z) + z*2
	==>	evaluate expression: f(x,y/2,z)
		==>	evaluate expression: x
			==> load x
		==>	evaluate expression: y/2
			==> load y: [already in register eax]
			==> load 2: [part of operation: /]
			==> arithmetic operation: /
		==>	evaluate expression: z
		==>	function call: f
	==>	evaluate expression: z*2
		==>	load z: [already in register ebx]
		==>	load 2: [optimized away]
		==>	arithmetic operation: * [optimized to z<<1]
	==>	evaluate sum
		==>	expression result: [in register edx]
==>	assign expression to w
	==>	store w

The user can choose which level of detail to zoom into, and the debugger would allow stepping through each operation at the selected level of detail (provided it hasn't been optimized away -- if it has, ideally the debugger would tell you what the optimized equivalent is).


T

-- 
Public parking: euphemism for paid parking. -- Flora
September 25, 2014
On Wed, Sep 24, 2014 at 09:44:26PM -0700, Walter Bright via Digitalmars-d wrote:
> On 9/24/2014 9:26 PM, Andrei Alexandrescu wrote:
> >The build system that will be successful for D will cooperate with the compiler, which will give it fine-grained dependency information. Haskell does the same with good results.

I didn't specify *how* the build system would implement automatic dependency detection now, did I? :-) Nowhere did I say that the build system will (re)invent its own way of deriving source file dependencies.  FYI, Tup automatically determines exactly which file(s) the compiler reads when compiling a particular program (or source file), so its dependency graph is actually accurate, unlike some build systems that depend on source-level scanning, which would lead to the problems you describe with conditional local imports.


> There's far more to a build system than generating executables. And there's more to generating executables than D source files (there may be C files in there, and C++ files, YACC files, and random other files).
> 
> Heck, dmd uses C code to generate more .c source files. I've seen more than one fabulous build system that couldn't cope with that.

Which build system would that be? I'll be sure to avoid it. :-P

I've written SCons scripts that correctly handle auto-generated source files. For example, a lex/flex source file gets compiled to a .c source file, which in turn is compiled to an object file that then gets linked into the executable.
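
(Purely as an illustration, here's a minimal SConstruct sketch of that kind of setup -- the file names are hypothetical, and it assumes flex and a C compiler are installed and leans on SCons's built-in lex-to-C support:)

    # SConstruct -- sketch of the .l -> .c -> .o -> executable chain
    env = Environment()

    # SCons's built-in builders run flex on scanner.l to produce scanner.c,
    # compile both C files to object files, and link the result.  The
    # intermediate targets are tracked automatically, so touching scanner.l
    # rebuilds only what actually depends on it.
    env.Program('mytool', ['main.c', 'scanner.l'])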

Heck, I have a working SCons script that generates animations from individual image frames, which are in turn produced by invoking povray on scene files that are themselves programmatically generated by a program that reads script input and polytope definitions in a DSL and computes each scene file. The image generation includes scripted trimming and transparency adjustments of each individual frame, specified *in the build spec* via imagemagick. The entire process, end to end, is automatically parallelized by SCons, which correctly sequences each step in a website project with numerous such generation tasks, interleaving multiple generation procedures as CPUs become free, without any breakage in dependencies. The process even optionally includes a final deployment step that copies the generated files into a web directory; it detects steps whose products haven't changed since the last run and elides redundant copying of the unchanged files, thus preserving last-updated timestamps on the target files.
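
(Again purely for illustration, a heavily stripped-down sketch of one stage of such a pipeline -- the file names and the scene-generator program are hypothetical, and the povray/imagemagick invocations are simplified:)

    # SConstruct -- sketch of chaining generated artifacts via Command()
    env = Environment()

    # DSL polytope definition -> POV-Ray scene file, via a hypothetical generator
    scene = env.Command('frame001.pov', 'polytope.def',
                        './scenegen $SOURCE > $TARGET')

    # scene file -> rendered frame
    frame = env.Command('frame001.png', scene,
                        'povray +I$SOURCE +O$TARGET +W640 +H480 -D')

    # rendered frame -> trimmed frame with adjusted transparency
    env.Command('frame001_trim.png', frame,
                'convert $SOURCE -trim -transparent white $TARGET')

    # SCons sees the whole chain polytope.def -> .pov -> .png -> trimmed .png,
    # so independent chains like this are interleaved in parallel with scons -j N.

(In a real script the per-frame rules would presumably come from a loop or a custom builder rather than being spelled out one at a time.)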

So before you bash modern build systems in favor of make, do take some time to educate yourself about what they're actually capable of.  :-) You'll be a lot more convincing then.


> Make is the C++ of build systems. It may be ugly, but you can get it to work.

If you like building real airplanes out of Lego pieces, be my guest. Me, I prefer using more suitable tools. :-P


T

-- 
The diminished 7th chord is the most flexible and fear-instilling chord. Use it often, use it unsparingly, to subdue your listeners into submission!
September 25, 2014
> Actually you can't do this for D properly without enlisting the help of the compiler. Scoped import is a very interesting conditional dependency (it is realized only if the template is instantiated).
>
> Also, lazy opening of imports is almost guaranteed to have a huge good impact on build times.
>
> Your reply confirms my worst fear: you're looking at yet another general build system, of which there are plenty of carcasses rotting in the drought left and right of highway 101.
>

This is one of my biggest frustrations with existing "build systems" - which really are nothing more than glorified "make"s with some extra syntax and - for the really advanced ones - ways to help you correctly specify your makefiles by flagging errors or missing dependencies.

> The build system that will be successful for D will cooperate with the compiler, which will give it fine-grained dependency information. Haskell does the same with good results.
>
>
> Andrei

The compiler has a ton of precise information useful for build tools, IDEs and other kinds of analysis tools (to this day, it still bugs the crap out of me that Visual Studio has effectively *two* compilers, one for IntelliSense and one for the command line, and they do not share the same build environment or share the work they do!)  Build is more than just producing a binary - it incorporates validation through testing, packaging for distribution, deployment and even versioning.  I'd like to unlock the data in our tools and find ways to leverage it to improve automation and the whole developer workflow.  Those ideas and principles go beyond D and the compiler of course, but we do have a nice opportunity here because we can work closely with the compiler authors, rather than having to rely *entirely* on OS-level process introspection through e.g. Detours (which is still valuable purely for dependency discovery, of course.)

If we came out of this project with "tup-for-D" I'd consider that an abject failure.
September 25, 2014
On 9/24/2014 10:08 PM, H. S. Teoh via Digitalmars-d wrote:
> If you like building real airplanes out of Lego pieces, be my guest. Me,
> I prefer using more suitable tools. :-P

I spend very little time fussing with make. Making it work better (even to 0 cost) will add pretty much nothing to my productivity.

September 25, 2014
On 9/24/14, 10:14 PM, Cliff wrote:
> This is one of my biggest frustrations with existing "build systems" -
> which really are nothing more than glorified "make"s with some extra
> syntax and - for the really advanced ones - ways to help you correctly
> specify your makefiles by flagging errors or missing dependencies.

It's nice you two are enthusiastic about improving that space. Also, it's a good example of how open source development works. I can't tell you what to do; you guys get to work on whatever strikes your fancy. Have fun! -- Andrei

September 25, 2014
On 9/24/2014 9:43 PM, H. S. Teoh via Digitalmars-d wrote:
> printf debugging FTW! :-P

There's more than that, but yeah. Most of my types I'll write a "pretty printer" for, and use that. No conceivable debugger can guess how I want to view my data.

For example, I can pretty-print an Expression as either a tree or in infix notation.


> I don't agree with that. I think symbolic debuggers should be improved
> so that they *can* become useful with high level abstractions. For
> example, if debuggers could be made to understand templates and
> compile-time constants, they could become much more useful than they are
> today in debugging high-level code.

The fact that they aren't should be telling. Like maybe it's an intractable problem :-) sort of like debugging optimized code.

September 25, 2014
On Wed, Sep 24, 2014 at 10:23:48PM -0700, Walter Bright via Digitalmars-d wrote:
> On 9/24/2014 10:08 PM, H. S. Teoh via Digitalmars-d wrote:
> >If you like building real airplanes out of Lego pieces, be my guest. Me, I prefer using more suitable tools. :-P
> 
> I spend very little time fussing with make. Making it work better (even to 0 cost) will add pretty much nothing to my productivity.

Oh? Let's see. One time, while git bisecting to track down a dmd regression, I was running into all sorts of strange inconsistent behaviour from dmd. After about a good 15-30 mins' worth of frustration, I tracked down the source of the problem to make not cleaning up previous .o files, and thus producing a corrupted dmd which contained a mixture of who knows what versions of each .o left behind from previous git bisect steps. So I realized that I had to do a make clean every time to ensure I'm actually getting the dmd I think I'm getting. Welp, that just invalidated my entire git bisect session so far. So git bisect reset and start over.

Had dmd used a reliable build system, I wouldn't have wasted that time, plus I'd have the benefit of incremental builds instead of the extra time spent running make clean, and *then* rebuilding everything from scratch. Yup, it wouldn't *add* to my productivity, but it certainly *would* cut down on my *unproductivity*!

Now, dmd's makefile is very much on the 'simple' end of the scale, which I'm sure you'll agree if you've seen the kind of makefiles I have to deal with at work. Being simple means it also doesn't expose many of make's myriad problems. I've had to endure builds that take 30 minutes to complete for a 1-line code change (and apparently I'm already counted lucky -- I hear of projects whose builds could span hours or even *days* if you're unlucky enough to have to build on a low-end machine), only to find that the final image was corrupted because somewhere in that dense forest of poorly hacked-together makefiles in the source tree somebody had forgotten to clean up a stray .so file, which is introducing the wrong versions of the wrong symbols to the wrong places, causing executables to go haywire when deployed.

Not to mention that some smart people in the team have decided that needing to 'make clean' every single time following an svn update is "normal" practice, thus every makefile in their subdirectory is completely broken, non-parallelizable, and extremely fragile. Hooray for the countless afternoons I spent fixing D bugs instead of doing paid work -- because I have to do yet another `make clean; make` just to be sure any subsequent bugs I find are actual bugs, and not inconsistent builds caused by our beloved make. You guys should be thankful, as otherwise I would've been too productive to have time to fix D bugs. :-P

And let's not forget the lovely caching of dependency files from gcc that our makefiles attempt to leverage in order to have more accurate dependency information -- information which is mostly worthless because you have to make clean; make after making major changes anyway -- one time I didn't due to time pressure, and was rewarded with another heisenbug caused by stale .dep files causing some source changes to not be reflected in the build. Oh yeah, spent another day or two trying to figure that one out.

Oh, and did I mention the impossibility of parallelizing our builds because of certain aforementioned people who think `make clean; make` is "normal workflow"? I'd hazard a guess that I could take a year off work from all the accumulated unproductive time waiting for countless serial builds to complete, where parallelized builds would've saved at least half that time, more on modern PCs.

Reluctance to get rid of make is kinda like reluctance to use smart pointers / GC because you enjoy manipulating raw pointers in high-level application code. You can certainly do many things with raw pointers, and do it very efficiently 'cos you've already memorized the various arcane hacks needed to make things work over the years -- recite them in your sleep even. It's certainly more productive than spending downtime learning how to use smart pointers or, God forbid, the GC -- after you discount all the time and effort expended in tracking down null pointer segfaults, dangling pointer problems, memory corruption issues, missing sizeof's in malloc calls, and off-by-1 array bugs, that is.

To each his own, I say. :-P


T

-- 
People tell me that I'm skeptical, but I don't believe them.
September 25, 2014
On 9/24/2014 11:05 PM, H. S. Teoh via Digitalmars-d wrote:
> On Wed, Sep 24, 2014 at 10:23:48PM -0700, Walter Bright via Digitalmars-d wrote:
>> On 9/24/2014 10:08 PM, H. S. Teoh via Digitalmars-d wrote:
>>> If you like building real airplanes out of Lego pieces, be my guest.
>>> Me, I prefer using more suitable tools. :-P
>>
>> I spend very little time fussing with make. Making it work better
>> (even to 0 cost) will add pretty much nothing to my productivity.
>
> Oh? Let's see. One time, while git bisecting to track down a dmd
> regression, I was running into all sorts of strange inconsistent
> behaviour from dmd. After about a good 15-30 mins' worth of frustration,
> I tracked down the source of the problem to make not cleaning up
> previous .o files, and thus producing a corrupted dmd which contained a
> mixture of who knows what versions of each .o left behind from previous
> git bisect steps. So I realized that I had to do a make clean every time
> to ensure I'm actually getting the dmd I think I'm getting. Welp, that
> just invalidated my entire git bisect session so far. So git bisect
> reset and start over.
>
> Had dmd used a reliable build system, I wouldn't have wasted that time,
> plus I'd have the benefit of incremental builds instead of the extra
> time spent running make clean, and *then* rebuilding everything from
> scratch. Yup, it wouldn't *add* to my productivity, but it certainly
> *would* cut down on my *unproductivity*!
>
> Now, dmd's makefile is very much on the 'simple' end of the scale, which
> I'm sure you'll agree if you've seen the kind of makefiles I have to
> deal with at work. Being simple means it also doesn't expose many of
> make's myriad problems. I've had to endure builds that take 30
> minutes to complete for a 1-line code change (and apparently I'm already
> counted lucky -- I hear of projects whose builds could span hours or
> even *days* if you're unlucky enough to have to build on a low-end
> machine), only to find that the final image was corrupted because
> somewhere in that dense forest of poorly hacked-together makefiles in the
> source tree somebody had forgotten to clean up a stray .so file, which is
> introducing the wrong versions of the wrong symbols to the wrong places,
> causing executables to go haywire when deployed.
>
> Not to mention that some smart people in the team have decided that
> needing to 'make clean' every single time following an svn update is
> "normal" practice, thus every makefile in their subdirectory is
> completely broken, non-parallelizable, and extremely fragile. Hooray for
> the countless afternoons I spent fixing D bugs instead of doing paid
> work -- because I have to do yet another `make clean; make` just to be
> sure any subsequent bugs I find are actual bugs, and not inconsistent
> builds caused by our beloved make. You guys should be thankful, as
> otherwise I would've been too productive to have time to fix D bugs. :-P
>
> And let's not forget the lovely caching of dependency files from gcc
> that our makefiles attempt to leverage in order to have more accurate
> dependency information -- information which is mostly worthless because
> you have to make clean; make after making major changes anyway -- one
> time I didn't due to time pressure, and was rewarded with another
> heisenbug caused by stale .dep files causing some source changes to not
> be reflected in the build. Oh yeah, spent another day or two trying to
> figure that one out.
>
> Oh, and did I mention the impossibility of parallelizing our builds
> because of certain aforementioned people who think `make clean; make` is
> "normal workflow"? I'd hazard to guess I could take a year off work from
> all the accumulated unproductive times waiting for countless serial
> builds to complete, where parallelized builds would've saved at least
> half that time, more on modern PCs.
>
> Reluctance to get rid of make is kinda like reluctance to use smart
> pointers / GC because you enjoy manipulating raw pointers in high-level
> application code. You can certainly do many things with raw pointers,
> and do it very efficiently 'cos you've already memorized the various
> arcane hacks needed to make things work over the years -- recite them in
> your sleep even. It's certainly more productive than spending downtime
> learning how to use smart pointers or, God forbid, the GC -- after you
> discount all the time and effort expended in tracking down null pointer
> segfaults, dangling pointer problems, memory corruption issues, missing
> sizeof's in malloc calls, and off-by-1 array bugs, that is.
>
> To each his own, I say. :-P

You noted my preference for simple makefiles (even if they tend to get verbose). I've been using make for 30 years now, and rarely have problems with it. Of course, I also eschew using every last feature of make, which too many people feel compelled to do. So no, my makefiles don't consist of "arcane hacks". They're straightforward and rather boring.

And I use make -j on posix for parallel builds; it works fine on dmd.

September 25, 2014
On Wed, Sep 24, 2014 at 10:30:49PM -0700, Walter Bright via Digitalmars-d wrote:
> On 9/24/2014 9:43 PM, H. S. Teoh via Digitalmars-d wrote:
> >printf debugging FTW! :-P
> 
> There's more than that, but yeah. Most of my types I'll write a "pretty printer" for, and use that. No conceivable debugger can guess how I want to view my data.
> 
> For example, I can pretty-print an Expression as either a tree or in infix notation.

gdb does allow calling your program's functions out-of-band in 'print'. I've used that before when debugging C++, which is a pain when lots of templates are involved (almost every source line is demangled into an unreadable glob of <> gibberish 15 lines long). Wrote a pretty-printing free function in my program, and used `print pretty_print(my_obj)` from gdb. Worked wonders!
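
(Tangentially: for gdb specifically, there's also the option of doing the formatting on the debugger side through its Python pretty-printer API instead of calling into the inferior. A rough sketch -- the `Expression` type and its `op` field are made up for illustration:)

    # expr_printer.py -- load into gdb with `source expr_printer.py`
    import gdb

    class ExpressionPrinter:
        def __init__(self, val):
            self.val = val

        def to_string(self):
            # gdb.Value supports field access by name, so the printer can
            # walk the object and render it however you like (tree, infix, ...).
            return "Expression(op=%s)" % self.val['op']

    def lookup_printer(val):
        # attach the printer to values whose underlying type is Expression
        if str(val.type.strip_typedefs()) == 'Expression':
            return ExpressionPrinter(val)
        return None

    gdb.pretty_printers.append(lookup_printer)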

Having said that, though, I'm still very much a printf/writeln-debugging person. It also has the benefit of working in adverse environments like embedded devices where the runtime environment doesn't let you run gdb.


> >I don't agree with that. I think symbolic debuggers should be improved so that they *can* become useful with high level abstractions. For example, if debuggers could be made to understand templates and compile-time constants, they could become much more useful than they are today in debugging high-level code.
> 
> The fact that they aren't should be telling. Like maybe it's an intractable problem :-) sort of like debugging optimized code.

When all else fails, I just disassemble the code and trace through it side-by-side with the source code. Not only is it a good exercise in keeping my assembly skills sharp, but you also get to see, in action, all kinds of tricks that optimizers are capable of nowadays: code hoisting, rearranging, register assignments to eliminate subsequent loads, vectorizing, etc. Fun stuff.  Not to mention the thrill when you finally identify the cause of the segfault by successfully mapping that specific instruction to a specific construct in the source code -- not a small achievement in this day and age of optimizing compilers and pipelined, microcoded CPUs!

Nevertheless, I think there is still room for debuggers to improve. Recently, for example, I learned that gdb has acquired the ability to step through a program backwards. Just missed the point in your program where the problem first happened? No problem, just step backwards until you get back to that point! Neat stuff. (How this is implemented is left as an exercise for the reader. :-P)


T

-- 
Stop staring at me like that! It's offens... no, you'll hurt your eyes!
September 25, 2014
On Wed, 24 Sep 2014 21:15:59 -0700
"H. S. Teoh via Digitalmars-d" <digitalmars-d@puremagic.com> wrote:

> needs to write should in theory be simply:
> 
> 	Program("mySuperApp", "src/main.d");
> 
> and everything else will be automatically figured out.
ah, that's exactly why i migrated to jam! i got bored of manual dependency control, and jam file scanning works reasonably well (for c and c++; i've yet to finish the D scanner -- it should understand what package.d is). now i'm just writing something like

  Main myproggy : main.d module0.d module1.d ;

and that's all. or even

  Main myproggy : [ Glob . : *.d : names-only ] ;

and for projects which contain some subdirs, libs and so on, it's still easy.

i never tried jam on really huge projects, but i can't see why it shouldn't be good, as it supports subdirs without recursion and so on.

jam is still heavily file-based and uses only timestamps, but that's only 'cause i'm still not motivated enough (read: timestamps work for me).


p.s. i'm talking about my own fork of 'jam' here. it's slightly more advanced than the original jam.

p.p.s. i patched gdc to emit all libraries mentioned in
`pragma(lib, ...)` and my jam understands how to extract and use this
information.