December 08, 2013
On Sunday, 8 December 2013 at 01:59:15 UTC, Walter Bright wrote:
> But when I talk about refactoring, I mean things like changing data structures and algorithms. Renaming things is pretty far over on the trivial end, and isn't going to help your program run any faster.

Well, I was just so surprised by your answer that I was looking for common ground :-)
December 08, 2013
On 8 December 2013 11:34, Walter Bright <newshound2@digitalmars.com> wrote:

> On 12/7/2013 4:46 PM, Manu wrote:
>
>> True as compared to C, but I wouldn't say this is true in general.
>> C# and Java make it easy to refactor code rapidly. D doesn't have any
>> such tools
>> yet.
>> It's my #1 wishlist item for VisualD and Mono-D, and I await such tooling
>> with
>> great anticipation.
>> Maybe when the D front-end is a library, and tooling has such powerful
>> (and
>> reliable) semantic analysis as the compiler does it may be possible?
>>
>
> Needing a tool to refactor code is a bit of a mystery to me. I've never used one, and never felt that not having one inhibited me from refactoring.
>

Well, I'd suspect that if you used one for a significant stretch of
time, you would come to appreciate how much time it can save :)
At least in certain types of code, which perhaps you don't spend an awful
lot of time writing?
I find 'client code' tends to be subject to a higher frequency of trivial
changes and refactorings. This sort of code is less concise, more random;
just stuff that does stuff, or responds to events or whatever, written around
the place.
Systems code like compilers tends to be a lot more succinct, self-contained
and well structured, which supports simpler refactoring internally, but any
such change may require a huge amount of client code to be re-jigged. It's
nice to reliably automate this sort of thing.

Trust me, robust refactoring tools save a lot of time! :)


December 08, 2013
On Sunday, 8 December 2013 at 02:07:50 UTC, Manu wrote:
> Trust me, robust refactoring tools save a lot of time! :)

More than just time: Walter has shown in the past that he appreciates safety and safe practices. For instance, I respect his decision about the limitations of version(), even though it seems limiting to me. Trusting human refactoring over a tool seems, by comparison, insane: cowboy programming with no trade-off benefit :-)
December 08, 2013
Walter Bright:

> Needing a tool to refactor code is a bit of a mystery to me. I've never used one, and never felt that not having one inhibited me from refactoring.

When you have experience and know how to do things, there is a great temptation to keep doing them the same way, and to limit your exploration and study of alternatives. But this can lead to missed opportunities and a crystallization of skills. One strategy to avoid this pitfall is to allocate some time to learning different things, but often this is not enough. Another way to improve the situation is to work for a while with a person of a very different age. Younger people may seem ignorant, but teaching them for some time is not a waste, because their age means they are not burdened by very old ways of doing things, and they teach you a lot in return.

Refactoring tools are an important part of modern programming. If you don't understand why, then I suggest you stop debugging D for a few days, install a modern IDE, find a good open source Java project on GitHub, and follow some tutorials that explain how to refactor Java code. Within a few days you could send some patches to the project and learn what modern IDEs can do. It's even better if you find some younger person willing to work with you on this, but that is not essential :-)

Bye,
bearophile
December 08, 2013
On Saturday, 7 December 2013 at 09:55:10 UTC, Timon Gehr wrote:
> I suggest what is meant by "proper" is "faster than any implementation in those languages". :)

Exactly. "Proper" C is anything that runs as fast as possible, presentation and maintainability be damned; anything else isn't C.
December 08, 2013
On Saturday, 7 December 2013 at 08:31:08 UTC, Marco Leise wrote:
> How is that easier in Java? When whole-program analysis finds
> that there is no class extending C, it could devirtualize all
> methods of C, but(!) you can load and unload new derived
> classes at runtime, too.
>
> Also the JVM doesn't load all classes at program startup,
> because it would create too much of a delay. This goes so
> far that there is even a special class for splash screens with
> minimal dependencies, to avoid loading most of the runtime and
> GUI library first.
>
> I think whole-program analysis in such an environment is
> outright impossible.

You forgot that this is JITted. The JVM can re-emit the code for a function when the assumptions behind its optimization no longer hold.

When the JVM loads new classes, it invalidates a bunch of code. That means a method can be effectively final, then virtual, then back to final, etc., during the program's execution.
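To make the point concrete, here is a minimal sketch (all class and method names are hypothetical, not from the thread) of why the JVM cannot devirtualize permanently: a call site that sees only one implementation may be speculatively inlined, but loading a second implementation at runtime breaks that assumption, forcing the JIT to deoptimize back to a virtual call.

```java
// Hypothetical sketch: a "monomorphic" call site becomes polymorphic
// when a new class is loaded at runtime.
public class Devirt {
    interface Greeter { String greet(); }

    static class English implements Greeter {
        public String greet() { return "hello"; }
    }

    // In a real program this class would live in a jar loaded later;
    // instantiating it reflectively simulates dynamic class loading.
    static class French implements Greeter {
        public String greet() { return "bonjour"; }
    }

    // While only English instances exist, the JIT may inline
    // English.greet directly here.
    static String callSite(Greeter g) { return g.greet(); }

    public static void main(String[] args) throws Exception {
        System.out.println(callSite(new English()));

        // A "new" class appears; the monomorphic assumption no longer
        // holds, and callSite must dispatch virtually again.
        Greeter late = (Greeter) Class.forName("Devirt$French")
                                      .getDeclaredConstructor()
                                      .newInstance();
        System.out.println(callSite(late));
    }
}
```

This is exactly the finalvirtualfinal dance described above: the optimization is only ever valid until the next class load invalidates it.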
December 08, 2013
On 12/7/2013 6:07 PM, Manu wrote:
> At least in certain types of code, which perhaps you don't spend an awful lot of
> time writing?

Funny thing about that. Sometimes I'll spend all day on a piece of code, then check it in. I'm surprised that the diffs show my changes were very small.

I suppose I spend far more time thinking about code than writing it.
December 08, 2013
On Saturday, 7 December 2013 at 00:09:01 UTC, John Colvin wrote:
> On Friday, 6 December 2013 at 23:56:39 UTC, H. S. Teoh wrote:
>>
>> It would be nice to decouple Phobos modules more. A *lot* more.
>
> Why? I've seen this point made several times and I can't understand why this is an important concern.
>
> I see the interplay between phobos modules as good, it saves reinventing the wheel all over the place, making for a smaller, cleaner standard library.
>
> Am I missing something fundamental here?

On the introduction page of the Phobos documentation, as part of its philosophy, it states

"Classes should strive to be independent of one another
    It's discouraging to pull in a megabyte of code bloat by just trying to read a file into an array of bytes. Class independence also means that classes that turn out to be mistakes can be deprecated and redesigned without forcing a rewrite of the rest of the class library."

(This can also apply to functions, templates and modules).

Currently, Phobos does exactly that. It pulls in a lot of bloat to perform trivial tasks, and it is discouraging. More importantly, it is difficult to isolate any part of Phobos. When trying to avoid some part of Phobos because of bugginess or inefficiency, I find it next to impossible, because chances are it will be used by some other part of Phobos.

I am speculating here, but I imagine that maintaining and debugging Phobos must be a nightmare. Can anybody speak from experience on this?

One thing I have discovered is that Phobos introduces "junk code" into executables. One time I did an experiment. I copied the bits of Phobos that my program used into a separate file and imported that instead of the Phobos modules. The resultant executable was half the size (using -release, -inline, -O and "strip" in both cases). For some reason, Phobos was adding over 250KB of junk code that strip could not get rid of.

Regards
Jason
December 08, 2013
On Sun, Dec 08, 2013 at 05:19:53AM +0100, Jason den Dulk wrote: [...]
> I am speculating here, but I imagine that maintaining and debugging Phobos must be a nightmare. Can anybody speak from experience on this?

Actually, while Phobos does have its warts, it's surprisingly pleasant to read and maintain, thanks to the readability features of D. I've found it to be easy to read, and mostly easy to understand. There are some ugly bits here and there, of course, but compared to, say, glibc, it's extremely readable for a standard library.

(And FWIW, I'm only a part-time Phobos volunteer, so I'm saying this not because I've an agenda to defend Phobos, but because I genuinely find it a pleasant surprise compared to most standard libraries of other languages that I've seen.)


> One thing I have discovered is that Phobos introduces "junk code" into executables. One time I did an experiment. I copied the bits of Phobos that my program used into a separate file and imported that instead of the Phobos modules. The resultant executable was half the size (using -release, -inline, -O and "strip" in both cases). For some reason, Phobos was adding over 250KB of junk code that strip could not get rid of.
[...]

Yeah, this part bothers me too. Once I hacked up a script (well, a little D program :P) that disassembles D executables and builds a reference graph of its symbols. I ran this on a few small test programs, and was quite dismayed to discover that the mere act of importing std.stdio (for calling writeln("Hello World");) will introduce symbols from std.complex into your executable, even though the program has nothing to do with complex numbers. These symbols are never referenced from main() (i.e., the reference graph of the std.complex symbols is disjoint from the subgraph that contains _Dmain), yet they are included in the executable.
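The core of such a reference-graph analysis is just reachability: starting from _Dmain, walk the symbol-reference edges and flag everything the walk never reaches. Here is a minimal sketch of that idea (the graph data is a toy stand-in for real disassembler output, and the symbol names are illustrative, not the actual mangled names):

```java
import java.util.*;

public class DeadSyms {
    // BFS over a symbol-reference graph from a root symbol; any symbol
    // not in the returned set is a candidate for elision from the binary.
    static Set<String> reachable(Map<String, List<String>> refs, String root) {
        Set<String> seen = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(List.of(root));
        while (!work.isEmpty()) {
            String sym = work.pop();
            if (seen.add(sym))
                work.addAll(refs.getOrDefault(sym, List.of()));
        }
        return seen;
    }

    public static void main(String[] args) {
        // Toy graph mirroring the observation above: _Dmain only uses
        // writeln, while the std.complex symbols form a disjoint component.
        Map<String, List<String>> refs = Map.of(
            "_Dmain", List.of("std.stdio.writeln"),
            "std.complex.Complex.abs", List.of("std.math.hypot"));

        Set<String> live = reachable(refs, "_Dmain");
        System.out.println(live.contains("std.stdio.writeln"));
        System.out.println(live.contains("std.complex.Complex.abs"));
    }
}
```

In a real version the map would be populated by parsing disassembly or `nm` output; the graph walk itself is all the "dead symbol" detection amounts to.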

That's why I said that Phobos has a ways to go in terms of modularity and dependency management. Just because std.complex is used by *some* obscure bit of code in std.stdio, doesn't mean that it should get pulled in just because you want to print Hello World. The compiler could also be a bit smarter about which symbols it emits code for, eliding those that are never actually referenced in the program.

While my overall D experience has been quite positive, this is one of the things that I found disappointing.


T

-- 
To err is human; to forgive is not our policy. -- Samuel Adler
December 08, 2013
On Sun, Dec 08, 2013 at 03:01:03AM +0100, digitalmars-d-bounces@puremagic.com wrote:
> On Sunday, 8 December 2013 at 01:59:15 UTC, Walter Bright wrote:
> >But when I talk about refactoring, I mean things like changing data structures and algorithms. Renaming things is pretty far over on the trivial end, and isn't going to help your program run any faster.
> 
> Well, I was just so surprised by your answer that I was looking for common ground :-)

OTOH, I was quite confused the first time somebody talked about "refactoring" to refer to variable renaming. To me, refactoring means reorganizing your code, like factoring out common code into separate functions, and moving stuff around modules, and substituting algorithms; the kind of major code surgery where you go through every line (or every block) and re-stitch things together in a new (and hopefully cleaner) way.  Variable renaming sounds almost like a joke to me in comparison. I was quite taken aback that people would think "variable renaming" when they say "refactoring", to be quite honest.

Or maybe this is just another one of those cultural old age indicators? Has the term "refactoring" shifted to mean "variable renaming" among the younger coders these days? Genuine question. I'm baffled that these two things could even remotely be considered similar things.


T

-- 
Don't throw out the baby with the bathwater. Use your hands...