September 30, 2009
Jeremie Pelletier wrote:
> Rainer Deyke wrote:
>> This only catches null errors at runtime.  The whole point of a non-null type is to catch null errors at compile time.
>>
> 
> That's what flow analysis is for, since these are mostly uninitialized variables rather than null ones.

Nitpick: there are no uninitialized variables in D (unless you specifically request them).  There are only explicitly initialized variables and default-initialized variables.

I can see the argument for disabling default initialization and requiring explicit initialization.  You don't even need flow analysis for that.  However, that doesn't address the problem that non-null references are intended to solve: it's still possible to explicitly store a null value in a non-null reference without the problem being detected at compile time.
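
To illustrate with a minimal sketch (the NonNull wrapper and maybeNull() below are hypothetical, just for illustration):

Object maybeNull() { return null; }

void main()
{
    int    i;          // default-initialized to int.init (0)
    double d;          // default-initialized to double.init (NaN)
    Object o;          // default-initialized to Object.init (null)
    int    u = void;   // uninitialized -- only when explicitly requested

    // Requiring explicit initialization alone still lets null through,
    // because the initializer expression itself can evaluate to null:
    Object p = maybeNull();          // compiles fine, p may be null
    // NonNull!Object q = maybeNull();  // a non-null type would have to
    //                                  // reject this at compile time
}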


-- 
Rainer Deyke - rainerd@eldwood.com
September 30, 2009
On 29/09/2009 16:41, Jeremie Pelletier wrote:

> What I argued about was your view on today's software being too big and
> complex to bother optimizing it.
>

that is not what I said.
I was saying that hand-optimized code needs to be kept to a minimum and used only for visible bottlenecks, because the risk of introducing low-level unsafe code is greater in bigger, more complex software.
September 30, 2009
bearophile wrote
> Walter Bright:
>
>>No, it is done with one indirection.<
>
> If even Andrei, a quite intelligent person who has written big books on C++, can be wrong on such a basic thing, then I think there's a problem.
>
> It would be good to create an html page that explains how some basic things of D are implemented in the front-end. Such a page could also contain box & arrow images that show how structures and memory are organized for the various data structures.
>
> Such an html page would be useful both for normal programmers who want to understand what's under the hood and for people who may want to fix/modify the front-end.


?:)
I seem to have already requested the very thing you're asking for here
(within 24 hours, even):
http://d.puremagic.com/issues/show_bug.cgi?id=3351


September 30, 2009
Yigal Chripun wrote:
> On 29/09/2009 16:41, Jeremie Pelletier wrote:
> 
>> What I argued about was your view on today's software being too big and
>> complex to bother optimizing it.
>>
> 
> that is not what I said.
> I was saying that hand-optimized code needs to be kept to a minimum and used only for visible bottlenecks, because the risk of introducing low-level unsafe code is greater in bigger, more complex software.

What's wrong with taking a risk? If you know what you're doing, where is the risk, and if not, how will you learn? If you write your software correctly, you can add countless assembly optimizations and never compromise the security of the whole program, because these optimizations are isolated; if it crashes there, you have only a narrow area to debug.

There are some parts where hand optimizing is almost useless, like network I/O: latency is already so high that faster code won't make a difference.

And sometimes the optimization doesn't even need assembly; it just requires using a different high-level construct or a different algorithm. The first optimization is to get the most efficient data structures and algorithms for a given task, and THEN, if you can't optimize any further, you dig into assembly.

People seem to think assembly is something magical and incredibly hard; it's not.
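
For example, an isolated hand optimization in D can be as small as this (just a sketch; the function is made up and the inline asm is DMD-style x86):

uint lowestSetBitIndex(uint x)
{
    // x must be non-zero; BSF is undefined for 0.
    uint result;
    asm
    {
        mov EAX, x;
        bsf EAX, EAX;     // bit scan forward: index of the lowest set bit
        mov result, EAX;
    }
    return result;
}

unittest
{
    assert(lowestSetBitIndex(1) == 0);
    assert(lowestSetBitIndex(8) == 3);
    assert(lowestSetBitIndex(0x8000_0000) == 31);
}

A dozen lines with a unittest sitting right next to them; if it ever breaks, there are only a few instructions to look at.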

Jeremie
September 30, 2009
Jeremie Pelletier wrote:
> Yigal Chripun wrote:
>> On 29/09/2009 16:41, Jeremie Pelletier wrote:
>>
>>> What I argued about was your view on today's software being too big and
>>> complex to bother optimizing it.
>>>
>>
>> that is not what I said.
>> I was saying that hand-optimized code needs to be kept to a minimum and used only for visible bottlenecks, because the risk of introducing low-level unsafe code is greater in bigger, more complex software.
> 
> What's wrong with taking a risk? If you know what you're doing, where is the risk, and if not, how will you learn? If you write your software correctly, you can add countless assembly optimizations and never compromise the security of the whole program, because these optimizations are isolated; if it crashes there, you have only a narrow area to debug.
>
> There are some parts where hand optimizing is almost useless, like network I/O: latency is already so high that faster code won't make a difference.
>
> And sometimes the optimization doesn't even need assembly; it just requires using a different high-level construct or a different algorithm. The first optimization is to get the most efficient data structures and algorithms for a given task, and THEN, if you can't optimize any further, you dig into assembly.
>
> People seem to think assembly is something magical and incredibly hard; it's not.
> 
> Jeremie

Also, if you're using asm on something other than a small, simple loop, you're probably doing something badly wrong. Therefore, it should always be localised, and easy to test thoroughly. I don't think local extreme optimisation is a big risk.

Greater risks come from using more complicated algorithms. Brute-force algorithms are always the easiest ones to get right <g>.


September 30, 2009
Saaa wrote:
> bearophile wrote
>> Walter Bright:
>>
>>> No, it is done with one indirection.<
>> If even Andrei, a quite intelligent person who has written big books on C++, can be wrong on such a basic thing, then I think there's a problem.
>>
>> It would be good to create an html page that explains how some basic things of D are implemented in the front-end. Such a page could also contain box & arrow images that show how structures and memory are organized for the various data structures.
>>
>> Such an html page would be useful both for normal programmers who want to understand what's under the hood and for people who may want to fix/modify the front-end.
> 
> 
> ?:)
> I seem to have already requested the very thing you're asking for here
> (within 24 hours, even):
> http://d.puremagic.com/issues/show_bug.cgi?id=3351 

I wonder whether this would be a good topic for TDPL. Currently I'm thinking it's too low-level. I do plan to insert a short section about implementation, just not go deep inside the object model.

Andrei
September 30, 2009
Don wrote:
> Jeremie Pelletier wrote:
>> Yigal Chripun wrote:
>>> On 29/09/2009 16:41, Jeremie Pelletier wrote:
>>>
>>>> What I argued about was your view on today's software being too big and
>>>> complex to bother optimizing it.
>>>>
>>>
>>> that is not what I said.
>>> I was saying that hand-optimized code needs to be kept to a minimum and used only for visible bottlenecks, because the risk of introducing low-level unsafe code is greater in bigger, more complex software.
>>
>> What's wrong with taking a risk? If you know what you're doing, where is the risk, and if not, how will you learn? If you write your software correctly, you can add countless assembly optimizations and never compromise the security of the whole program, because these optimizations are isolated; if it crashes there, you have only a narrow area to debug.
>>
>> There are some parts where hand optimizing is almost useless, like network I/O: latency is already so high that faster code won't make a difference.
>>
>> And sometimes the optimization doesn't even need assembly; it just requires using a different high-level construct or a different algorithm. The first optimization is to get the most efficient data structures and algorithms for a given task, and THEN, if you can't optimize any further, you dig into assembly.
>>
>> People seem to think assembly is something magical and incredibly hard; it's not.
>>
>> Jeremie
> 
> Also, if you're using asm on something other than a small, simple loop, you're probably doing something badly wrong. Therefore, it should always be localised, and easy to test thoroughly. I don't think local extreme optimisation is a big risk.

That's also how I do it once I find the ideal algorithm. I've never had any problems or seen any risk with this technique, but I have seen some good performance gains.

> Greater risks come from using more complicated algorithms. Brute-force algorithms are always the easiest ones to get right <g>.

I'm not sure I agree with that. Those algorithms are pretty isolated and really easy to write unittests for, so I don't see where the risk is in writing more complex algorithms; it's obviously harder, but not riskier.
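
For example (made-up function names, just to show the pattern): write the complex version, then check it against the obviously correct brute-force one in a unittest.

size_t countSetBitsFast(uint x)
{
    // Kernighan's trick: each iteration clears the lowest set bit.
    size_t n = 0;
    while (x)
    {
        x &= x - 1;
        ++n;
    }
    return n;
}

size_t countSetBitsBruteForce(uint x)
{
    size_t n = 0;
    foreach (bit; 0 .. 32)
        if (x & (1u << bit))
            ++n;
    return n;
}

unittest
{
    foreach (uint x; [0u, 1u, 0x8000_0000, 0xFFFF_FFFF, 0xDEAD_BEEF])
        assert(countSetBitsFast(x) == countSetBitsBruteForce(x));
}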

On the other hand, things like GUI libraries are one big package where unittests are useless most of the time; that's a much greater risk, even with straightforward and trivial code.

I read somewhere that the best optimizer is between your ears, and I have yet to see someone or something prove that quote wrong! Besides, how are you going to get comfortable with "complex" stuff if you never play with it? It's really only complex while you're learning it; once your brain has assimilated it, it becomes almost trivial to use.
September 30, 2009
Andrei Alexandrescu wrote:
> Saaa wrote:
>> bearophile wrote
>>> Walter Bright:
>>>
>>>> No, it is done with one indirection.<
>>> If even Andrei, a quite intelligent person who has written big books on C++, can be wrong on such a basic thing, then I think there's a problem.
>>>
>>> It would be good to create an html page that explains how some basic things of D are implemented in the front-end. Such a page could also contain box & arrow images that show how structures and memory are organized for the various data structures.
>>>
>>> Such an html page would be useful both for normal programmers who want to understand what's under the hood and for people who may want to fix/modify the front-end.
>>
>>
>> ?:)
>> I seem to have already requested the very thing you're asking for here
>> (within 24 hours, even):
>> http://d.puremagic.com/issues/show_bug.cgi?id=3351 
> 
> I wonder whether this would be a good topic for TDPL. Currently I'm thinking it's too low-level. I do plan to insert a short section about implementation, just not go deep inside the object model.
> 
> Andrei

Maybe that's a topic for an appendix of the book. It is really useful to know the internals of a language; even if you don't use them directly, knowing them can impact design choices.

Right now the best way to learn these internals is still to go hack and slash with the compiler's runtime implementation.
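
For instance, even a few lines poking at the runtime's ClassInfo tell you a lot about the object model (a small sketch, assuming DMD and the member names druntime's object.d currently uses):

import std.stdio;

class Foo { int x; void method() {} }

void main()
{
    // Each instance begins with a vtbl pointer and a monitor field;
    // ClassInfo describes the rest of the layout at runtime.
    auto ci = Foo.classinfo;
    writefln("name: %s", ci.name);                 // fully qualified name
    writefln("instance size: %s bytes", ci.init.length);
    writefln("vtbl slots: %s", ci.vtbl.length);    // slot 0 points back at the ClassInfo
}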

Besides, there is no such thing as too low-level :)
September 30, 2009
Andrei Alexandrescu:

> I wonder whether this would be a good topic for TDPL. Currently I'm thinking it's too low-level. I do plan to insert a short section about implementation, just not go deep inside the object model.

It's a very good topic for the book. Any good book about computer languages teaches not just a language, but also good programming practices and some general computer science. In a big book about a systems language I want to see "under the hood" topics too, otherwise I'll need to buy another book to learn them :-) So it's good for a book about a systems language to explain how some parts of the compiler are implemented, because such parts are code too (and that code can be at the same level as user code, if someday the D front-end is translated to D).
For example, I appreciated the chapter about the Python dict implementation in "Beautiful Code".
I think you aren't interested in my help any more, but I hope you will follow this suggestion of mine (I'll buy your book anyway, but I know what I'd like to find in it). On the other hand, writing about topics you don't know well enough can backfire; in such a situation, avoiding the topic may be better.

Bye,
bearophile
September 30, 2009
Andrei Alexandrescu wrote
>
> I wonder whether this would be a good topic for TDPL. Currently I'm thinking it's too low-level. I do plan to insert a short section about implementation, just not go deep inside the object model.
>
> Andrei

I'd really love to see more about implementations, as it makes me twitch to use something whose impact I don't really know.

As for using diagrams and other visual presentations:
Please use them as much as possible;
e.g. pointers without arrows are like a film without moving pictures :)