December 02, 2009
dsimcha:

>because Python's builtin arrays are too slow.<

Python lists are not badly implemented; it's the interpreter that's slow (*).
Python's built-in arrays (lists) are dynamically typed, so they are less efficient but more flexible. NumPy arrays are the opposite. So, as usual with data structures, they are the result of compromises, chosen and optimized for your purposes.

(*) And the interpreter is slow because it's designed to be simple. Being simple, it can be hacked on and fixed even by people who are not great experts, working in their free time on the Python C source code. This allows CPython to keep enough developers, so the language keeps improving. The Python design holds many lessons like this that D developers still have to learn.


> A practical language should have enough complexity management tools to handle basically any type of complexity you throw at it, [...]

In the world there's space for smaller and simpler languages too, like Lua, designed for more limited purposes. Not every language must become a universal ball of mud like C++.


> If you can achieve this, your language will be good for almost anything.

I will not believe in the single True Language, sorry, just as there isn't a single perfect way to implement dynamic arrays.

Bye,
bearophile
December 02, 2009
Hello bearophile,

> But in dynamic code you almost never assert that a variable is an
> int; you assert that 'a' is able to do its work where it's used. So
> 'a' can often be an int, a decimal, a multiprecision long, a GMP
> multiprecision, or maybe even a float. What you care about is not
> what 'a' is but whether it does what it has to, so you care if it
> quacks :-) That's duck typing.

Yes, that's duck typing: "assert that 'a' is able to do its work where it's used" (a function with the required signature exists).

Interfaces in OOP, or type classes in Haskell, are there to "assert that 'a' is intended to work where it's used" (the type is some implementation of the required concept (int/long/bigint)).

Both have their place :)

Note that duck typing need not be only dynamic; it can also happen at compile time - ranges in D check at compile time whether certain functions are defined for an "object".


December 02, 2009
Hello dsimcha,

> My biggest gripe about static verification is that it can't help you
> at all with high-level logic/algorithmic errors, only lower level
> coding errors.  Good unit tests (and good asserts), on the other hand,
> are invaluable for finding and debugging high-level logic and
> algorithmic errors.
> 

I don't have a link or anything, but I remember hearing about a study MS did on finding bugs. What they found is that every reasonably effective tool they looked at found about the same number of bugs (ok, within shouting distance, close enough that none of them could be said to be pointless) but *different* bugs. The way to find the most bugs is to attack the problem from many angles. If I can have a language that can totally prevent one class of bugs in vast swaths of code, that's a good thing, even if it does jack for another class of bugs.


December 02, 2009
Hello Leandro,


> If you say dynamic languages don't have metaprogramming capabilities,
> you just don't have any idea of what a dynamic language really is.
> 

If you say you can do metaprogramming at runtime, you just don't have any idea of what I want to do with metaprogramming. For example:

Unit-carrying types: check for unit errors (adding feet to seconds) at compile time. I can be sure there are no unit errors without having to execute every possible code path.

Domain-specific compile-time optimizations: evaluate an O(n^3) function at compile time so I can generate O(n) code rather than write O(n^2) code. If you do that at runtime, things get slower, not faster.

Any language that doesn't have a "compile time", evaluated only once for all code and before the product ships, can't do these.


December 02, 2009
== Quote from BCS (none@anon.com)'s article
> Hello dsimcha,
> > My biggest gripe about static verification is that it can't help you at all with high-level logic/algorithmic errors, only lower level coding errors.  Good unit tests (and good asserts), on the other hand, are invaluable for finding and debugging high-level logic and algorithmic errors.
> >
> I don't have a link or anything but I remember hearing about a study MS did
> about finding bugs and what they found is that every reasonably effective
> tool they looked at found the same amount of bugs (ok, within shouting distance,
> close enough that none of them could be said to be pointless) but different
> bugs. The way to find the most bugs is to attack it from many angle. If I
> can have a language that can totally prevent one class of bugs in vast swaths
> of code, that's a good thing, even if it does jack for another class of bugs.

Right, but the point I was making is that you hit diminishing returns on static verification very quickly.  If you have even very basic static verification, it will be enough to tilt the vast majority of your bugs towards high-level logic/algorithm bugs.
December 02, 2009
Hello dsimcha,

> == Quote from BCS (none@anon.com)'s article
> 
>> I don't have a link or anything but I remember hearing about a study
>> MS did about finding bugs and what they found is that every reasonably
>> effective tool they looked at found the same amount of bugs (ok,
>> within shouting distance, close enough that none of them could be said
>> to be pointless) but different bugs. The way to find the most bugs is
>> to attack it from many angle. If I can have a language that can
>> totally prevent one class of bugs in vast swaths of code, that's a
>> good thing, even if it does jack for another class of bugs.
>
> Right, but the point I was making is that you hit diminishing returns
> on static verification very quickly.  If you have even very basic
> static verification, it will be enough to tilt the vast majority of
> your bugs towards high-level logic/algorithm bugs.
> 

OTOH, if it's done well (doesn't get in my way) and is built into the language, any static verification is free from the end user's standpoint. Heck, even if it gets in your way, but only in strange cases where you're hacking around, it's still useful, because it tells you where the high-risk code is.


December 02, 2009
retard wrote:
> I agree some disciplines are hard to follow. For example, ensuring immutability in an inherently mutable language. But TDD is something a bit easier - it's a lot higher level. It's easy to remember that you can't write any code into the production code folder unless there is already code in the test folder. You can verify with code coverage tools that you didn't forget to write some tests. In TDD the whole code looks different. You build it to be easily testable. It's provably a good way to write code - almost every company nowadays uses TDD and agile methods such as Scrum.

I totally agree with the value of unittests. That's why D has them built into the language, and even has a code coverage analyzer built in, so you can see how good your unit tests are.

Where you and I disagree is on the notion that unit tests are a good enough replacement for static verification. For me it's like using a sports car to tow a trailer.
December 02, 2009
BCS, el  2 de diciembre a las 17:37 me escribiste:
> Hello Leandro,
> 
> 
> >If you say dynamic languages don't have metaprogramming capabilities, you just don't have any idea of what a dynamic language really is.
> >
> 
> If you say you can do metaprogramming at runtime you just don't have any idea of what I want to do with metaprogramming. For example:

What you describe next is not metaprogramming per se; those are performance issues (that you resolve using compile-time metaprogramming). You're missing the point.

> unit carrying types: check for unit errors (adding feet to seconds) at compile time. I can be sure there are no unit error without knowing if I've executed every possible code path.

There is no compile-time metaprogramming in dynamic languages; you just can't verify anything at compile time, so of course you can't do that!

Again, you are talking about performance issues. That's doable in a dynamic language; the checks are just run at run time.

> Domain specific compile time optimizations: Evaluate a O(n^3)
> function so I can generate O(n) code rather than write O(n^2) code.
> If you do that at runtime, things get slower, not faster.

Again, *optimization*. How many times should I say that I agree that D is better than almost every dynamic language if you need speed?

> Any language that doesn't have a "compile time" that is evaluated only once for all code and before the product ships, can't do these.

You are right, but if you *don't* need *speed*, you don't need all that stuff. That's not metaprogramming to fix a "logic" problem; they are all optimization tricks, and if you don't need speed, you don't need optimization tricks.

The kind of metaprogramming I'm talking about is, for example, generating boring, repetitive boilerplate code.

-- 
Leandro Lucarella (AKA luca)                     http://llucax.com.ar/
----------------------------------------------------------------------
GPG Key: 5F5A8D05 (F8CD F9A7 BF00 5431 4145  104C 949E BFB6 5F5A 8D05)
----------------------------------------------------------------------
It's better to try the taste of toad and find out it's nasty,
than not to try it and believe it's a great pear gummy.
	-- Dr Ricardo Vaporesso, Malta 1951
December 02, 2009
Hello Leandro,

> BCS, el  2 de diciembre a las 17:37 me escribiste:
> 
>> Hello Leandro,
>> 
>>> If you say dynamic languages don't have metaprogramming
>>> capabilities, you just don't have any idea of what a dynamic
>>> language really is.
>>> 
>> If you say you can do metaprogramming at runtime you just don't have
>> any idea of what I want to do with metaprogramming. For example:
>> 
> What you say next, is not metaprogramming per se, they are performance
> issues (that you resolve using compile-time metaprogramming). You're
> missing the point.
> 

No, you're missing MY point. I was very careful to add "what I want to do with" to my statement. It might not be true for you, but what I asserted is true for me. Most of the things *I* want from metaprogramming must be done as compile-time metaprogramming. Saying "dynamic languages can do something at run time" doesn't imply that there is nothing more to be had by doing it at compile time.

>> unit carrying types: check for unit errors (adding feet to seconds)
>> at compile time. I can be sure there are no unit error without
>> knowing if I've executed every possible code path.
>> 
> There is no compile time metaprogrammin in dynamic languages, you just
> can't verify anything at compile time, of course you can't do that!
> 
> Again, you are talking about performance issues, that's doable in a
> dynamic languages, the checks are just runned at run time.
> 

The reason for doing the checks at compile time is not performance but correctness. I want to know a priori that the code is correct rather than waiting until runtime.

>> Domain specific compile time optimizations: Evaluate a O(n^3)
>> function so I can generate O(n) code rather than write O(n^2) code.
>> If you do that at runtime, things get slower, not faster.
>> 
> Again *optimization*. How many times should I say that I agree that D
> is better than almost every dynamic languages if you need speed?

I'm not arguing that point. What I'm arguing is that (at least for me) the primary advantages of metaprogramming are static checks (for non-perf benefits) and performance. Both of these must be done at compile time. Runtime metaprogramming just seems pointless *to me*.

>> Any language that doesn't have a "compile time" that is evaluated
>> only once for all code and before the product ships, can't do these.
>> 
> You are right, but if you *don't* need *speed*, you don't need all
> that stuff, that's not metaprogramming to fix a "logic" problem, they
> are all optimization tricks, if you don't need speed, you don't need
> optimization tricks.

Personally, I'd rather use non-metaprogramming solutions where runtime solutions are viable. They are generally easier to work with (from the library author's standpoint) and should be just as powerful. The API might be a little messier, but you should be able to get just as much done with it.

> 
> The kind of metaprogramming I'm talking about is, for example,
> generating boring, repetitive boilerplate code.

For that kind of thing, if I had a choice between compile-time meta, run-time meta, and non-meta, the last one I'd use is run-time meta.


December 02, 2009
Leandro Lucarella wrote:
> BCS, el  2 de diciembre a las 17:37 me escribiste:
>> Hello Leandro,
>>
>>
>>> If you say dynamic languages don't have metaprogramming capabilities,
>>> you just don't have any idea of what a dynamic language really is.
>>>
>> If you say you can do metaprogramming at runtime you just don't have
>> any idea of what I want to do with metaprogramming. For example:
> 
> What you say next, is not metaprogramming per se, they are performance
> issues (that you resolve using compile-time metaprogramming). 

They are metaprogramming tasks. Dynamic languages can do some metaprogramming tasks. They can't do those ones.

> You are right, but if you *don't* need *speed*, you don't need all that
> stuff, that's not metaprogramming to fix a "logic" problem, they are all
> optimization tricks, if you don't need speed, you don't need optimization
> tricks.

"you don't need speed" is a pretty glib statement. I think the reality is that you don't care about constant factors in speed, even if they are large (say 200 times slower is OK). But bubble-sort is probably still not acceptable.
Metaprogramming can be used to reduce big-O complexity rather than just constant-factor improvement. Lumping that in with "optimisation" is highly misleading.

> The kind of metaprogramming I'm talking about is, for example, generating
> boring, repetitive boilerplate code.