April 29, 2012
On Sunday, 29 April 2012 at 07:00:51 UTC, foobar wrote:
> floppies, are you for real?!
> This is only relevant if you travel a decade or so back in time.
> The current generation dvd/Blu-ray discs and USB sticks aren't good enough for you?

 What? I like my floppies... or did when they worked.

 Depends on what I'm doing; I mean, a meg is a lot of space if you're primarily doing text and using compression. But I accept it is an old technology and won't be coming back.
April 29, 2012
On Sunday, 29 April 2012 at 08:30:15 UTC, Jonathan M Davis wrote:
> On Sunday, April 29, 2012 10:22:44 David Nadlinger wrote:
>> Yes, point taken, but what does this have to do with locality?
>
> I think that you're thinking of something completely different about the term
> locality than I meant. I meant locality in terms of how close the aa[key]
> expression was to the key in aa expression within the function - i.e. how
> local they were to each other. I may have misused the term though. I don't
> know.

Okay, then it was just a misunderstanding, I thought about memory locality.

David
April 29, 2012
On 04/29/2012 10:28 AM, SomeDude wrote:
> On Sunday, 29 April 2012 at 08:13:53 UTC, Nick Sabalausky wrote:
>>
>> There will still be sugar in the compiler so they appear to be builtins.
>> When the switch happens, I'm sure it'll be transparent - average users
>> probably won't even notice. It's just that "behind the scenes" their
>> implementation will move from DMD to Druntime.
>
> Hmmm, sounds nice, but bolting the standard library to the language is
> very risky (and a rather bad idea imho). Unless there is a very
> lightweight, minimalistic core for Phobos (something which I advocate),
> you bolt a heavyweight library to your language, and that's not good.
>

druntime is the library it will be moved into, not Phobos.

April 29, 2012
On Sunday, 29 April 2012 at 08:31:57 UTC, Timon Gehr wrote:
>
> druntime is the library it will be moved into, not Phobos.

Ah, that's ok, then.

April 29, 2012
On 04/29/2012 10:20 AM, Jonathan M Davis wrote:
> On Sunday, April 29, 2012 10:07:38 David Nadlinger wrote:
>> On Saturday, 28 April 2012 at 23:51:43 UTC, Jonathan M Davis
>>
>> wrote:
>>> But even just storing it in a local variable to use later
>>> could destroy the locality enough to defeat LDC's optimization.
>>
>> Huh? I can't think of a situation where a hash table lookup would
>> entail less indirections than dereferencing a pointer stored in a
>> register (resp. the stack, depending on scheduling).
>
> If you have something like
>
> if(key in aa)
> {
>      // lots of code
>      func(aa[key]);
> }
>
> the compiler is not necessarily going to be able to determine that the AA has
> not been changed such that aa[key] can use the same lookup that key in aa did
> rather than redoing the lookup. A prime example would be
>
> if(key in aa)
> {
>      foo(aa);
>      func(aa[key]);
> }
>
> The compiler doesn't necessarily know that foo won't remove key from aa, so it
> can't save the result of key in aa to reuse rather than calling aa[key],
> whereas the programmer could know that foo doesn't do anything to aa which
> would make a pointer to the element invalid and can guarantee that only one
> lookup occurs by doing
>
> if(auto value = key in aa)
> {
>      foo(aa);
>      func(value);
> }
>
> And that's just one case where the compiler can't make such an optimization
> but the programmer can. It's _far_ better IMHO to have in return a pointer
> than to rely on the compiler removing extraneous lookups.
>
> - Jonathan M Davis

Well, what if the programmer "knows" that foo does not change 'aa', but it actually does? Then there would possibly be a segmentation fault. This implies that the 'in' operator cannot be used in @safe code. (Or there would have to be a special case, that allows 'in' if the result is directly cast to bool.)
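A minimal sketch of the hazard (hypothetical foo; assumed, contrary to the programmer's belief, to mutate the AA):

```d
int[string] aa;

void foo(ref int[string] aa)
{
    aa.remove("key"); // the programmer "knew" this wouldn't happen
}

void bar()
{
    if (auto p = "key" in aa)
    {
        foo(aa); // entry removed, table possibly rehashed...
        // ...so *p may now be a dangling dereference, which is
        // exactly why 'in' returning a pointer is a problem for @safe
    }
}
```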
April 29, 2012
On 04/29/2012 08:31 AM, Paulo Pinto wrote:
> Am 28.04.2012 20:47, schrieb Walter Bright:
>> Andrei and I had a fun discussion last night about this question. The
>> idea was which features in D are redundant and/or do not add significant
>> value?
>>
>> A couple already agreed upon ones are typedef and the cfloat, cdouble
>> and creal types.
>>
>> What's your list?
>
> - two different ways of creating function pointers is confusing
> (function and delegate)
>
> I understand the reasoning, but it makes one think all the time about
> which to use when.
>

'delegate' is more powerful, 'function' is more efficient. If you don't want to think about it, just use 'delegate'. I'd rather see 'function' implicitly convert to 'delegate' than to have it gone. D can be used for systems programming after all!
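The difference in a quick sketch (the explicit toDelegate wrapper from std.functional is what you have to use today instead of the implicit conversion argued for above):

```d
import std.functional : toDelegate;

int add1(int x) { return x + 1; }

void demo()
{
    int base = 10;
    int function(int) f = &add1;         // bare code pointer
    int delegate(int) d = x => base + x; // code pointer + context pointer
    assert(f(1) == 2);
    assert(d(1) == 11);
    // d = f;          // no implicit conversion today
    d = toDelegate(f); // explicit wrapping works
    assert(d(1) == 2);
}
```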

> - sometimes D code looks like template and mixins gone mad
> While I do appreciate the power, it can be quite confusing to try
> to understand what the code does. Specially with the lack of support
> in mixin's debugging
>

pragma(msg, ...) ?
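For example, the generated source of a string mixin can be printed at compile time before it is mixed in:

```d
enum code = q{ int answer = 42; };
pragma(msg, "mixing in: " ~ code); // printed during compilation
mixin(code);
```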

>
> - misuse of enum to declare constants
> I would prefer that using const were possible
>

const infects the type and const-qualified data can exist at runtime, so it is not possible.

> - conditional compilation is hard to follow without syntax highlighting
> Other languages with conditional compilation make it easier to follow
> what is what. e.g. Turbo Pascal/Delphi, C#, Modula-3, Ada
>

That is not a language issue.


> While it is fun to discuss what we like and not like, I vote that
> priority should be given to make the language stable and have better
> tooling.
>
> We need to have safer languages with native code generation for systems
> programming in the mainstream OS, that take us away from the buffer
> overflow exploits and dangling pointers legacy that C and C++ brought
> upon us.
>
> Someone that does not know D and sees the amount of bugs still existing,
> or this type of discussions, will run away to Go or some future version
> of C#/Spec#/Bartok, or back to whatever he/she was using before.
>
> I don't agree D is complex, any language that aims to be used in large
> application domains, needs a certain set of abstractions. If it does not
> support them, it is condemned to keep getting new features until it
> turns into what the language designers were fighting against.
>

I agree with this section.

April 29, 2012
On Sunday, April 29, 2012 10:37:10 Timon Gehr wrote:
> Well, what if the programmer "knows" that foo does not change 'aa', but it actually does? Then there would possibly be a segmentation fault. This implies that the 'in' operator cannot be used in @safe code. (Or there would have to be a special case, that allows 'in' if the result is directly cast to bool.)

It's exactly as safe as any iterator or range which could be invalidated - both of which can occur in safe code. Any of those could blow up in entertaining ways if you use them after they've been invalidated.

Pointers are considered safe. It's pointer arithmetic which isn't.

- Jonathan M Davis
April 29, 2012
On 04/29/2012 01:25 AM, foobar wrote:
> On Saturday, 28 April 2012 at 20:43:38 UTC, Timon Gehr wrote:
>> On 04/28/2012 09:58 PM, foobar wrote:
>>> On Saturday, 28 April 2012 at 18:48:18 UTC, Walter Bright wrote:
>>>> Andrei and I had a fun discussion last night about this question. The
>>>> idea was which features in D are redundant and/or do not add
>>>> significant value?
>>>>
>>>> A couple already agreed upon ones are typedef and the cfloat, cdouble
>>>> and creal types.
>>>>
>>>> What's your list?
>>>
>>> D has a lot of ad-hoc features which make the language
>>> needlessly large and complex. I'd strive to replace these with
>>> better general purpose mechanisms.
>>>
>>> My list:
>>> * I'd start with getting rid of foreach completely. (not just
>>> foreach_reverse).
>>
>>
>> foreach is very useful. Have you actually used D?
>>
>
> I have used D and didn't claim that foreach isn't useful.
> What I said is that it belongs in the library, NOT the language.
>

Therefore you say that it is not useful as a language feature.

>>> This is nothing more than a fancy function with
>>> a delegate parameter.
>>>
>>
>> That would be opApply.
>
> Indeed but I'd go even further by integrating it with ranges so that
> ranges would provide an opApply like method e.g.
> auto r = BinaryTree!T.preOrder(); // returns range
> r.each( (T elem) { ...use elem...}); // each method a-la Ruby
>

Well, I don't think this is better than built-in foreach (with full break and continue and goto even for user-defined opApply!)
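A sketch of the protocol that makes that work: the foreach body is lowered to a delegate, and a nonzero return value from it encodes break/goto, which opApply must propagate:

```d
struct Tree(T)
{
    T[] items; // flattened stand-in for a real tree traversal

    int opApply(int delegate(ref T) dg)
    {
        foreach (ref x; items)
            if (auto r = dg(x)) // nonzero means break/goto in the body
                return r;       // propagate it out of the iteration
        return 0;               // 0 means the loop ran to completion
    }
}
```

So `foreach (x; tree) { if (x == 3) break; }` just works, which a plain each() callback cannot express.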

>>
>>> * enum - enum should be completely redesigned to only implement
>>> what it's named after: enumerations.
>>>
>>
>> What is the benefit?
>
> On the one hand the current enum for manifest constants is a hack due to
> weaknesses of the toolchain

I think that is actually not true. It might have been the original motivation, but it has gone beyond that. Which weaknesses in particular? I don't think that the toolchain can be improved in any way in this regard.

> and on the other hand it doesn't provide
> properly encapsulated enums

Those could in theory be added without removing the manifest constant usage.

> such as for instance the Java 5.0 ones or
> the functional kind.
>

An algebraic data type is not an 'enumeration', so this is a moot point.

>>
>>> * version - this does not belong in a programming language. Git
>>> is a much better solution.
>>>
>>
>> So you'd maintain a git branch for every OS if there is some small
>> part that is OS-dependent? I don't think that is a better approach at
>> all.
>
> It is far better than having a pile of #ifdef styled spaghetti code.
> I'd expect to have all the OS specific code encapsulated separately anyway,
> not spread around the code base. Which is the current recommended way of
> using
> versions anyway. The inevitable conclusion would be to either use a
> version management system like git or have separate implementation
> modules for platform specific code and use the build tool to implement
> the logic of selecting the modules to include in the build.
>

Which projects that you are aware of actually use this kind of versioning?

>>
>>> * di files - a library should encapsulate all the info required
>>> to use it. Java Jars, .Net assemblies and even old school; Pascal
>>> units all solved this long ago.
>>>
>>> * This is a big one: get rid of *all* current compile time
>>> special syntax.
>>
>> What would that be exactly?
>
> This includes __traits, templates, static ifs, etc..
>

This is what makes D useful to me.

>>
>>> It should be replaced by a standard compilation
>>> API and the compiler should be able to use plugins/addons.
>>
>> Are you serious?
>
> No I'm joking.
>
> The current system is a pile of hacks on top of the broken model of c++
> templates.
>
> I should be able to use a *very* minimalistic system to write completely
> _regular_ D code and run it at different times.

Examples in concrete syntax? How would you replace eg. string mixin functionality?


> This is a simple matter
> of separation of concerns: what we want to execute (what code) is
> separate to the concern of when we want to execute it.
>

It is not. For example, code that is only executed during CTFE never has to behave gracefully if the input is ill-formed.


April 29, 2012
On Sunday, 29 April 2012 at 08:58:24 UTC, Timon Gehr wrote:
>>[...]
>> Indeed but I'd go even further by integrating it with ranges so that
>> ranges would provide an opApply like method e.g.
>> auto r = BinaryTree!T.preOrder(); // returns range
>> r.each( (T elem) { ...use elem...}); // each method a-la Ruby
>>
>
> Well, I don't think this is better than built-in foreach (with full break and continue and goto even for user-defined opApply!)

I think we've reached a matter of taste here. How often do you actually use these features in your regular code? I prefer a more functional style with higher-order functions (map/reduce/filter/etc.), so for me foreach is about applying something to all elements and doesn't entail usage of break/continue/etc.
I'll use these constructs in a for loop but not a foreach loop.
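A minimal each() of the kind described above could indeed live in the library (hypothetical helper; not something Phobos provides under that name):

```d
import std.range; // brings in range primitives for arrays too

// Hypothetical each(), roughly the Ruby-style method suggested above.
void each(R, F)(R r, F fun) if (isInputRange!R)
{
    for (; !r.empty; r.popFront())
        fun(r.front);
}

void demo()
{
    int sum;
    each([1, 2, 3], (int x) { sum += x; });
    assert(sum == 6);
}
```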

>
>>>
>>>> * enum - enum should be completely redesigned to only implement
>>>> what it's named after: enumerations.
>>>>
>>>
>>> What is the benefit?
>>
>> On the one hand the current enum for manifest constants is a hack due to
>> weaknesses of the toolchain
>
> I think that is actually not true. It might have been the original motivation, but it has gone beyond that. Which weaknesses in particular? I don't think that the toolchain can be improved in any way in this regard.

The weakness as far as I know is about link-time optimization of constants.
But regardless, my ideal implementation of so-called "compile-time" features, including compile-time constants, would be very different anyway.

>
>> and on the other hand it doesn't provide
>> properly encapsulated enums
>
> Those could in theory be added without removing the manifest constant usage.
>
>> such as for instance the Java 5.0 ones or
>> the functional kind.
>>
>
> An algebraic data type is not an 'enumeration', so this is a moot point.

I disagree. They are a generalization of the concept. In fact, functional languages such as ML implement C-style enums as an algebraic data type.

>>[...]
>>
>> I should be able to use a *very* minimalistic system to write completely
>> _regular_ D code and run it at different times.
>
> Examples in concrete syntax? How would you replace eg. string mixin functionality?
>
>
>> This is a simple matter
>> of separation of concerns: what we want to execute (what code) is
>> separate to the concern of when we want to execute it.
>>
>
> It is not. For example, code that is only executed during CTFE never has to behave gracefully if the input is ill-formed.

I disagree - you should make sure the input is valid, or all sorts of bad things could happen, such as the compiler getting stuck in an infinite loop. If you only use a batch-mode compiler you can simply kill the process (which, btw, applies just the same to your user program). However, if you use a compiler integrated into your IDE, a crash could cause me to lose part of my work.
April 29, 2012
On Saturday, 28 April 2012 at 21:12:39 UTC, SomeDude wrote:
> On Saturday, 28 April 2012 at 20:59:48 UTC, q66 wrote:
>>
>> This kind of attitude "we need big fat bullshit like Java and heavy use of OO and idioms and EH and all that other crap" is broken and false. And you have no way to prove that Python for example wouldn't scale for large projects; its main fault is that the default implementation is rather slow, but it isn't really missing anything required for a large project.
>
> Python has two big drawbacks for large projects:
> - it's too slow
> - it's a dynamically-typed language
>
> The fact that it's flexible is because it uses duck typing, and AFAIK you can't do duck typing in a statically typed language.
> So it's cool for small programs, but it can't handle large ones because it's not statically typed. And this opinion doesn't come just out of thin air, I speak from my own professional experience.

Go is a statically typed language with duck typing (when using interfaces, anyway), AFAIK.

http://golang.org/doc/go_faq.html#implements_interface
http://research.swtch.com/interfaces
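The structural (duck-typed but static) matching those links describe, in a small sketch:

```go
package main

import "fmt"

// No type ever declares that it implements Named; satisfaction is
// purely structural and checked at compile time.
type Named interface {
	Name() string
}

type User struct{ first string }

func (u User) Name() string { return u.first }

func greet(n Named) string { return "hello, " + n.Name() }

func main() {
	fmt.Println(greet(User{"Ada"})) // prints "hello, Ada"
}
```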

/Jonas