September 25, 2008
Andrei Alexandrescu wrote:
> Lars Ivar Igesund wrote:
>> Andrei Alexandrescu wrote:
>>
>>> A slice is a range alright without any extra adaptation. It has some
>>> extra functions, e.g. ~=, that are not defined for ranges.
>>
>> Aren't slices const/readonly/whatnot and thus ~= not possible without
>> copying/allocation?
> 
> Well there's no change in semantics of slices (meaning T[]) between D1 and D2, so slices mean business as usual. Maybe you are referring to strings, aka invariant(char)[]?
> 
> Anyhow, today's ~= behaves really really erratically. I'd get rid of it if I could. Take a look at this:
> 
> import std.stdio;
> 
> void main(string args[]) {
>     auto a = new int[10];
>     a[] = 10;
>     auto b = a;
>     writeln(b);
>     a = a[1 .. 5];
>     a ~= [ 34, 345, 4324 ];
>     writeln(b);
> }
> 
> The program will print all 10s twice. But if we replace a[1 .. 5] with a[0 .. 5], the behavior will be very different: a will grow "over" b, thus stomping over its content.
> 
> This is really bad because the behavior of a simple operation ~= depends on the history of the slice on the left hand side, something often extremely difficult to track, and actually impossible if the slice was received as an argument to a function.
> 
> IMHO such a faux pas is inadmissible for a modern language.
> 
> 
> Andrei

Cool, good to see this is going to be taken care of; it's a horrible wart.
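
To make the contrast concrete, here is a minimal standalone sketch of the stomping case, assuming the array-append behavior of the D2 runtime at the time (as far as I understand, later druntime versions track the block's used length and reallocate instead of stomping):

import std.stdio;

void main() {
    auto a = new int[10];
    a[] = 10;
    auto b = a;              // b aliases the same heap block as a
    a = a[0 .. 5];           // slice now starts at the beginning of the block
    a ~= [ 34, 345, 4324 ];  // the old runtime may grow a in place here...
    writeln(b);              // ...overwriting b[5 .. 8]; a safe runtime reallocates
}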


-- 
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D
September 25, 2008
Bruno Medeiros wrote:
> Andrei Alexandrescu wrote:
>>
>> This is because I make next to no money so I can afford to work on basic research, which is "important" in a long-ranging way. Today's computing is quite disorganized and great energy is expended on gluing together various pieces, protocols, and interfaces. I've worked in that environment quite a lot, and dealing with glue can easily become 90% of a day's work, leaving only little time to get occupied with a real problem, such as making a computer genuinely smarter or at least more helpful towards its user. All too often we put a few widgets on a window and the actual logic driving those buttons - the "smarts", the actual "work" gets drowned by details taking care of making that logic stick to the buttons.
>>
> 
> Well, didn't you find a "real problem" right there (and a very interesting one), in trying to make code/libraries/methodologies/tools/whatever that reduce that 90% of work spent on boilerplate details?
> An example could be the years of investment and research in ORM frameworks (Hibernate/EJB3, Ruby on Rails, etc.): although ORM technology has existed for many years, only recently has it reached a point where it's really easy and non-tedious to write an OO-DB persistence mapping.
> Another possible example, regarding GUI programming like you mentioned, is data binding. I haven't used it myself yet, but from what they describe, its purpose is indeed to reduce a lot of the complexity and tedium of writing code to synchronize the UI with the model/logic, and vice versa.
> Learning and building these kinds of things is, IMO, the pinnacle of software engineering.

This hardly characterizes or answers my point. Of course wherever there's difficulty there's opportunity for automation, and research in software engineering is alive and well. My point was that much effort in the industry today is expended on dealing with effects instead of fighting the causes.

Andrei
September 25, 2008
Andrei Alexandrescu wrote:
> Bruno Medeiros wrote:
>> Andrei Alexandrescu wrote:
>>>
>>> This is because I make next to no money so I can afford to work on basic research, which is "important" in a long-ranging way. Today's computing is quite disorganized and great energy is expended on gluing together various pieces, protocols, and interfaces. I've worked in that environment quite a lot, and dealing with glue can easily become 90% of a day's work, leaving only little time to get occupied with a real problem, such as making a computer genuinely smarter or at least more helpful towards its user. All too often we put a few widgets on a window and the actual logic driving those buttons - the "smarts", the actual "work" gets drowned by details taking care of making that logic stick to the buttons.
>>>
>>
>> Well, didn't you find a "real problem" right there (and a very interesting one), in trying to make code/libraries/methodologies/tools/whatever that reduce that 90% of work spent on boilerplate details?
>> An example could be the years of investment and research in ORM frameworks (Hibernate/EJB3, Ruby on Rails, etc.): although ORM technology has existed for many years, only recently has it reached a point where it's really easy and non-tedious to write an OO-DB persistence mapping.
>> Another possible example, regarding GUI programming like you mentioned, is data binding. I haven't used it myself yet, but from what they describe, its purpose is indeed to reduce a lot of the complexity and tedium of writing code to synchronize the UI with the model/logic, and vice versa.
>> Learning and building these kinds of things is, IMO, the pinnacle of software engineering.
> 
> This hardly characterizes or answers my point. Of course wherever there's difficulty there's opportunity for automation, and research in software engineering is alive and well.

I was just pointing out that things don't have to be the way you described.

> My point was that much effort in the industry today is expended on dealing with effects instead of fighting the causes.
> 
> Andrei

But that's quite true nonetheless. :/


-- 
Bruno Medeiros - Software Developer, MSc. in CS/E graduate
http://www.prowiki.org/wiki4d/wiki.cgi?BrunoMedeiros#D