Thread overview
Re: Necessities for Adoption of D
Feb 11, 2008
Burton Radons
Feb 11, 2008
Jesse Phillips
Feb 11, 2008
Burton Radons
Feb 11, 2008
Jesse Phillips
Feb 11, 2008
Hans W. Uhlig
February 11, 2008
Hans W. Uhlig Wrote:

> 3) Good, native parallelism - With dual core, quad core, or the obscene number of cores in IBM's Cell processor, none of the current C-syntax family handles parallelism natively and "well".

Do you mean in the library or in the language? I don't think D can be a magically-parallelising language; it's not constructed for it, and while it's a cool trick in those languages which do it I'd be worried about it not parallelising important code because of some functional test incorrectly failing and making it run sequentially. But it could handle something like OpenMP more elegantly, where you tell the compiler what can be parallelised, a la:

	// The compiler can and probably should split this into pieces.
	parallel foreach (i, foo)

	// The compiler can also do these two tasks at the same time.
	parallel
	{
		// Pause any other worker threads so that only this thread can work during this statement.
		synchronized
		{
		}

		// But this only causes us to pause this thread if another thread is using the object.
		synchronized (object)
		{
		}
	}

	parallel bar ();

	// But everything's gotta be done before calling this!
	baz ();

That would be a quick addition to the compiler that could be expanded over time (both in how well the compiler handles the instruction and in the semantics), rather than taking a month just to get it to a semi-working state. It's something that would retain its usefulness even if the compiler starts to learn how to automatically parallelise code, and it's simple (probably too simple - I haven't explored OpenMP much) while still making a material contribution to the language.

I'm working on this issue in a library, but there's a limit to how elegant I can make it. Also I have no freaking idea what I'm doing. It hasn't stopped me before, but it does slow me down.

> 6) Well-supported external libs - In Java, if you need an XML library there are 30; in Perl, if you need a MySQL link, there is one that is regularly updated. This one takes people using the language and doing such things, but it's important nonetheless. There is no reason to reinvent the wheel 10 times when someone has done it for you. (I am well aware that many programmers will reinvent the wheel 'right' again anyway)

I half agree and half disagree. I can see how this allows outsiders to get a simple, cohesive view of a language. On the other hand, there are often manifest problems or deficiencies in a library, and variations on a theme allow us to explore different ways to address the issue and find the best way to implement it. This is particularly important with D, which has certain features that no other language has and that need implementing in new ways. It's not pointless reinvention.
February 11, 2008
On Sun, 10 Feb 2008 22:59:49 -0500, Burton Radons wrote:

> Hans W. Uhlig Wrote:
> 
>> 3) Good, native parallelism - With dual core, quad core, or the obscene number of cores in IBM's Cell processor, none of the current C-syntax family handles parallelism natively and "well".
> 
> Do you mean in the library or in the language? I don't think D can be a magically-parallelising language; it's not constructed for it, and while it's a cool trick in those languages which do it I'd be worried about it not parallelising important code because of some functional test incorrectly failing and making it run sequentially.

Well, that's what D2 is getting ready for: magically-parallelising programs. It already supports threading, which is explicit. Granted it's not the simplest to use, but I think parallelising will be evolving soon for D. (And by soon I mean later.)
February 11, 2008
Jesse Phillips Wrote:

> On Sun, 10 Feb 2008 22:59:49 -0500, Burton Radons wrote:
> 
> > Hans W. Uhlig Wrote:
> > 
> >> 3) Good, native parallelism - With dual core, quad core, or the obscene number of cores in IBM's Cell processor, none of the current C-syntax family handles parallelism natively and "well".
> > 
> > Do you mean in the library or in the language? I don't think D can be a magically-parallelising language; it's not constructed for it, and while it's a cool trick in those languages which do it I'd be worried about it not parallelising important code because of some functional test incorrectly failing and making it run sequentially.
> 
> Well that's what D2 is getting ready for, magically-paralleling programs. It already supports threading which is explicit. Granted its not the simplest to use, but I think paralleling will be evolving soon for D. (And by soon I mean later)

Reference? I don't use 2.0. Anyway what I said holds IMO - when you want code to be executed in parallel, it's not something that should be left to an overly-cautious safety check. For instance, it'd be extremely hard to prove that asm blocks containing SIMD code can be safely parallelised since they may or may not be utilising their own output as input. Yet that's precisely the situation in which parallelisation needs to be done most aggressively. You can also make better decisions about when an array is going to get long enough to merit splitting, while the compiler can only do this after instrumenting. And when I'm optimising code, the last thing I want is for seemingly harmless changes to suddenly cause jumps of 30% in execution speed, which is what could happen with automatic parallelisation.
February 11, 2008
Burton Radons wrote:
> Hans W. Uhlig Wrote:
> 
>> 3) Good, native parallelism - With dual core, quad core, or the
>> obscene number of cores in IBM's Cell processor, none of the current
>> C-syntax family handles parallelism natively and "well".
> 
> Do you mean in the library or in the language? I don't think D can be
> a magically-parallelising language; it's not constructed for it, and
> while it's a cool trick in those languages which do it I'd be worried
> about it not parallelising important code because of some functional
> test incorrectly failing and making it run sequentially. But it could
> handle something like OpenMP more elegantly, where you tell the
> compiler what can be parallelised, a la:
> 
> 	// The compiler can and probably should split this into pieces.
> 	parallel foreach (i, foo)
> 
> 	// The compiler can also do these two tasks at the same time.
> 	parallel
> 	{
> 		// Pause any other worker threads so that only this thread can work during this statement.
> 		synchronized
> 		{
> 		}
> 
> 		// But this only causes us to pause this thread if another thread is using the object.
> 		synchronized (object)
> 		{
> 		}
> 	}
> 
> 	parallel bar ();
> 
> 	// But everything's gotta be done before calling this!
> 	baz ();
> 
> That would be a quick addition to the compiler that could be expanded
> over time (both in how well the compiler handles the instruction and
> semantics) rather than taking a month just to get it to a
> semi-working state, it's something that would retain its usefulness
> even if the compiler starts to learn how to automatically parallelise
> code, and it's simple (probably too simple, I haven't explored OpenMP
> much) while still making a material contribution to the language.
> 
> I'm working on this issue in a library, but there's a limit to how
> elegant I can make it. Also I have no freaking idea what I'm doing.
> It hasn't stopped me before, but it does slow me down.
> 

Yes. Given both where computers are going and where they already are, native parallelism is one of the prime areas where a language will have to shine.

>> 6) Well-supported external libs - In Java, if you need an XML
>> library there are 30; in Perl, if you need a MySQL link, there is
>> one that is regularly updated. This one takes people using the
>> language and doing such things, but it's important nonetheless.
>> There is no reason to reinvent the wheel 10 times when someone has
>> done it for you. (I am well aware that many programmers will
>> reinvent the wheel 'right' again anyway)
> 
> I half agree and half disagree. I can see how this allows outsiders
> to get a simple, cohesive view of a language. On the other hand,
> there are often manifest problems or deficiencies in a library, and
> variations on a theme allow us to explore different ways to address
> the issue and find the best way to implement it. This is particularly
> important with D, which has certain features which no other language
> has and needs implementing in new ways. It's not pointless
> reinvention.

Very true, but when you move from hobbyist to production you need a single accepted standard you can show a pointy-haired project manager and say: we are using X from Y, and we can rely on them supporting it for X years. Personal development and corporate development have very different standards.
February 11, 2008
On Mon, 11 Feb 2008 02:07:29 -0500, Burton Radons wrote:

> Jesse Phillips Wrote:
> 
>> On Sun, 10 Feb 2008 22:59:49 -0500, Burton Radons wrote:
>> 
>> > Hans W. Uhlig Wrote:
>> > 
>> >> 3) Good, native parallelism - With dual core, quad core, or the obscene number of cores in IBM's Cell processor, none of the current C-syntax family handles parallelism natively and "well".
>> > 
>> > Do you mean in the library or in the language? I don't think D can be a magically-parallelising language; it's not constructed for it, and while it's a cool trick in those languages which do it I'd be worried about it not parallelising important code because of some functional test incorrectly failing and making it run sequentially.
>> 
>> Well that's what D2 is getting ready for, magically-paralleling programs. It already supports threading which is explicit. Granted its not the simplest to use, but I think paralleling will be evolving soon for D. (And by soon I mean later)
> 
> Reference? I don't use 2.0. Anyway what I said holds IMO - when you want code to be executed in parallel, it's not something that should be left to an overly-cautious safety check. For instance, it'd be extremely hard to prove that asm blocks containing SIMD code can be safely parallelised since they may or may not be utilising their own output as input. Yet that's precisely the situation in which parallelisation needs to be done most aggressively. You can also make better decisions about when an array is going to get long enough to merit splitting, while the compiler can only do this after instrumenting. And when I'm optimising code, the last thing I want is for seemingly harmless changes to suddenly cause jumps of 30% in execution speed, which is what could happen with automatic parallelisation.

I don't recall the news posts that discussed such possibilities, but it relates to the fact that D is heading towards adding more functional-style programming. This is where const and invariant are playing a big role. I don't know if Walter has come out and said it will allow for automatic parallelising, but any move toward functional coding lends itself to such possibilities.