March 22, 2006
Anders F Björklund wrote:
> Jari-Matti Mäkelä wrote:
>> Gentoo is too "LFS"? Well, it's already possible to install Gentoo with a graphical installer, genkernel eliminates the need to hack on the kernel, and both package management and USE-flag editing have graphical frontends. The only problem is that there's no handy way to install up-to-date prebuilt binary packages compiled with custom USE-flags and CFLAGS. Debian/Slackware/Arch aren't really much "higher level".
> 
> I like Gentoo a lot, it sort of reminds me of FreeBSD which I also like. Found that doing a Gentoo install was well documented but very hands-on.
> 
> Just that marketing forces have me running distros originating from R.H. And I still think that Ubuntu and Linspire are less scary for "users"?

I also think they have very good features for easy tasks. Still, I prefer community-based distros for everyday hacking and developing bleeding-edge software.

> 
> So while I still have Gentoo installed, I don't really use it much :-( But maybe I should take another look at 2006.0, it looks much improved.

Maybe.

> 
>> I'm using Arch myself on slower PCs since portage is currently a bit
>> too "heavy".
> 
> Yeah, sounds like "typical Python" :-) But I have never seen Arch Linux. Sounds like a nice in-between of Slack and Gentoo, but I might be wrong?

Portage really needs a rewrite from scratch. Maybe some CPU-intensive parts of it could be written in D! :)

Yes, Arch is a lightweight distro for experienced users. It includes far fewer features, packages, and configuration scripts, but it's fast and "easy" to install and update. A good alternative for people who like Gentoo and ports-like package management, but don't want to spend all day compiling source packages.

-- 
Jari-Matti
March 22, 2006
Jari-Matti Mäkelä wrote:

>>Just that marketing forces have me running distros originating from R.H.
>>And I still think that Ubuntu and Linspire are less scary for "users"?
> 
> I also think they have very good features for easy tasks. Still I prefer
> community-based distros for everyday hacking and developing
> bleeding-edge software.

Yeah, I'm using Fedora Core... (mostly since I need to support RHL/RHEL)

>>But maybe I should take another look at 2006.0, it looks much improved.
> 
> Maybe.

It's now running an emerge from like 2004 or so, could take a while. :-)

> Portage really needs a rewrite from scratch. Maybe some CPU-intensive
> parts of it could be written in D! :)

For the record, Yum sucks just as much for RPM. Hmm, Python there too...

http://rhesa.hates-software.com/2005/11/06/707b5e44.html Prefer APT-RPM.

> Yes, Arch is a lightweight distro for experienced users. It includes far
> fewer features, packages, and configuration scripts, but it's fast and
> "easy" to install and update. A good alternative for people who like
> Gentoo and ports-like package management, but don't want to spend all
> day compiling source packages.

Sounded like a good alternative to Slackware, for the knowledgeable user.

--anders
March 22, 2006
In article <dvr64c$6sv$1@digitaldaemon.com>, kris says...
>
>Kyle Furlong wrote:
>> kris wrote:
>> 
>>> Walter Bright wrote:
>>>
>>>> "kris" <foo@bar.com> wrote in message news:4420779B.6020604@bar.com...
>>>>
>>>>> It would be more interesting if this were entitled D vs C++. After all, isn't that (as Mattias indicated) the target "competition" ?
>>>>
>>>>
>>>>
>>>> We've already had those threads in spades <g>.
>>>
>>>
>>> True <g>
>>>
>>> Did anyone mention DDL? Given that it would make D the only compiled language I've heard of with a runtime link-loader, that would seem to have some bearing?
>> 
>> 
>> Just what I was thinking. Can DDL make compile once, run everywhere possible? (assuming that is an idiom we would like to support)
>
>No, DDL does no such thing. Nor is it intended to (instead, it's deliberately machine-architecture specific).
>
>Functionality exposed by DDL is roughly the equivalent of a Java class-loader, but for pre-optimized native object-code exposing a D callable interface. This is a highly unusual attribute for native code runtime, and is (in my opinion) one of the most important assets for the D language. DDL also has the potential to support full reflection.

I couldn't have said it better myself. :)

A pleasant side-effect from all this is that it may help increase code mobility across the windows/linux/mac divide, for any dynamic D binaries (x86 object files that comply with the D ABI) that are free from OS-specific code. Granted, that's just theory for now, but it should be possible. While that's not "run-everywhere", it gets you far enough to make certain styles of plugin architectures very possible.

- EricAnderton at yahoo
March 22, 2006
In article <dvpq5q$1be1$1@digitaldaemon.com>, Matthias Spycher says...
>
>1. A dynamic compiler knows about the architecture of the machine, e.g. cache sizes; and the profile of the running application, e.g. I/O boundedness. The data path is likely to be the main bottleneck in coming years.
>

As someone else pointed out, there is no reason why this can't be done in static compilers - as it is done now by GCC, Intel C/C++/Fortran, MS VC++, and probably several other high-performance compilers.

>2. Languages like Java have the advantage that they don't expose the actual layout of objects in memory to the programmer. Any language with pointers has a disadvantage in the context of dynamic optimization.
>

If you're talking about the famous "pointer alias problem" then Java is certainly not immune to that (maybe even less so because of all of the references floating around).

If you're talking about memory, see: http://www-128.ibm.com/developerworks/java/library/j-jtp06243.html

>3. Multicore/multithreaded systems will provide for enough computational bandwidth for dynamic compilers and GCs to run in parallel with the programs they operate on. Performance degradation due to compilation at runtime will become a moot point on server systems very soon.
>

A compiler on these systems can be extremely complex, and a VM even more so. A contemporary case in point is the Itanium compiler (Itanium isn't multi-core, but it's supposed to drive many pipelines per clock, and a big part of that is the compiler's job). If anything, the difference between an Itanium static compiler and Java VMs is more pronounced on these systems because of the amount of work involved in optimizing for them. Chips like IBM's Cell will make the problem even worse, not better.

>4. The ability to run multiple applications in the same runtime context (i.e. sharing the heap, the GC, and the dynamically compiled bits of code) will reduce the overhead (which today is clearly a big issue for certain applications). Such a feature will be more easily implemented for a language like Java than D.

Perhaps, but there will also be much more 'context switching' and synchronization, and there will always be an overhead involved there.


March 22, 2006
pragma wrote:
> In article <dvr64c$6sv$1@digitaldaemon.com>, kris says...
>> Kyle Furlong wrote:
>>> kris wrote:
>>>
>>>> Did anyone mention DDL? Given that it would make D the only compiled language I've heard of with a runtime link-loader, that would seem to have some bearing?
>>>
>>> Just what I was thinking. Can DDL make compile once, run everywhere possible? (assuming that is an idiom we would like to support)
>> No, DDL does no such thing. Nor is it intended to (instead, it's deliberately machine-architecture specific).
>>
>> Functionality exposed by DDL is roughly the equivalent of a Java class-loader, but for pre-optimized native object-code exposing a D callable interface. This is a highly unusual attribute for native code runtime, and is (in my opinion) one of the most important assets for the D language. DDL also has the potential to support full reflection.
> 
> I couldn't have said it better myself. :)
> 
> A pleasant side-effect from all this is that it may help increase code-mobility
> across the windows/linux/mac divide, for any dynamic D binaries (x86 object
> files that comply with the D ABI) that are free from OS-specific code.
> Granted, that's just theory for now, but it should be possible. While that's
> not "run-everywhere", it gets you far enough to make certain styles of plugin
> architectures very possible.
> 
> - EricAnderton at yahoo

In digitalmars.com digitalmars.D:35128, Walter said of the difference in reals between Linux and Windows:

> > pragma's DDL lets you (to some extent) mix Linux and Windows .objs.
> > Eventually, we may need some way to deal with the different padding.

I think it's a pipe dream to expect to be able to mix obj files between
operating systems. The 96 bit thing is far from the only difference.

Now, he's quite knowledgeable, but I'd love to prove him wrong on this one. I find it hard to believe that it would be impossible. I guess the question is, will the subset of functionality that works be sufficient to be useful? I guess we won't know until the ELF side is working.

"Compile once, run everywhere that matters"? (Win, Linux, Intel Mac).

Exception handling (especially Windows SEH) might be a big problem, maybe a show stopper?
March 22, 2006
Don Clugston wrote:

> Now, he's quite knowledgeable, but I'd love to prove him wrong on this one. I find it hard to believe that it would be impossible. I guess the question is, will the subset of functionality that works be sufficient to be useful? I guess we won't know until the ELF side is working.

Heh, that would be cool. Hmm, drat, I'm the one supposed to do the ELF part... well, I'll get to it still. As for cross-OS compatibility, it would have been easier if COFF were still widely used in the unix world; ELF is a totally different beast regarding data structures and link semantics. But I will plug away at it again soon, and DDL might even get some "professional" attention from me ;)
March 22, 2006
In article <dvrrca$103i$1@digitaldaemon.com>, Don Clugston says...
>
>In digitalmars.com digitalmars.D:35128, Walter said of the difference in reals between Linux and Windows:
>
> > > pragma's DDL lets you (to some extent) mix Linux and Windows .objs. Eventually, we may need some way to deal with the different padding.
>
>I think it's a pipe dream to expect to be able to mix obj files between operating systems. The 96 bit thing is far from the only difference.
>

I read Walter's remark, and it came to me like a bolt from the blue.

>Now, he's quite knowledgeable, but I'd love to prove him wrong on this one. I find it hard to believe that it would be impossible. I guess the question is, will the subset of functionality that works be sufficient to be useful? I guess we won't know until the ELF side is working.
>
>"Compile once, run everywhere that matters"? (Win, Linux, Intel Mac).

Pipe dream or not, I think it's worth looking into. And you're right: the portable subset of features may be just barely usable. Until we get some people really pounding away on this, we'll never quite know.

>Exception handling (especially Windows SEH) might be a big problem, maybe a show stopper?

I must confess: I don't know enough. The 96/80-bit real thing is one issue, and if the D ABI doesn't specify what the exception mechanism is, then that becomes vendor/platform-specific too. Is there anything else?

I suppose I made the mistake of assuming that the D ABI was to become more encompassing than it presently is. My understanding was that D aimed to fix things at the binary level as well as in the source code. After all, C/C++ doesn't have a strong ABI and suffers directly because of it. It would be nice if we knew exactly what was left up to the compiler writers and what was not - at least then one could make some solid recommendations for this mode of development. :(

- EricAnderton at yahoo
March 22, 2006
Dave wrote:
>> 2. Languages like Java have the advantage that they don't expose the actual layout of objects in memory to the programmer. Any language with pointers has a disadvantage in the context of dynamic optimization.
>>
> 
> If you're talking about the famous "pointer alias problem" then Java is
> certainly not immune to that (maybe even less so because of all of the
> references floating around).
>

True, but accurate garbage collection is a requirement if you're going to scale to support large, long-running applications, and C-style pointer functionality rules it out. The D community might (in the future) consider the introduction of a managed D subset that would make accurate GC possible.

>> 3. Multicore/multithreaded systems will provide for enough computational bandwidth for dynamic compilers and GCs to run in parallel with the programs they operate on. Performance degradation due to compilation at runtime will become a moot point on server systems very soon.
>>
> 
> A compiler on these systems can be extremely complex, a VM even more so. A
> contemporary case in point is the Itanium compiler (it's not multi-core, but
> supposed to operate many pipelines per clock and a big part of that is the
> compilers job). If anything the difference between an Itanium static compiler
> and Java VM's is more pronounced on these systems because of the amount of work
> involved in optimizing for them. Chips like IBM's Cell system will make the
> problem even worse, not better.

I agree it's not easy, especially for asymmetrical multi-core processors like Cell. Time will tell. I don't believe dynamically compiled apps will consistently beat the equivalent statically compiled program. But for many apps the performance difference will probably be similar to that between an assembly program and the equivalent C/C++ implementation. And that will have to be weighed against all other factors, e.g. productivity during development, deployment costs, maintenance, etc.

Matthias
March 22, 2006
pragma wrote:
> In article <dvrrca$103i$1@digitaldaemon.com>, Don Clugston says...
>> In digitalmars.com digitalmars.D:35128, Walter said of the difference in reals between Linux and Windows:
>>
>>>> pragma's DDL lets you (to some extent) mix Linux and Windows .objs.
>>>> Eventually, we may need some way to deal with the different padding.
>> I think it's a pipe dream to expect to be able to mix obj files between
>> operating systems. The 96 bit thing is far from the only difference.
>>
> 
> I read Walter's remark, and it came to me like a bolt from the blue.
> 
>> Now, he's quite knowledgeable, but I'd love to prove him wrong on this one. I find it hard to believe that it would be impossible. I guess the question is, will the subset of functionality that works be sufficient to be useful? I guess we won't know until the ELF side is working.
>>
>> "Compile once, run everywhere that matters"? (Win, Linux, Intel Mac).
> 
> Pipe dream or not, I think it's worth looking into. And you're right: the
> portable subset of features may be just barely usable.  Until we get some people
> really pounding away on this, we'll never quite know.

For what it's worth, there was a thread on comp.std.c++ recently about a standard shared library format, and someone said that library formats have recently become sufficiently similar that this is a possibility.


Sean
March 22, 2006
Matthias Spycher wrote:
> Dave wrote:
>>> 2. Languages like Java have the advantage that they don't expose the actual layout of objects in memory to the programmer. Any language with pointers has a disadvantage in the context of dynamic optimization.
>>
>> If you're talking about the famous "pointer alias problem" then Java is
>> certainly not immune to that (maybe even less so because of all of the
>> references floating around).
> 
> True, but accurate garbage collection is a requirement if you're going to scale to support large, long-running applications, and C-style pointer functionality rules it out. The D community might (in the future) consider the introduction of a managed D subset that would make accurate GC possible.

The D standard doesn't have any language that prevents this.  I think it would be quite possible to implement an incremental GC in D if one had control over code generation.

> I agree it's not easy, especially for asymmetrical multi-core processors like Cell. Time will tell. I don't believe dynamically compiled apps will consistently beat the equivalent statically compiled program. But for many apps the performance difference will probably be similar to that between an assembly program and the equivalent C/C++ implementation. And that will have to be weighed against all other factors, e.g. productivity during development, deployment costs, maintenance, etc.

I suppose it's a good thing that there's nothing stopping someone from compiling D code to a VM target either :-)


Sean