February 14, 2012
On 2/13/12 1:17 PM, Andrei Alexandrescu wrote:
> Agreed. There are two issues I see here in my opinion. First, putting
> some of our manpower in a small subset of D for tiny embedded systems is
> a misplaced investment because it would make a small impact at best.
> Second, coming up with another D-derived brand is a bad marketing move.
> We've been hurt for too long a time by D1/D2. With that in mind, if
> working on D for small embedded systems is what you like, I encourage
> you to go down that path and see what you discover.
>
> Andrei

Once again, you're correct. I have little to add to this, except to say that when I first read the topic I was rather excited, and I read on hoping to see a discussion of what exactly would be involved in stripping out a chunk of D so that it could produce extremely small programs. I wonder if my desire to read these threads for educational purposes is at odds with their other functions, such as internal debates about where the language should be headed.

I don't know what to say about the D1/D2 debacle. It seems like D's extraordinary dedication to "getting it right" has had some unfortunate side effects. Perhaps it's possible to interpret D's past as a sort of "nekyia" on D's way to a more glorious future:

http://en.wikipedia.org/wiki/Nekyia
February 14, 2012
On Monday, February 13, 2012 20:41:01 Zachary Lund wrote:
> On 02/11/2012 07:00 PM, Jonathan M Davis wrote:
> > On Sunday, February 12, 2012 01:40:50 Zachary Lund wrote:
> >> Btw, I'm not very fluent in the inner workings of a garbage collector implementation but how does one go about concatenating to an array in a garbage collector as compared to manual memory management? I believe array concatenation can be done in std::vector with the insert() function just fine. Isn't it as simple as determining both array sizes and allocating enough memory for both arrays? I could be oversimplifying things...
> > 
> > Read this:
> > 
> > http://www.dsource.org/projects/dcollections/wiki/ArrayArticle
> > 
> > Appending to vectors is very different from appending to dynamic arrays because dynamic arrays do _not_ own their own memory (the GC does) and not only could other slices refer to the same memory (in which case, you can't just free that memory when an array gets reallocated due to running out of space to append to), but they could already refer to the memory one past the end of the array, making it so that it can't expand into that memory.
> > 
> > Slices change the entire equation. And the way slices are designed, they require the GC.
> > 
> > - Jonathan M Davis
> 
> Right, but they use the same semantics, and the information to do either seems to be present. I'm not sure why an abstraction between the two cannot be made.

I'm not sure what you're saying. All dynamic arrays are slices and _none_ of them own their own memory. As such, you need a way to manage their memory. Vectors, on the other hand, own their memory and do not allow slicing at all.
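To make that concrete, here's a minimal sketch (plain D, nothing beyond the runtime) of two slices sharing one GC-allocated block, which is exactly why append can't simply free and reallocate the way a vector can:

import std.stdio;

void main()
{
    int[] a = [1, 2, 3, 4]; // one GC-allocated block
    int[] b = a[0 .. 2];    // a view into the *same* block

    a ~= 5; // may grow in place or relocate a; either way the
            // original block has to stay alive, because b still
            // refers to it - only the GC knows when to free it

    b[0] = 42;  // b's memory is valid no matter what ~= did
    writeln(b); // [42, 2]
}

A vector can free its old buffer on reallocation precisely because nothing else is allowed to point into it; with slices, the allocator has no idea how many views of a block are still live.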

- Jonathan M Davis
February 14, 2012
"Iain Buclaw" <ibuclaw@ubuntu.com> wrote in message
> I think it starts with a runtime library that is written for the given architecture in mind.  The compiler is already there in my opinion, and I have seen little reason for it not to be.

Is the compiler really ready?

Assuming that it's possible to build gdc as (e.g.) an AVR cross-compiler,
you'll still need support for:
- 16 bit pointers
- program memory
- interrupts
- IO registers
- stripping out typeinfo/moduleinfo
- issuing errors for unsupported features
- inline assembler

If gdc can do most of this already, that's great - but I doubt it can do all of it.
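To give a feel for what's missing, here's a purely hypothetical sketch of a blinky-LED program in an imagined AVR-capable D. The @interrupt attribute, the @progmem placement, and the IO register addresses are all invented for illustration (loosely modeled on avr-gcc's ISR/PROGMEM conventions) and exist in no current compiler:

// Hypothetical throughout: no compiler supports any of this today.

@progmem immutable ubyte[4] pattern = [0x01, 0x02, 0x04, 0x08]; // flash, not RAM

@interrupt("TIMER0_OVF") void onTimerOverflow()
{
    static ubyte i;
    auto portb = cast(shared ubyte*)0x25; // invented memory-mapped IO address
    *portb = pattern[i++ & 3];
}

void main()
{
    auto ddrb = cast(shared ubyte*)0x24;  // invented memory-mapped IO address
    *ddrb = 0x0F;                         // lower four pins as outputs
    for (;;) {}                           // no druntime, no GC, no TypeInfo
}

Every line of that needs something the frontend or glue code doesn't currently provide: an ISR calling convention, a flash address space, volatile-safe IO access, and a runtime small enough to fit.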


February 14, 2012
"Adam D. Ruppe" <destructionator@gmail.com> wrote in message news:rdluxkzwxsxlxfgcaqle@dfeed.kimsufi.thecybershadow.net...
> On Sunday, 12 February 2012 at 17:45:46 UTC, Daniel Murphy wrote:
>> Turns out I can't help myself:
>>
>> https://github.com/yebblies/dmd/tree/microd
>
> hmmm....
>
> Do you see anything wrong with using a similar strategy to output something like C# or Java?
>
> It might be a relatively easy way to get (a subset of) D
> integrated into those environments.

C works because you can build all the high-level features of D (classes, exceptions, etc.) out of C and assembler.  You can build high-level features out of low-level ones, but not the other way around.  I don't know how C#/Java would handle things like value types, unions, pointers, interior pointers, etc.  I'm not sure, but I think there might be some limitations in the underlying bytecode that prevent some of these things, and unlike C you can't drop down to assembler to fill in the gaps.
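For example, here's a small snippet (ordinary D, runnable today) exercising a union and an interior pointer into a stack value type; C expresses both directly, but neither has an obvious JVM bytecode encoding:

import std.stdio;

union Bits
{
    float f;
    uint  u;
}

struct Pair
{
    int a;
    int b;
}

void main()
{
    Bits bits;
    bits.f = 1.0f;
    writefln("0x%08x", bits.u); // 0x3f800000 - type punning; Java needs
                                // library escapes like floatToRawIntBits

    Pair p;
    int* inner = &p.b; // interior pointer into a stack struct: the JVM
    *inner = 42;       // has neither raw pointers nor stack-allocated
    assert(p.b == 42); // value types
}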

So I guess it depends on which features you want - you'd lose some of the low-level features, and might be unable to build the high-level features without them.


February 15, 2012
On 02/12/2012 12:32 AM, Paulo Pinto wrote:
> Am 12.02.2012 03:03, schrieb bcs:
>> On 02/11/2012 09:19 AM, Paulo Pinto wrote:
>>> Am 11.02.2012 18:00, schrieb bcs:
>>>> On 02/11/2012 12:58 AM, Paulo Pinto wrote:
>>>>> Specially since systems programming in MacOS X and Windows world is
>>>>
>>>> Systems programming in the MacOS X and Windows world isn't real systems
>>>> programming. The closest you get is kernel and driver work but even
>>>> there you have most of an OS to work with. I think the kind of systems
>>>> programming being considered is embedded work and/or things like BIOS
>>>> work.
>>>
>>>
>>> Systems programming is everything you need to get an OS up and running.
>>> At least it was so a few decades back when I attended computer science
>>> and informatics engineering course.
>>>
>>
>> OK then there may be some people doing systems programming for MacOS X
>> and Windows, but they all work for Apple and MS.
>
> So do you mean everyone doing device driver development is also working
> for them?

I've never worked on them, but I seem to recall from somewhere that Windows drivers operate in an environment that has a lot of "supporting infrastructure".  Assuming that's not off in the weeds, I suspect the case is not too different for OS X.

>
> As well as all the companies writing services/daemons with low-level
> protocols for optimal performance?

If you are working in user mode, your claim to be doing systems programming is weakened. The same goes if any part of your program can afford to use a GC.

I will grant that there is a lot of ground between that and the "very much not systems programming" type of work that goes into things like web apps. However, I would assert that a formal D subset/dialect would mostly be of use at the core-kernel-development and embedded-systems/micro-controllers end of things. Outside that, I suspect that >90% of the advantage can be had via a well-selected style guide.
February 15, 2012
On 13.02.2012 19:17, Andrei Alexandrescu wrote:
> Second, coming up with another D-derived brand is a bad marketing move.
> We've been hurt for too long a time by D1/D2.

Andrei, can I ask you to please never mention D1 again? You seem to have _fundamental_ misconceptions about it. It's obvious that D1 was exceedingly poorly explained, to the extent that even you didn't understand it, and you've spread your misunderstandings everywhere.
And THAT has been a marketing disaster.

To try to set the record straight:

The D language has been developed continuously since the beginning.
"D1" is a stability snapshot of DMD 1.015.
"D2" is the continued development of D after 1.015.

There was no change in the rate of language development before the stability snapshot (ie, what went into D1) vs after the stability snapshot (what has gone into D2). There was no decision "1.015 is a good enough language, let's stabilize on this". It was essentially a freezing of the language development at a largely arbitrary point, for purposes of stability. Most importantly, note that "D1" was not planned. It's not a language that anyone wanted. It's just a snapshot.
And it was successful - 75% of the open bugs are D2-only.

Any mention of D1 as if it were the "first attempt" of the D language is offensive, and wrong.

Here's the original announcement of D1:
http://www.digitalmars.com/d/archives/digitalmars/D/announce/Stick_a_fork_in_it_8521.html
February 15, 2012
On 2/15/12 4:40 PM, Don wrote:
> On 13.02.2012 19:17, Andrei Alexandrescu wrote:
>> Second, coming up with another D-derived brand is a bad marketing move.
>> We've been hurt for too long a time by D1/D2.
>
> Andrei, can I ask you to please never mention D1 again? You seem to have
> _fundamental_ misconceptions about it. It's obvious that D1 was
> exceedingly poorly explained, to the extent that even you didn't
> understand it, and you've spread your misunderstandings everywhere.
> And THAT has been a marketing disaster.
>
> To try to set the record straight:
>
> The D language has been developed continuously since the beginning.
> "D1" is a stability snapshot of DMD 1.015.
> "D2" is the continued development of D after 1.015.
>
> There was no change in the rate of language development before the
> stability snapshot (ie, what went into D1) vs after the stability
> snapshot (what has gone into D2). There was no decision "1.015 is a good
> enough language, let's stabilize on this". It was essentially a freezing
> of the language development at a largely arbitrary point, for purposes
> of stability. Most importantly, note that "D1" was not planned. It's not
> a language that anyone wanted. It's just a snapshot.
> And it was successful - 75% of the open bugs are D2-only.
>
> Any mention of D1 as if it were the "first attempt" of the D language is
> offensive, and wrong.
>
> Here's the original announcement of D1:
> http://www.digitalmars.com/d/archives/digitalmars/D/announce/Stick_a_fork_in_it_8521.html

All of this is in agreement with my understanding of the situation, so I fail to see where my fundamental misconceptions would be. Is it possible that your perception of my view of D1 is inaccurate? As a simple starting point, note that none of the above contradicts, either directly or indirectly, my assertion.


Thanks,

Andrei