June 15, 2010
On Tue, 15 Jun 2010, Walter Bright wrote:

> strtr wrote:
> > It's the optimization :)
> > Without -O compilation took only a few seconds!
> 
> Well, that explains it! Little attempt is made in the optimizer to make it compile faster if that would interfere with generating faster code.

Chances are that the changes to allow more inlining contribute to handing the optimizer more work to chew on too.
June 15, 2010
Brad Roberts wrote:
> On Tue, 15 Jun 2010, Walter Bright wrote:
> 
>> strtr wrote:
>>> It's the optimization :)
>>> Without -O compilation took only a few seconds!
>> Well, that explains it! Little attempt is made in the optimizer to make it
>> compile faster if that would interfere with generating faster code.
> 
> Chances are that the changes to allow more inlining contribute to handing the optimizer more work to chew on too.

You're very likely right.
June 15, 2010
Hello Walter,

> strtr wrote:
> 
>> It's the optimization :)
>> Without -O compilation took only a few seconds!
> Well, that explains it! Little attempt is made in the optimizer to
> make it compile faster if that would interfere with generating faster
> code.
> 

How does 1.061 w/ -O compare to 1.062 w/ -O? If 62 is much slower that might be of interest.

-- 
... <IXOYE><



June 15, 2010
== Quote from BCS (none@anon.com)'s article
> Hello Walter,
> > strtr wrote:
> >
> >> It's the optimization :)
> >> Without -O compilation took only a few seconds!
> > Well, that explains it! Little attempt is made in the optimizer to make it compile faster if that would interfere with generating faster code.
> >
> How does 1.061 w/ -O compare to 1.062 w/ -O? If 62 is much slower that might be of interest.

That is exactly what I mentioned :?
Or did you mean w/o?
In that case I don't much care whether it takes 5 or 10 seconds for thousands of lines of code :)
(Or rather, I don't know how to time dmd more precisely than with my stopwatch when it's called through bud :D)
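
A small D wrapper would probably do, though. Just a sketch, assuming D1 Phobos (std.date, std.process); the command string is only a placeholder for the real dmd/bud call:

// Rough compile-time measurement, e.g. for comparing 1.061 -O against 1.062 -O.
// "dmd -O main.d" is only a placeholder for the real dmd/bud invocation.
import std.date;     // getUTCtime, TicksPerSecond
import std.process;  // system
import std.stdio;    // writefln

void main()
{
    char[] cmd = "dmd -O main.d";          // swap in the real command

    d_time start = getUTCtime();
    int rc = system(cmd);                  // run the compiler
    d_time end = getUTCtime();

    double secs = cast(double)(end - start) / TicksPerSecond;
    writefln("'%s' exited with %d after %.2f seconds", cmd, rc, secs);
}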

Anyway, I kind of like that it takes longer now with optimization.
I haven't checked whether things actually got any faster, but it feels a bit like
in older games when it said:
"Optimizing X for your computer" or "Generating A.I."
:)
June 16, 2010
On 6/15/2010 5:58 AM, Jacob Carlborg wrote:
> On 2010-06-14 04:10, Eric Poggel wrote:
>> On 6/13/2010 9:30 AM, Lutger wrote:
>>> Great, thank you!
>>>
>>> I noticed both std.concurrency and std.json are not (yet?) included in
>>> the documentation. Does that have any bearing on their status, are
>>> they usable and / or stable?
>>>
>>> There are some other modules without documentation like std.openrj and
>>> std.perf. Is there a page somewhere that documents their fate? I could
>>> only find this one:
>>>
>>> http://www.wikiservice.at/wiki4d/wiki.cgi?LanguageDevel
>>
>> Speaking of std.json, has anyone looked at the Orange library on
>> dsource? http://www.dsource.org/projects/orange/
>>
>> I haven't used it (yet), but it looks to support a back-end
>> serialization engine that supports different front-ends, with xml
>> currently being implemented. It's also Boost licensed.
>
> I would put it the other way around: a serialization front end with
> support for different back ends (archive types). I hope to add support
> for Phobos soon.
>
Maybe I'm calling the front-end the back-end. Orange provides a reflection/serialization engine and allows for pluggable serialization types--xml being the only one implemented so far.
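
Either way, the split being discussed can be sketched roughly as below. All of these names are invented for illustration; they are not Orange's actual API:

// Hypothetical engine with pluggable archive types: a fixed serializer
// front end and swappable archive back ends. Not Orange's real types.
interface Archive
{
    void beginArchiving();
    void archive(char[] key, char[] value);   // simplified: everything is a string
    char[] data();                            // the finished document
}

class XmlArchive : Archive
{
    private char[] buffer;

    void beginArchiving() { buffer = "<archive>\n".dup; }

    void archive(char[] key, char[] value)
    {
        buffer ~= "  <" ~ key ~ ">" ~ value ~ "</" ~ key ~ ">\n";
    }

    char[] data() { return buffer ~ "</archive>"; }
}

// The serializer (the part that would do the reflection) talks only to the
// Archive interface, so an xml, json, or binary archive can be swapped in.
class Serializer
{
    private Archive backend;

    this(Archive a) { backend = a; }

    char[] serialize(char[][char[]] fields)   // stand-in for a real object graph
    {
        backend.beginArchiving();
        foreach (key, value; fields)
            backend.archive(key, value);
        return backend.data();
    }
}

Usage would then look like new Serializer(new XmlArchive), and the archive choice is the only thing that changes per output format.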
June 16, 2010
On 6/15/2010 7:58 PM, strtr wrote:
> == Quote from BCS (none@anon.com)'s article
>> Hello Walter,
>>> strtr wrote:
>>>
>>>> It's the optimization :)
>>>> Without -O compilation took only a few seconds!
>>> Well, that explains it! Little attempt is made in the optimizer to
>>> make it compile faster if that would interfere with generating faster
>>> code.
>>>
>> How does 1.061 w/ -O compare to 1.062 w/ -O? If 62 is much slower that might
>> be of interest.
>
> That is exactly what I mentioned :?
> Or did you mean w/o?
> In that case I don't much care whether it takes 5 or 10 seconds for thousands of lines of code :)
> (Or rather, I don't know how to time dmd more precisely than with my stopwatch when it's called through bud :D)
>
> Anyway, I kind of like that it takes longer now with optimization.
> I haven't checked whether things actually got any faster, but it feels a bit like
> in older games when it said:
> "Optimizing X for your computer" or "Generating A.I."
> :)

"Reticulating Splines" was always my favorite.
June 16, 2010
== Quote from Eric Poggel (dnewsgroup@yage3d.net)'s article
> On 6/15/2010 5:58 AM, Jacob Carlborg wrote:
> > On 2010-06-14 04:10, Eric Poggel wrote:
> >> On 6/13/2010 9:30 AM, Lutger wrote:
> >>> Great, thank you!
> >>>
> >>> I noticed both std.concurrency and std.json are not (yet?) included in the documentation. Does that have any bearing on their status, are they usable and / or stable?
> >>>
> >>> There are some other modules without documentation like std.openrj and std.perf. Is there a page somewhere that documents their fate? I could only find this one:
> >>>
> >>> http://www.wikiservice.at/wiki4d/wiki.cgi?LanguageDevel
> >>
> >> Speaking of std.json, has anyone looked at the Orange library on dsource? http://www.dsource.org/projects/orange/
> >>
> >> I haven't used it (yet), but it looks to support a back-end serialization engine that supports different front-ends, with xml currently being implemented. It's also Boost licensed.
> >
> > I would put it the other way around: a serialization front end with support for different back ends (archive types). I hope to add support for Phobos soon.
> >
> Maybe I'm calling the front-end the back-end. Orange provides a reflection/serialization engine and allows for pluggable serialization types--xml being the only one implemented so far.

Yes, regardless of what the parts are called, your description is correct. I'm now working on creating an XML document abstraction layer so I can support both Tango and Phobos.

(Replying using the Web interface; this particular post didn't have any content in my news reader.)

/Jacob Carlborg
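
The idea, roughly, is a thin XML document interface that the archive code targets, with one implementation on top of Tango's tango.text.xml and one on top of Phobos' std.xml. Another invented sketch, not Orange's actual types:

// Hypothetical XML document abstraction; the archive would depend only on
// this interface, never on Tango or Phobos directly.
interface XmlDoc
{
    void reset(char[] rootName);                 // start a new document
    void addElement(char[] parent, char[] name, char[] value);
    char[] render();                             // produce the XML text
}

// One concrete class would wrap tango.text.xml.Document, another would wrap
// Phobos' std.xml, selected with a version block, e.g.:
//
//     version (Tango)  alias TangoXmlDoc  DefaultXmlDoc;
//     else             alias PhobosXmlDoc DefaultXmlDoc;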