Maxime's micro allocation benchmark much faster?
March 31, 2015
I was curious to see if new DMD had changed speed on Maxime Chevalier-Boisvert's allocation benchmark here:

http://pointersgonewild.com/2014/10/26/circumventing-the-d-garbage-collector/

I haven't had time to look at the Phobos test suite to know if this was one of the tests included, but the difference seems striking.  I am using two machines in my office - both quite old x64 boxes running Arch Linux (8 GB RAM only).  Same manufacturer and similar models, so they should be the same spec CPU-wise.  I haven't had time to install and compare different versions of dmd on the same machine, so fwiw:


1mm objects
-----------
dmd 2.07 release: 0.56 seconds
dmd 2.067-devel-639bcaa: 0.88 seconds

10mm objects
------------
dmd 2.07 release: between 4.44 and 6.57 seconds
dmd 2.067-devel-639bcaa: 90 seconds


In case I made a typo in code:

import std.conv;

class Node
{
	Node next;
	size_t a,b,c,d;
}

void main(string[] args)
{
	auto numNodes=to!size_t(args[1]);

	Node head=null;

	for(size_t i=0;i<numNodes;i++)
	{
		auto n=new Node();
		n.next=head;
		head=n;
	}
}
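For reference, the shape of what is being measured - a singly-linked list of small heap-allocated objects, all of them kept reachable - can be sketched in Python as well. This is purely illustrative (the `build` and `length` helpers and the node count are mine, not from the benchmark); the timings above are of course for the D version:

```python
class Node:
    """Small object with a next pointer and four word-sized fields,
    mirroring the D class above."""
    __slots__ = ("next", "a", "b", "c", "d")

    def __init__(self):
        self.next = None
        self.a = self.b = self.c = self.d = 0


def build(num_nodes):
    # Allocate num_nodes objects, threading each onto the front of the
    # list, so every allocation stays reachable by the collector.
    head = None
    for _ in range(num_nodes):
        n = Node()
        n.next = head
        head = n
    return head


def length(head):
    # Walk the list to confirm all nodes were linked in.
    count = 0
    while head is not None:
        count += 1
        head = head.next
    return count


print(length(build(1000)))  # 1000
```

Keeping every node live is what makes this a stress test: the collector has to trace the whole list on each scan rather than reclaiming anything.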
March 31, 2015
Oops - scratch that.  I may have made a mistake with versions and be comparing 2.067 with some unstable dev version.

On Tuesday, 31 March 2015 at 11:46:41 UTC, Laeeth Isharc wrote:
> I was curious to see if new DMD had changed speed on Maxime Chevalier-Boisvert's allocation benchmark here:
>
> http://pointersgonewild.com/2014/10/26/circumventing-the-d-garbage-collector/
>
> I haven't had time to look at the Phobos test suite to know if this was one of the tests included, but the difference seems striking.  I am using two machines in my office - both quite old x64 boxes running Arch Linux (8 GB RAM only).  Same manufacturer and similar models, so they should be the same spec CPU-wise.  I haven't had time to install and compare different versions of dmd on the same machine, so fwiw:
>
>
> 1mm objects
> -----------
> dmd 2.07 release: 0.56 seconds
> dmd 2.067-devel-639bcaa: 0.88 seconds
>
> 10mm objects
> ------------
> dmd 2.07 release: between 4.44 and 6.57 seconds
> dmd 2.067-devel-639bcaa: 90 seconds
>
>
> In case I made a typo in code:
>
> import std.conv;
>
> class Node
> {
> 	Node next;
> 	size_t a,b,c,d;
> }
>
> void main(string[] args)
> {
> 	auto numNodes=to!size_t(args[1]);
>
> 	Node head=null;
>
> 	for(size_t i=0;i<numNodes;i++)
> 	{
> 		auto n=new Node();
> 		n.next=head;
> 		head=n;
> 	}
> }

March 31, 2015
Trying on a different beefier machine with 2.066 and 2.067 release versions installed:

1mm allocations:
2.066: 0.844s
2.067: 0.19s

10mm allocations

2.066: 1m 17.2 s
2.067: 0m  1.15s

So the numbers were in the right ballpark before, and allocation on this micro-benchmark is much faster.
March 31, 2015
On Tuesday, 31 March 2015 at 20:56:09 UTC, Laeeth Isharc wrote:
> Trying on a different beefier machine with 2.066 and 2.067 release versions installed:
>
> 1mm allocations:
> 2.066: 0.844s
> 2.067: 0.19s
>
> 10mm allocations
>
> 2.066: 1m 17.2 s
> 2.067: 0m  1.15s
>
> So the numbers were in the right ballpark before, and allocation on this micro-benchmark is much faster.

That's nice news. The recent GC improvements are clearly working.
March 31, 2015
On Tuesday, 31 March 2015 at 20:56:09 UTC, Laeeth Isharc wrote:
> Trying on a different beefier machine with 2.066 and 2.067 release versions installed:
>
> 1mm allocations:
> 2.066: 0.844s
> 2.067: 0.19s
>
> 10mm allocations
>
> 2.066: 1m 17.2 s
> 2.067: 0m  1.15s
>
> So the numbers were in the right ballpark before, and allocation on this micro-benchmark is much faster.

Wow! props to the people that worked on the GC.
March 31, 2015
On Tuesday, 31 March 2015 at 22:00:39 UTC, weaselcat wrote:
> On Tuesday, 31 March 2015 at 20:56:09 UTC, Laeeth Isharc wrote:
>> Trying on a different beefier machine with 2.066 and 2.067 release versions installed:
>>
>> 1mm allocations:
>> 2.066: 0.844s
>> 2.067: 0.19s
>>
>> 10mm allocations
>>
>> 2.066: 1m 17.2 s
>> 2.067: 0m  1.15s
>>
>> So the numbers were in the right ballpark before, and allocation on this micro-benchmark is much faster.
>
> Wow! props to the people that worked on the GC.

Yes - should have said that.  And I do appreciate very much all the hard work that has been done on this (and also by the GDC and LDC maintainers who have to keep up with each release).

Don't trust these numbers till someone else has verified them, as I am not certain I haven't messed up transliterating the code, or doing something else stoopid.  And of course it's a very specific micro benchmark, but it's one that matters beyond the direct implications given the discussion over it when her post came out.  I would be really curious to see if Maxime finds the overall performance of her JIT improved.
April 01, 2015
On 2015-03-31 at 22:56, Laeeth Isharc wrote:
> 1mm allocations
> 2.066: 0.844s
> 2.067: 0.19s

That is great news, thanks!

OT: it's a nasty financier's habit to write 1M and 1MM instead of 1k and 1M.  :P
April 01, 2015
On Wednesday, 1 April 2015 at 10:09:12 UTC, FG wrote:
> On 2015-03-31 at 22:56, Laeeth Isharc wrote:
>> 1mm allocations
>> 2.066: 0.844s
>> 2.067: 0.19s
>
> That is great news, thanks!
>
> OT: it's a nasty financier's habit to write 1M and 1MM instead of 1k and 1M.  :P

Yeah, what's with that? I've never seen it before.
April 01, 2015
On Wednesday, 1 April 2015 at 10:35:05 UTC, John Colvin wrote:
> On Wednesday, 1 April 2015 at 10:09:12 UTC, FG wrote:
>> On 2015-03-31 at 22:56, Laeeth Isharc wrote:
>>> 1mm allocations
>>> 2.066: 0.844s
>>> 2.067: 0.19s
>>
>> That is great news, thanks!
>>
>> OT: it's a nasty financier's habit to write 1M and 1MM instead of 1k and 1M.  :P
>
> Yeah, what's with that? I've never seen it before.

One cannot entirely escape déformation professionnelle ;)  (People mostly write 1,000 but 1mm, although 1m is pedantically correct for 1,000.)  Better internalize the conventions if one doesn't want to avoid expensive mistakes under pressure.
April 01, 2015
On Wednesday, 1 April 2015 at 14:22:57 UTC, Laeeth Isharc wrote:
> On Wednesday, 1 April 2015 at 10:35:05 UTC, John Colvin wrote:
>> On Wednesday, 1 April 2015 at 10:09:12 UTC, FG wrote:
>>> On 2015-03-31 at 22:56, Laeeth Isharc wrote:
>>>> 1mm allocations
>>>> 2.066: 0.844s
>>>> 2.067: 0.19s
>>>
>>> That is great news, thanks!
>>>
>>> OT: it's a nasty financier's habit to write 1M and 1MM instead of 1k and 1M.  :P
>>
>> Yeah, what's with that? I've never seen it before.
>
> One cannot entirely escape déformation professionnelle ;)  (People mostly write 1,000 but 1mm, although 1m is pedantically correct for 1,000.)  Better internalize the conventions if one doesn't want to avoid expensive mistakes under pressure.

well yes, who doesn't always not want to never avoid mistakes? ;)

Anyway, as I'm sure you know, the rest of the world assumes SI/metric, or binary in special cases (damn those JEDEC guys!): http://en.wikipedia.org/wiki/Template:Bit_and_byte_prefixes
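To make the two conventions concrete, here is a toy Python sketch contrasting the financial Roman-numeral style (m = thousand, mm = thousand thousands) with SI prefixes (k, M). The helper names `financier` and `si` are mine, purely for illustration:

```python
def financier(n):
    # Financial convention: m = 1,000 (Roman numeral M), mm = 1,000 * 1,000.
    if n >= 1_000_000:
        return f"{n // 1_000_000}mm"
    if n >= 1_000:
        return f"{n // 1_000}m"
    return str(n)


def si(n):
    # SI convention: k = 10^3, M = 10^6.
    if n >= 1_000_000:
        return f"{n // 1_000_000}M"
    if n >= 1_000:
        return f"{n // 1_000}k"
    return str(n)


for n in (1_000, 1_000_000, 10_000_000):
    print(n, financier(n), si(n))
# 1000 1m 1k
# 1000000 1mm 1M
# 10000000 10mm 10M
```

So the "1mm" and "10mm" in the timings above are 1 million and 10 million allocations in SI terms.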