
digitalmars.D.learn - Optimization ???

reply "Mattdef" <mattdef gmail.com> writes:
Hi,

I wrote the small benchmark below in D and an equivalent program in C#,
and the D code is "longer" to run (slower) than the C# version. Why is
that, and is there a way to optimize it?

module hello;

import std.stdio;
import std.datetime;
import std.string;
import std.conv;

int main(string[] argv)
{
     writeln("Tape \"Y\" to launch benchmark or any other touch to 
exit program :");
	string s = chomp(readln());

	while(s == "y" || s == "Y")
	{
		auto bench = benchmark!(Benchmark)(1);
		writefln("Execution time : %s ms", bench[0].msecs);

		s = chomp(readln());
	}

     return 0;
}

void Benchmark()
{
	uint count = 0;
	student michael = null;

	while (count < 1_000_000)
	{
		michael = new student("Michael", Date(1998, 5, 1), 12);
		michael.setName("Joseph" ~ to!string(count));
		count++;
	}
	writeln(michael.getState());
}

class student
{
	private:
	string _name;
	Date _birthday;
	int _evaluation;

	public:
	string getState()
	{
		return _name ~ "'s birthday " ~ _birthday.toSimpleString() ~ " 
and his evaluation is " ~ to!string(_evaluation);
	}

	this(string name, Date birthday, int eval)
	{
		_name = name;
		_birthday = birthday;
		_evaluation = eval;
	}

	void setName(string name)
	{
		_name = name;
	}

	void setBirthday(Date birthday)
	{
		_birthday = birthday;
	}

	void setEvaluation(int eval)
	{
		_evaluation = eval;
	}
}
Feb 20 2014
next sibling parent reply "bearophile" <bearophileHUGS lycos.com> writes:
Mattdef:


By "code longer" I assume you mean its run time. The answers could be multiple, like you using DMD instead of LDC2/GDC, or you using the wrong compilation switches, or perhaps because Microsoft has poured on the Dotnet ten thousands times more money compared to DMD. Or perhaps your code is just not good enough for D, who knows? Why don't you profile the code and look at the machine language for possible problems? I don't have time now to do the optimization for you now, sorry. Bye, bearophile
Feb 20 2014
next sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
 I don't have time now to do the optimization for you now, sorry.
I have improved your code a little, but I don't know the
http://dpaste.dzfl.pl/0dab53bf85ad

I compile and run it with ldc2 with:

ldmd2 -wi -O -release -inline -noboundscheck -run test.d

Bye,
bearophile
Feb 20 2014
prev sibling parent "bearophile" <bearophileHUGS lycos.com> writes:
 I don't have time now to do the optimization for you now, sorry.
I have improved your code a little, but I don't know the
http://dpaste.dzfl.pl/0dab53bf85ad

I compile and run it with ldc2 with:

ldmd2 -wi -O -release -inline -noboundscheck -run test.d

------------------------

A second version uses a struct:


struct Student {
    string name;
    Date birthday;
    int evaluation;

    string getState() {
        return name ~ "'s birthday " ~ birthday.toSimpleString ~
               " and his evaluation is " ~ evaluation.text;
    }
}

void bench() {
    Student* michael;
    foreach (immutable count; 0 .. 1_000_000) {
        michael = new Student("Michael", Date(1998, 5, 1), 12);
        michael.name = "Joseph" ~ count.text;
    }
    michael.getState.writeln;
}


A third version allocates the struct on the stack:


void bench() {
    Student michael;
    foreach (immutable count; 0 .. 1_000_000) {
        michael = Student("Michael", Date(1998, 5, 1), 12);
        michael.name = "Joseph" ~ count.text;
    }
    michael.getState.writeln;
}


The timings I'm seeing:

Joseph999999's birthday 1998-May-01 and his evaluation is 12
Execution time : 651 ms

Joseph999999's birthday 1998-May-01 and his evaluation is 12
Execution time : 563 ms

Joseph999999's birthday 1998-May-01 and his evaluation is 12
Execution time : 440 ms

Bye,
bearophile
Feb 20 2014
prev sibling parent reply "Jesse Phillips" <Jesse.K.Phillips+D gmail.com> writes:
On Thursday, 20 February 2014 at 23:31:36 UTC, Mattdef wrote:
 Hi


Running rdmd -profile test.d it seems that to!() is eating most of the
time. Removing it brings the run down from 530ms to 154ms.

  Num          Tree        Func        Per
  Calls        Time        Time        Call

1000000   229304911   229079249       229   immutable(char)[] std.conv.toImpl!(immutable(char)[], uint).toImpl(uint, uint, std.ascii.LetterCase).toStringRadixConvert!(12uL, 10).toStringRadixConvert(uint)
1000000   263228452    33923541        33   pure @trusted immutable(char)[] std.conv.toImpl!(immutable(char)[], uint).toImpl(uint, uint, std.ascii.LetterCase)
1000000   295084102    31855649        31   pure @safe immutable(char)[] std.conv.toImpl!(immutable(char)[], uint).toImpl(uint)
1000000   322073672    26989570        26   pure @safe immutable(char)[] std.conv.to!(immutable(char)[]).to!(uint).to(uint)
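
A minimal sketch of one way to act on that profile (my addition, not
from the message above): format the counter into a fixed stack buffer
with std.format.sformat instead of calling to!string, so the hot loop
performs no GC allocation for the name.

import std.format : sformat;

void nameWithoutAllocation()
{
    char[32] buf;                          // scratch space reused on every pass
    foreach (count; 0 .. 1_000_000)
    {
        // sformat writes into buf and returns a slice of it - no GC work here
        const(char)[] name = sformat(buf[], "Joseph%s", count);
        assert(name[0] == 'J');            // keep the result observable
        // copy with name.idup only if the string must outlive this iteration
    }
}

Whether sformat itself is fast enough here would need measuring; the
point is only that the per-iteration allocation disappears.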
Feb 20 2014
parent reply "Mattdef" <mattdef gmail.com> writes:
Thanks for your replies.

I know it is the conversion of uint that is the problem, but my question
is: how can I speed up these string conversions?

(sorry for my English)
Feb 21 2014
parent reply "John Colvin" <john.loughran.colvin gmail.com> writes:
On Friday, 21 February 2014 at 09:29:42 UTC, Mattdef wrote:
 Thanks for your replies.

 I know it is the conversion of uint that is the problem, but my
 question is: how can I speed up these string conversions?

 (sorry for my English)

C# can be faster than D in at least some cases. In particular, making
lots of small strings is very garbage heavy, and the D garbage collector
isn't as sophisticated as the one in .NET.
Feb 21 2014
parent reply Orvid King <blah38621 gmail.com> writes:

This is mostly because of the GC. By default with MS.Net, and Mono (when
compiled with sgen) an allocation is almost literally just a
bump-the-pointer, with an occasional scan (no compaction for this code)
and collection of the 64kb (on MS.Net it's actually the size of your
CPU's L1 cache) gen0 heap. This particular code is unlikely to trigger a
collection of gen1 (L2 cache when on MS.Net), gen2 or the large object
heap. In D however, the allocations are significantly more expensive.
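
A minimal sketch of how to sidestep those allocations on the D side (my
illustration, not from the message above): allocate one buffer up front
and construct the object into it with std.conv.emplace, so the GC is not
asked for a new block on every pass. It reuses the `student` class from
the original post; the name concatenation is deliberately left out
because it is the other allocation source.

import std.conv : emplace;
import std.datetime : Date;
import std.stdio : writeln;

void benchReuse()
{
    // one allocation up front instead of one million inside the loop
    auto buf = new void[__traits(classInstanceSize, student)];
    student michael;
    foreach (count; 0 .. 1_000_000)
    {
        // re-constructs the object in place; no new GC block is requested
        michael = emplace!student(buf, "Michael", Date(1998, 5, 1), 12);
        michael.setName("Joseph");   // the count suffix is left out here
    }
    writeln(michael.getState());
}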

On 2/21/14, John Colvin <john.loughran.colvin gmail.com> wrote:
 On Friday, 21 February 2014 at 09:29:42 UTC, Mattdef wrote:
 Thanks for your replies.

 I know it is the conversion of uint that is the problem, but my
 question is: how can I speed up these string conversions?

 (sorry for my English)

 C# can be faster than D in at least some cases. In particular, making
 lots of small strings is very garbage heavy, and the D garbage
 collector isn't as sophisticated as the one in .NET.
Feb 21 2014
parent reply "Mattdef" <mattdef gmail.com> writes:
On Friday, 21 February 2014 at 13:39:08 UTC, Orvid King wrote:

 This is mostly because of the GC. By default with MS.Net, and Mono
 (when compiled with sgen) an allocation is almost literally just a
 bump-the-pointer, with an occasional scan (no compaction for this code)
 and collection of the 64kb (on MS.Net it's actually the size of your
 CPU's L1 cache) gen0 heap. This particular code is unlikely to trigger
 a collection of gen1 (L2 cache when on MS.Net), gen2 or the large
 object heap. In D however, the allocations are significantly more
 expensive.

 On 2/21/14, John Colvin <john.loughran.colvin gmail.com> wrote:
 On Friday, 21 February 2014 at 09:29:42 UTC, Mattdef wrote:
 Thanks for your replies.

 I know it is the conversion of uint that is the problem, but my
 question is: how can I speed up these string conversions?

 (sorry for my English)

 C# can be faster than D in at least some cases. In particular, making
 lots of small strings is very garbage heavy, and the D garbage
 collector isn't as sophisticated as the one in .NET.
Thanks for your answers!
Feb 21 2014
parent "rumbu" <rumbu rumbu.ro> writes:
D version of to!string(uint):

// what to!string(uint) does internally, wrapped here as a standalone
// function; mValue is the value being converted
string uintToString(uint mValue)
{
    size_t index = 12;
    char[12] buffer = void;
    uint div = void;
    uint mod = void;
    char baseChar = 'A';   // unused in the decimal case
    do {
        div = cast(uint)(mValue / 10);
        mod = mValue % 10 + '0';
        buffer[--index] = cast(char)mod;
        mValue = div;
    } while (mValue);
    // the slice of the stack buffer has to be copied onto the GC heap
    return cast(string)buffer[index .. $].dup;
}


For comparison, the .NET version (a runtime helper written in C++):

//p is a reusable buffer;
wchar_t* COMNumber::Int32ToDecChars(wchar_t* p, unsigned int value, int digits)
{
     while (--digits >= 0 || value != 0) {
         *--p = value % 10 + '0';
         value /= 10;
     }
     return p;
}

The .NET version fills a reusable buffer supplied by the caller, while
the D version has to .dup the result into a newly allocated string.
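
A minimal sketch of the same reusable-buffer idea expressed in D (a
hypothetical helper, not part of Phobos): the caller owns the storage,
so the conversion itself allocates nothing.

char[] uintToDecChars(char[] buf, uint value)
{
    size_t index = buf.length;
    do
    {
        buf[--index] = cast(char)('0' + value % 10);
        value /= 10;
    } while (value);
    return buf[index .. $];       // a slice of the caller's buffer, no .dup
}

// usage:
//     char[10] tmp;                                  // uint.max has 10 digits
//     auto digits = uintToDecChars(tmp[], 123_456);  // "123456", no GC allocation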
Feb 21 2014