John 
Hi,
I'm currently reading "Programming in D" and, to get accustomed to D and its syntax, I've decided to implement (actually port) a simple (and naive) Scheme interpreter from C to D. The original C interpreter is described in a series of posts here:
http://peter.michaux.ca/articles/scheme-from-scratch-introduction
I've followed the tutorials and implemented various versions of the interpreter in different languages, and now I've decided to make a port to D. My source code resides in:
https://bitbucket.org/jfourkiotis/deimos
The basic object in my (naive, I repeat again) implementation, is the following:
alias Object = Algebraic!(
    long,                        /* numbers */
    bool,                        /* #t or #f */
    char,                        /* characters */
    string,                      /* symbols */
    char[],                      /* strings */
    EmptyList,                   /* Nil */
    ConsCell,
    void function(This*, This*), /* primitive procedures */
    CompoundProc);               /* compound procedures */
where the following type definitions hold:
struct EmptyList {}
class ConsCell {
    Object car;
    Object cdr;

    this(Object car, Object cdr)
    {
        this.car = car;
        this.cdr = cdr;
    }
}
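A trimmed-down sketch of how I use the variant (only a few of the member types, and with the alias renamed to Obj here, since `alias Object = ...` shadows druntime's root class `Object` inside the module):

```d
import std.variant;

struct EmptyList {}

// Reduced version of the tagged union above; Obj avoids shadowing object.Object.
alias Obj = Algebraic!(long, bool, char, string, EmptyList);

void main()
{
    Obj num = 42L;
    Obj nil = EmptyList();

    // peek returns a pointer to the stored value, or null on a type mismatch.
    assert(*num.peek!long == 42);
    assert(nil.peek!EmptyList !is null);
    assert(num.peek!bool is null);
}
```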
So, I have the following questions:
* For this kind of implementation, is the Algebraic type a good choice? Would a simple union perhaps be better?
* I've defined the (recursive) Fibonacci function in my interpreted Scheme; computing Fibonacci(30) takes about 30 seconds with DMD and about 10 seconds with LDC. Is that a reasonable difference between the two compilers?
* I find it very difficult (actually impossible) to profile code on Mac OS X: the -profile option produces no output. Are there any other profiling/debugging tools for Mac OS X? My other ports (C++, Scala) interpret the same example in under 2 seconds, so I would like to find where my bottlenecks are.
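For context, the benchmark is the usual doubly recursive Fibonacci; a native D version (shown below for comparison) finishes almost instantly, so the seconds measured above are essentially all interpreter overhead:

```d
import std.stdio : writeln;

// Naive doubly recursive Fibonacci, the same shape as the interpreted benchmark.
long fib(long n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

void main()
{
    writeln(fib(30)); // prints 832040
}
```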
Thanks.
ps: Any advice for making my code "better" (and more idiomatic D) is appreciated.