October 27, 2013
The sub-text here is that D should be one of the main languages on a Raspberry Pi.

Currently children start with Scratch, move to Python and then (most
likely) to C.

Oracle have made a huge push to ensure Java is mainstream on the Raspberry Pi.

I would much prefer to have D or Go as the "Don't go to C or Java" option.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

October 27, 2013
On Sat, 2013-10-26 at 16:48 +0200, eles wrote:
[…]
> On one hand you show me it is not a big deal; on the other hand you make a big deal of it, refusing any support inside the compiler or the standard library.

I am assuming that the C++ memory model and its definition of volatile have in some way made the problem go away, and that expressions such as:

	device->csw.ready

can be constructed such that there is no caching of values and the entity is always read. Given the issues of out-of-order execution, compiler optimization and multicore, what is their solution?

(The above is a genuine question rather than a troll. The last time I was writing UNIX device drivers seriously was 30+ years ago, in C, and the last embedded systems work was 10 years ago, using C with specialist compilers – 8051, AVR chips and the like. I would love to be able to work with the GPIO on a Raspberry Pi with D; it would get me back into all that fun stuff. I am staying away as it looks like a return to C is the only viable option just now, unless I learn C++ again.)
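
(To make the question concrete, here is a rough D sketch of the kind of GPIO access I mean; volatileLoad stands in for whatever guaranteed-read primitive would be needed, and the addresses and register layout are illustrative only:)

	// Hypothetical primitive: a load the compiler must neither cache,
	// reorder, nor elide. The placeholder body is not sufficient.
	uint volatileLoad(uint* addr) { return *addr; }

	enum GPIO_BASE = 0x2020_0000;   // illustrative peripheral base address
	enum GPLEV0_OFFSET = 0x34;      // illustrative pin-level register offset

	bool pinHigh(uint pin)
	{
	    auto reg = cast(uint*)(GPIO_BASE + GPLEV0_OFFSET);
	    // Each call must perform a real bus read of the register.
	    return (volatileLoad(reg) & (1u << pin)) != 0;
	}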

> Should I one day define my own "int plus(int a, int b) { return a+b; }"?

Surely,

	a + b

always transforms to

	a.__add__(b)

in all quality languages (*) so that you can redefine the meaning from
the default.
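
In D's case the rewrite is spelled opBinary rather than __add__, but the principle holds. A minimal sketch (the Metres type is invented for illustration):

	struct Metres
	{
	    double value;

	    // The compiler rewrites a + b as a.opBinary!"+"(b).
	    Metres opBinary(string op : "+")(Metres rhs)
	    {
	        return Metres(value + rhs.value);
	    }
	}

	unittest
	{
	    assert(Metres(1.5) + Metres(2.5) == Metres(4.0));
	}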


(*) which rules out Java.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

October 27, 2013
On Sat, 2013-10-26 at 14:49 +0200, Timo Sintonen wrote: […]
> A little bit sad that the honored leader of the language still thinks that the right way to go is what we did with the Commodore 64...

Not a good style of argument, since the way of the Commodore 64 might be a good one. It isn't, but it might have been.

The core problem with peek and poke for writing device drivers is that hardware controllers do not just use byte-structured memory, they use bit structures.

So for data I/O,

	device->buffer = value
	value = device->buffer

can be replaced easily with:

	poke(device->buffer, value)
	value = peek(device->buffer)

but this doesn't work when you are using bitfields; you end up having to do all the ugly bit-mask manipulation explicitly. Thus, what the equivalent of:

	device->csw.enable = 1
	status = device->csw.ready

is, is left to the imagination.
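
(Spelled out by hand in D-style code, with hypothetical peek/poke primitives and invented bit positions, what is left to the imagination looks something like this:)

	// Hypothetical primitives standing in for guaranteed device access;
	// the placeholder bodies are not the point.
	uint peek(uint* addr) { return *addr; }
	void poke(uint* addr, uint value) { *addr = value; }

	enum CSW_ENABLE = 1u << 0;   // invented bit positions
	enum CSW_READY  = 1u << 1;

	void example(uint* csw)
	{
	    poke(csw, peek(csw) | CSW_ENABLE);            // csw.enable = 1
	    bool status = (peek(csw) & CSW_READY) != 0;   // csw.ready
	}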

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

October 27, 2013
On 10/27/2013 1:31 AM, Russel Winder wrote:
> The core problem with peek and poke for writing device drivers is that
> hardware controllers do not just use byte-structured memory, they use
> bit structures.
>
> So for data I/O,
>
> 	device->buffer = value
> 	value = device->buffer
>
> can be replaced easily with:
>
> 	poke(device->buffer, value)
> 	value = peek(device->buffer)
>
> but this doesn't work when you are using bitfields; you end up having to
> do all the ugly bit-mask manipulation explicitly. Thus, what the
> equivalent of:
>
> 	device->csw.enable = 1
> 	status = device->csw.ready
>
> is, is left to the imagination.

Bitfield code generation by C compilers has generally been rather crappy. If you wanted performant code, you always had to do the masking yourself.

I've written device drivers, and have designed, built, and programmed single-board computers. I've never found dealing with the oddities of memory-mapped I/O and bit flags to be of any difficulty.

Do you really find & and | operations to be ugly? I don't find them any uglier than + and *. Maybe that's because of my hardware background.
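
For what it's worth, D pushes bitfields into the library: std.bitmanip's bitfields template generates the masking as ordinary accessor functions, so you can at least read the code it produces. A sketch with invented field names:

	import std.bitmanip : bitfields;

	struct Csw
	{
	    mixin(bitfields!(
	        bool, "enable", 1,
	        bool, "ready",  1,
	        uint, "",      30));   // anonymous padding to a 32-bit word
	}

	unittest
	{
	    Csw csw;
	    csw.enable = true;          // generated setter does the masking
	    bool status = csw.ready;    // generated getter does the shift and mask
	}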
October 27, 2013
On 27-Oct-2013 13:09, Russel Winder wrote:
> The sub-text here is that D should be one of the main languages on a
> Raspberry Pi.

s/Raspberry Pi/ARM boards/

After all, the Raspberry Pi is only one of many - and a tiny piece of outdated ARM at that.

>
> Currently children start with Scratch, move to Python and then (most
> likely) to C.
>
> Oracle have made a huge push to ensure Java is mainstream on the
> Raspberry Pi.
>
> I would much prefer to have D or Go as the "Don't go to C or Java"
> option.
>
+1

-- 
Dmitry Olshansky
October 27, 2013
On Saturday, 26 October 2013 at 11:43:02 UTC, Johannes Pfau wrote:
> Well to be honest I don't think there's any kind of spec related to
> shared. This is still a very unspecified / fragile part of the language.
>
> (I totally agree though that it should be specified)

I agree, and thus I think it's dangerous at best and harmful at worst to make any recommendations to use shared for anything but a mere type tag (with no intrinsic meaning) at the moment.
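
To illustrate what I mean by a mere type tag (the example and variable name are mine): any actual guarantees have to come from explicit atomics in core.atomic, not from shared itself:

	import core.atomic;

	shared int counter;   // invented example variable

	void bump()
	{
	    // `shared` alone generates no barriers and no atomicity here;
	    // the core.atomic call supplies the actual memory-ordering
	    // semantics.
	    atomicOp!"+="(counter, 1);
	}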

LDC certainly does not ascribe any special meaning to shared variables, and last time I checked, DMD didn't make any of the guarantees discussed here either.

David
October 28, 2013
On Sun, 2013-10-27 at 02:12 -0700, Walter Bright wrote: […]
> Bitfield code generation by C compilers has generally been rather crappy. If you wanted performant code, you always had to do the masking yourself.

Endianness and packing have always been the bête noire of bitfields, because they are not part of the standard but left compiler-specific – essentially unavoidably so, given the vast difference in targets. Given a single compiler for a given target I never found the generated code poor. With the UNIX compiler in the early 1980s and the AVR compiler suites we used in the 2000s, the generated code always seemed fine. What's your evidence that hand-crafted code is better than compiler-generated code?

> I've written device drivers, and have designed, built, and programmed single-board computers. I've never found dealing with the oddities of memory-mapped I/O and bit flags to be of any difficulty.

But don't you find:

	*x = (1 << 7) | (1 << 9)

to lead directly to the use of macros:

	SET_SOMETHING_READY(x)

precisely because the purpose of the raw expression is not immediately comprehensible?

> Do you really find & and | operations to be ugly? I don't find them any uglier than + and *. Maybe that's because of my hardware background.

It's not the operations that are the problem, it is the expressions using them that lead to code that is the antithesis of self-documenting. Almost all code using <<, >>, & and | invariably ends up being wrapped in macros in C and C++ so as to avoid using functions.

The core point here is that this sort of code fails as soon as a function call is involved: functions cannot be used as a tool of abstraction, at least in C and C++.

Clearly D has a USP over C and C++ here in that macros can be replaced by CTFE. But how do you guarantee that a function is fully evaluated at compile time and not allowed to generate a function call? Only then can functions be used instead of macros to make such code self-documenting.
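
For the mask values themselves D can already give that guarantee: initialising an enum forces CTFE, so the run-time code reduces to an OR with an immediate. A sketch, with the names invented:

	// `mask` is an ordinary function, but assigning its result to an
	// enum forces it to be evaluated at compile time, every time.
	uint mask(uint[] bits...)
	{
	    uint m = 0;
	    foreach (b; bits)
	        m |= 1u << b;
	    return m;
	}

	enum SOMETHING_READY = mask(7, 9);   // CTFE'd; no run-time call

	void setReady(uint* x)
	{
	    *x |= SOMETHING_READY;           // compiles to an OR with a constant
	}

The harder half of the problem – guaranteeing that a run-time accessor function never becomes an actual call – remains open.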

Much better to have a bitfield system that works. Especially on architectures such as the AVR, which have areas of bit-addressable memory. Although Intel only has word-addressable memory, not all architectures do.

C (and thus C++) hacked together a solution that worked fine when there
was one compiler with PDP and VAX targets. It was only when there were
multiple compilers and multiple targets that the problem arose. There is
nothing really wrong with the C bitfield syntax; it was just that
different compilers did different things for the same target.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

October 28, 2013
On Fri, 2013-10-25 at 13:04 -0700, Walter Bright wrote: […]
> I've written device drivers and embedded systems. The quantity of code that deals with memory-mapped I/O is a very, very small part of those programs. The subset of that code that needs to exactly control the read and write cycles is tinier still. (For example, when writing to a memory-mapped video buffer, such control is quite unnecessary.)
> 
> Any of the methods I presented are not a significant burden.
> 
> Adding two lines of inline assembler to get exactly what you want isn't hard, and you can hide it behind a mixin if you like.
> 
> And, of course, you'll still need inline assembler to deal with the other system-type operations needed for embedded systems work. For example, setting up the program stack, setting the segment registers, etc. No language provides support for them outside of inline assembler or assembler intrinsics.

My experience, admittedly late 1970s, early 1980s and then early 2000s, concurs with yours that only a small amount of code requires this read and write behaviour, but where it is needed it is crucial, and it is in areas where every picosecond matters (*). I disagree with your point about memory-mapped video buffers as a general statement; it depends on the buffering and refresh strategy of the buffer. Some frame buffers are very picky, and so exact read and write behaviour of the code is needed. Less so now, though, fortunately.

Using functions is a burden here if it involves an actual function call; only macros are feasible as units of abstraction. Moreover, this is the classic approach to inline assembler: some form of macro so as to create a comprehensible abstraction.

The problem with inline assembler is that you need versions for every target architecture, making it a source code and build nightmare. OK, there are directory hierarchy idioms and build idioms that make it easier (**), but inline assembler should only really be the answer in cases where there are hardware instructions on a given target that the compiler cannot reasonably be expected to generate from the source code. Classics here are the elliptic function libraries and the context switch operations.

So the issue is not the approach per se but how that is encoded in the source code to make it readable and comprehensible AND performant.

Volatile as a variable modifier always worked for me in the past, but it got bad press and all compiler writers ignored it as a feature till it became useless. Perhaps it is time to reclaim volatile for D: give it a memory-barrier semantic, so that there can be no instruction reordering around the read and write operations, and make it a tool for those who need it. After all, no-one is actually using it for anything just now, are they?
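
Until then, the nearest approximation I can see in today's D is to route device accesses through core.atomic, which at least pins down the ordering; whether that is right for real memory-mapped I/O is exactly the open question. A sketch, with the register variable invented:

	import core.atomic;

	shared uint deviceCsw;   // invented stand-in for a memory-mapped register

	void enableDevice()
	{
	    // Sequentially consistent load and store: the compiler may not
	    // reorder operations across these, nor elide them.
	    immutable old = atomicLoad(deviceCsw);
	    atomicStore(deviceCsw, old | 0x1);
	}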


(*) OK, a small exaggeration for the late 1970s, when the time scale was
18 ms, but you get my point.

(**) Actually it is much easier to do with build tools such as SCons and
Waf than it ever was with Make, the GNU "Auto" tools (especially on
Windows), or even CMake.

-- 
Russel.
=============================================================================
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.winder@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: russel@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder

October 28, 2013
On 10/28/2013 1:13 AM, Russel Winder wrote:
> My experience, admittedly late 1970s, early 1980s and then early 2000s,
> concurs with yours that only a small amount of code requires this read
> and write behaviour, but where it is needed it is crucial, and it is in
> areas where every picosecond matters (*). I disagree with your point
> about memory-mapped video buffers as a general statement; it depends on
> the buffering and refresh strategy of the buffer. Some frame buffers are
> very picky, and so exact read and write behaviour of the code is needed.
> Less so now, though, fortunately.

I've not only built my own single-board computers with video buffers, but I've also written code for several graphics boards back in the '80s. None needed exact read/write behavior.

> Using functions is a burden here if it involves an actual function
> call; only macros are feasible as units of abstraction. Moreover, this
> is the classic approach to inline assembler: some form of macro so as
> to create a comprehensible abstraction.

If you want every picosecond, you're really best off writing a few lines of inline asm. Then you can craft exactly what you need.


> The problem with inline assembler is that you need versions for every
> target architecture, making it a source code and build nightmare.

When you're writing code for memory-mapped I/O, it is NOT going to be portable, pretty much by definition! (Are there any two different target architectures with exactly the same memory-mapped I/O stuff?)


> OK, there are directory hierarchy idioms and build idioms that make it
> easier (**), but inline assembler should only really be the answer in
> cases where there are hardware instructions on a given target that the
> compiler cannot reasonably be expected to generate from the source
> code. Classics here are the elliptic function libraries and the context
> switch operations.
>
> So the issue is not the approach per se but how that is encoded in the
> source code to make it readable and comprehensible AND performant.
>
> Volatile as a variable modifier always worked for me in the past, but
> it got bad press and all compiler writers ignored it as a feature till
> it became useless. Perhaps it is time to reclaim volatile for D: give
> it a memory-barrier semantic, so that there can be no instruction
> reordering around the read and write operations, and make it a tool for
> those who need it. After all, no-one is actually using it for anything
> just now, are they?

Ask any two people, even ones in this thread, what "volatile" means, and you'll get two different answers. Note that the issues of reordering, caching, cycles, and memory barriers are separate and distinct issues. Those issues also vary dramatically from one architecture to the next.

(For example, what really happens with a+=1 ? Should it generate an INC, or an ADD, or a MOV/ADD/MOV triple for MMIO? Where do the barriers go? Do you even need barriers? Should a LOCK prefix be emitted? How is the compiler supposed to know just how the MMIO works on some particular computer board?)