May 10, 2015
On Sunday, 10 May 2015 at 12:43:31 UTC, Kagamin wrote:
> On Saturday, 9 May 2015 at 16:59:35 UTC, Jens Bauer wrote:
>> ... "System calls" will need to access the peripherals in some way, in order to send data to for instance a printer or harddisk. If the way it's done is using a memory location, then it's necessary to tell the compiler that this is not ordinary memory, but I/O-memory AKA hardware address space.
>
> Userland code still uses system calls and not global variables,

I think it is essential to emphasize that I/O-space is not like variables and cannot be compared to them.
Variables reside in RAM. I/O-space is outside RAM and is usually not accessible to anything but the kernel and drivers.
On simple microcontrollers there are no "user" and "supervisor" modes, so I/O-space can be accessed from any part of the program. You could say that such microcontrollers always run in 'supervisor mode' or 'privileged mode'.

> whatever is expressed in read(2) signature tells the compiler enough to pass data via buffer.

Yes. If we take 'read' as an example, the system call takes your data block and at some point hands it over to the 'driver'. The driver receives the data, but where does it put it in order to get it onto your hard disk?
On some systems, it writes to a hard disk controller, which resides in I/O-space. This hard disk controller is not software, it's hardware. That means the data written to I/O-space, a.k.a. hardware registers, goes directly out onto the PCB traces towards an external chip outside the CPU. That chip is a bridge chip, which is only the middle-man; the data is passed on to another chip connected to the bridge, and this other chip will then see "oh, it's a command, I should move the arm that holds the hard disk's read/write head"; it also receives a position. Such a command may be 2 bytes in size, and a series of such commands is necessary before the actual data transfer can begin. When the hard disk head is in the right position and all the other preparations have been made, the CPU can start transferring the data held in the buffer in RAM, perhaps byte by byte, until all of it has been transferred.
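
In C terms, that "move the head" step boils down to a couple of stores to hardware registers. To be clear, the controller, the addresses and the command encoding below are made up purely for illustration, not any real chip:

#include <stdint.h>

#define DISK_CMD_REG  (*(volatile uint8_t *)0x4000A000u)  /* hypothetical command register  */
#define DISK_ARG_REG  (*(volatile uint8_t *)0x4000A001u)  /* hypothetical argument register */

#define DISK_CMD_SEEK 0x20u                               /* made-up "move the head" command */

void disk_seek(uint8_t position)
{
    /* These two stores never land in RAM; they go out over the bus,
     * through the bridge chip, to the disk controller. */
    DISK_CMD_REG = DISK_CMD_SEEK;
    DISK_ARG_REG = position;
}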

I'm sorry for such a tedious old-fashioned example, but it really explains it the best way.
Today, we have DMA: we set a pointer and a length, give the command "Begin", and the data is transferred automatically, so the CPU is free to do other things while our transfer runs in the background - but the basics are the same. To tell the DMA controller where to start transferring from and how many bytes to transfer, and to trigger the transfer, one has to write to I/O-space (on most systems).
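
The DMA case looks roughly like this in C; DMA_SRC, DMA_LEN, DMA_CTRL and the START bit are hypothetical registers, not any particular controller's layout:

#include <stdint.h>

#define DMA_SRC   (*(volatile uint32_t *)0x40010000u)  /* where to read from (RAM)  */
#define DMA_LEN   (*(volatile uint32_t *)0x40010004u)  /* how many bytes            */
#define DMA_CTRL  (*(volatile uint32_t *)0x40010008u)  /* control / status register */
#define DMA_START 0x01u

void dma_start(const void *buffer, uint32_t length)
{
    DMA_SRC  = (uint32_t)(uintptr_t)buffer;  /* the pointer                                */
    DMA_LEN  = length;                       /* the length                                 */
    DMA_CTRL = DMA_START;                    /* "Begin" - the transfer now runs on its own */
}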

Variables do not behave like peripherals at all, because they reside in normal RAM (well, usually they do, especially on computers).

Imagine you write the value 0xA5 to address 0xFFFF3840.
If you have all interrupts disabled and read the value back immediately afterwards, what do you expect to read?
Yes, of course, you expect to read 0xA5. But you will never read that value, because the hardware always reads this particular I/O location as 0x13.
Meanwhile, the value you wrote is somehow readable at address 0xFFFF382C; whatever you write to 0xFFFF3840 is immediately readable at 0xFFFF382C.
If this is confusing, then wait until you hear this: some peripherals require you to *read* an address in order to clear an interrupt-pending bit. As soon as you read it, the pending bit is cleared and you can get another interrupt, but not before that happens.
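
In C that behaviour would look something like this (the first two addresses are the ones from the text; the third, read-to-clear register is a hypothetical address added for illustration):

#include <stdint.h>

#define REG_WRITE_PORT  (*(volatile uint32_t *)0xFFFF3840u)  /* always reads back as 0x13        */
#define REG_READ_MIRROR (*(volatile uint32_t *)0xFFFF382Cu)  /* where the written value shows up */
#define IRQ_CLEAR_REG   (*(volatile uint32_t *)0xFFFF3900u)  /* hypothetical read-to-clear reg   */

void weird_io_example(void)
{
    REG_WRITE_PORT = 0xA5u;
    uint32_t a = REG_WRITE_PORT;   /* reads 0x13, not 0xA5 */
    uint32_t b = REG_READ_MIRROR;  /* reads 0xA5           */

    uint32_t c = IRQ_CLEAR_REG;    /* the read itself clears the interrupt-pending bit */
    (void)a; (void)b; (void)c;
}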

This might sound weird, but how about this: in another location on the same chip, you'll find that you need to write a one in order to clear a bit in a register. However, not all bits in this 32-bit register act that way; some are 'sticky', some are read-only and some are write-one-to-clear.
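
A sketch of such a mixed register; which bit is which is made up for illustration:

#include <stdint.h>

#define STATUS_REG (*(volatile uint32_t *)0x40020000u)  /* hypothetical status register */
#define ERROR_FLAG (1u << 3)                            /* write-one-to-clear           */
#define LINK_UP    (1u << 0)                            /* read-only                    */

void clear_error(void)
{
    /* Writing 1 to the write-one-to-clear bit clears it; writing 0 to the
     * other bit positions leaves them untouched. The usual read-modify-write
     * pattern would be wrong here - it could clear flags you never meant to. */
    STATUS_REG = ERROR_FLAG;
}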

Alright, alright, we're crazy enough now, aren't we ?
-NO! :) Another kind of peripheral resets a counter whenever you write *any* value to a hardware register. It does not matter which value; pick one, and the address you write to will be cleared to 0. Reading it immediately afterwards will give you either 0, or perhaps 1 if it has already started counting again.
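
Sketched in C (COUNTER_REG is, again, a made-up address):

#include <stdint.h>

#define COUNTER_REG (*(volatile uint32_t *)0x40030010u)  /* hypothetical counter register */

void restart_count(void)
{
    COUNTER_REG = 0xDEADBEEFu;   /* any value will do - the write is what resets it  */
    uint32_t n = COUNTER_REG;    /* reads 0, or perhaps 1 if it already ticked again */
    (void)n;
}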

... That means I/O-space isn't just about atomic operations, nor is it only about writing values in the right order; it may be completely nuts and twist your brain. :)

Thus ... having this kind of space, which is addressable the same way RAM is, but behaves dramatically differently, one needs to be able to tell the compiler that "this is not RAM and cannot behave as RAM". C does this with the 'volatile' keyword, which is often just used to share variables between task-time and interrupt-time. But what volatile is really about is telling the compiler that ... (a short sketch follows the list)
1: It is not allowed to do *any* caching of the value.
2: It cannot predict which value the address may contain.
3: It is not allowed to move code across an access to such an address (e.g. it may not move an instruction from before the access to after it, or vice versa).
4: Just in case I forgot something, put it here. ;)
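
As a minimal C sketch of rules 1-3 (CMD_REG and STAT_REG are hypothetical memory-mapped registers, not any particular chip):

#include <stdint.h>

#define CMD_REG  (*(volatile uint32_t *)0x40040000u)   /* hypothetical command register */
#define STAT_REG (*(volatile uint32_t *)0x40040004u)   /* hypothetical status register  */

void issue_command(uint32_t cmd)
{
    CMD_REG = cmd;                    /* rule 3: this store may not be moved past the reads below    */
    while ((STAT_REG & 0x1u) == 0)    /* rules 1 & 2: re-read every time; never assume a stale value */
        ;                             /* wait until the hardware reports completion                  */
}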

(I'm sorry for the long explanation, I hope it wasn't too boring).

>>> Shared is supposed to prevent the programmer from accidentally putting unshared data in a shared context. Expectedly people wanted it to be a silver bullet for concurrency, instead std.concurrency provides high-level concurrency safety.
>>
>> In other words, it's the opposite of 'static'?
>
> Whether data is shared or not is not tied to its storage class, that's why its shared nature is expressed in its type and storage class can be anything; for the same reason shared type qualifier is transitive.

This helps me a lot in understanding the nature of 'shared'.
Thank you for providing these details. :)