January 22, 2013
Am 22.01.2013 19:33, schrieb Freddie Chopin:
> On Tuesday, 22 January 2013 at 18:14:30 UTC, Paulo Pinto wrote:
> ... [cut]
>> Fair enough, I guess even C has issues on those systems, right?
>
> If we stick to ARM (like Cortex-M3) there are no issues other than
> memory limitations, and it generally concerns mostly RAM, as code memory
> is usually big enough (hundreds of kB usually, up to 512kB, sometimes
> 1MB). That's why you cannot get too fancy with your code, and -
> unfortunately - most of nice programming "tricks" are
> dynamic-memory-only...
>
> On the other hand, maybe I should ask what do you consider "an issue"?
> There's definitely nothing "non-standard" in the C/C++ that you use here
> - there's just no OS (but you can have an RTOS - scheduler), no POSIX
> (but there are POSIX-like RTOSes) and not-a-lot of RAM (there's no
> library for fixing that [; ).
>
> 4\/3!!

I don't really have much embedded experience besides assembly programming in the old days (Z80, M68000, x86, MIPS, a self-built processor for a digital circuits class).

My understanding is that processors in the microcontroller class, the ones with memory on the order of bytes or kilobytes, usually have C compilers that only implement part of the ANSI standard, given the hardware constraints.

Meaning that only a very small subset of data types is supported, library support is limited, and there are lots of compiler extensions to make use of the processor and its on-die ports.

I used to follow PIC articles for a while in the Elektor magazine, hence
my fuzzy knowledge about this.

--
Paulo

January 22, 2013
On Tuesday, 22 January 2013 at 21:02:32 UTC, Paulo Pinto wrote:
> I don't really have much embedded experience besides assembly programming in the old days (Z80, M68000, x86, MIPS, a self-built processor for a digital circuits class).
>
> My understanding is that processors in the microcontroller class, the ones with memory on the order of bytes or kilobytes, usually have C compilers that only implement part of the ANSI standard, given the hardware constraints.
>
> Meaning that only a very small subset of data types is supported, library support is limited, and there are lots of compiler extensions to make use of the processor and its on-die ports.

Nothing like this here - you have all types, you have complete libm, libc and libstdc++ with everything you need. There are no compiler extensions other than the typical GCC __attribute__ used to declare interrupts, which is not really necessary on most Cortex-M3 chips. These are really powerful chips with 1.25 DMIPS/MHz and clocks around 70MHz (ranging from 24MHz to 204MHz)... There's even a dual-core chip - the LPC43xx - which has a Cortex-M4F (with a single-precision hardware FPU and some SIMD instructions) and a Cortex-M0, both running at 204MHz <:

So these are not very much like 8-bit microcontrollers (AVR, PIC, ...)

That's why I think D would fit such chips quite nicely (; Sans the GC of course... Maybe without exceptions too, but I don't think that would be possible (it's pretty hard in C++)...

4\/3!!
January 23, 2013
On Tuesday, 22 January 2013 at 11:41:14 UTC, Sergei Nosov wrote:
> But the trend is C is becoming more and more a high-level assembler.


http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html


This blog post (the first in a series of three) tries to explain some of these issues so that you can better understand the tradeoffs and complexities involved, and perhaps learn a few more of the dark sides of C.
It turns out that C is not a "high level assembler" like many experienced C programmers (particularly folks with a low-level focus) like to think, and that C++ and Objective-C have directly inherited plenty of issues from it.
January 23, 2013
On Tuesday, 22 January 2013 at 21:14:21 UTC, Freddie Chopin wrote:
> On Tuesday, 22 January 2013 at 21:02:32 UTC, Paulo Pinto wrote:
>> I don't really have much embedded experience besides assembly programming in the old days (Z80, M68000, x86, MIPS, a self-built processor for a digital circuits class).
>>
>> My understanding is that processors in the microcontroller class, the ones with memory on the order of bytes or kilobytes, usually have C compilers that only implement part of the ANSI standard, given the hardware constraints.
>>
>> Meaning that only a very small subset of data types is supported, library support is limited, and there are lots of compiler extensions to make use of the processor and its on-die ports.
>
> Nothing like this here - you have all types, you have complete libm, libc and libstdc++ with everything you need. There are no compiler extensions other than the typical GCC __attribute__ used to declare interrupts, which is not really necessary on most Cortex-M3 chips. These are really powerful chips with 1.25 DMIPS/MHz and clocks around 70MHz (ranging from 24MHz to 204MHz)... There's even a dual-core chip - the LPC43xx - which has a Cortex-M4F (with a single-precision hardware FPU and some SIMD instructions) and a Cortex-M0, both running at 204MHz <:
>
> So these are not very much like 8-bit microcontrollers (AVR, PIC, ...)
>
> That's why I think D would fit such chips quite nicely (; Sans the GC of course... Maybe without exceptions too, but I don't think that would be possible (it's pretty hard in C++)...
>
> 4\/3!!

Thanks for the valuable explanation.
January 23, 2013
Am 23.01.2013 08:59, schrieb Mehrdad:
> On Tuesday, 22 January 2013 at 11:41:14 UTC, Sergei Nosov wrote:
>> But the trend is C is becoming more and more a high-level assembler.
>
>
> http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html
>
>
> This blog post (the first in a series of three) tries to explain some of
> these issues so that you can better understand the tradeoffs and
> complexities involved, and perhaps learn a few more of the dark sides of C.
> It turns out that C is not a "high level assembler" like many
> experienced C programmers (particularly folks with a low-level focus)
> like to think, and that C++ and Objective-C have directly inherited
> plenty of issues from it.

Yes, I keep repeating that.

Many developers have no idea that modern CPUs do lots of things that invalidate the concept of C as a "high level assembler".

This is most likely fuelled by the fact that many developers don't learn modern CPU architectures nowadays.

--
Paulo
January 24, 2013
On Wednesday, 23 January 2013 at 21:14:26 UTC, Paulo Pinto wrote:

> Many developers have no idea that modern CPUs do lots of things that invalidate the concept of C as a "high level assembler".
>

Paulo, the most important features of C that make it a "high-level assembler" are pointers and pointer arithmetic. What the hell are "modern CPUs" doing wrong with that?

Thanks,
Oleg.
January 24, 2013
On Thu, Jan 24, 2013 at 04:59:15AM +0100, Oleg Kuporosov wrote:
> On Wednesday, 23 January 2013 at 21:14:26 UTC, Paulo Pinto wrote:
> 
> >Many developers have no idea that modern CPUs do lots of things that invalidate the concept of C as a "high level assembler".
> >
> 
> Paulo, the most important features of C that make it a "high-level assembler" are pointers and pointer arithmetic. What the hell are "modern CPUs" doing wrong with that?
[...]

For one thing, modern CPUs have pipelines and caches. Modern optimizing C compilers often rearrange instructions in order to maximize performance by reducing (or eliminating) pipeline hazards and cache misses. The resulting assembly code often looks nothing like the source code. (In fact, some CPUs do this internally as well, and optimizing compilers often rearrange the code in order to take maximum advantage of what the CPU is doing.)

The result of this is that many of the so-called "optimizations" that C programmers like to do by hand (and I am among them) actually have no real benefit, and in fact sometimes result in worse performance, because they obscure your intent to the compiler, so the compiler is unable to produce the best code for it.
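
For example (just a rough sketch in D, with made-up function names), a hand-"optimized" pointer-bumping loop and the plain version typically compile to essentially the same machine code, and the plain one is easier for both the optimizer and the reader to follow:

int sumPtr(const(int)* p, size_t n) {
    // "clever" manual pointer arithmetic
    int s = 0;
    const(int)* end = p + n;
    while (p != end)
        s += *p++;
    return s;
}

int sumSlice(const(int)[] a) {
    // straightforward version; the compiler can unroll/vectorize this just as well
    int s = 0;
    foreach (x; a)
        s += x;
    return s;
}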


T

-- 
Ignorance is bliss... but only until you suffer the consequences!
January 24, 2013
On Thursday, 24 January 2013 at 04:31:24 UTC, H. S. Teoh wrote:
> For one thing, modern CPUs have pipelines and caches. Modern optimizing C compilers often rearrange instructions in order to maximize performance by reducing (or eliminating) pipeline hazards and cache misses. The resulting assembly code often looks nothing like the source code. (In fact, some CPUs do this internally as well, and optimizing compilers often rearrange the code in order to take maximum advantage of what the CPU is doing.)
>
> The result of this is that many of the so-called "optimizations" that C programmers like to do by hand (and I am among them) actually have no real benefit, and in fact sometimes result in worse performance, because they obscure your intent to the compiler, so the compiler is unable to produce the best code for it.

 I remember doing things like that. If I was dividing something by 8, I would shift right by 3 instead; although it does the same job, it hides my intent from readers (and even from myself) and might keep the compiler from making better optimizations. After reading a lot about compilers, I came to the conclusion that they do a decent job of finding such cases, replacing them with better choices, and condensing code: rather than doing the math for a fixed set of calculations at run time, the compiler pre-calculates the results and puts them into the variables that are used, drops whole sections that don't do anything, etc.
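
 For instance (just a small sketch in D, names made up), both of these compile down to the same right shift, so writing the readable one costs nothing:

uint divEight(uint x)   { return x / 8;  } // compiler emits a right shift anyway
uint shiftThree(uint x) { return x >> 3; } // same machine code, but the intent is hidden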

 In today's CPUs, the C compiler is somewhat obsolete. Half the time, to make use of special hardware (say, the MPUs in video cards) you need to write assembly code anyway to get access to it. The compiler might make use of MMX or other instruction sets, but that seems a bit unlikely on its own without some hints or certain code patterns that suggest heavy use it would benefit from (and without the hardware flags saying the target can handle said instructions).
January 24, 2013
On 1/23/2013 9:10 PM, Era Scarecrow wrote:
>   I remember doing things like that. If I was dividing something by 8, I would
> shift right by 3 instead;

Compilers were doing that optimization 35 years ago, and probably decades longer than that.

Generally, if you're thinking about doing an optimization, it pays to check the output of the compiler, as it has probably beaten you to it :-)
January 24, 2013
On Thursday, 24 January 2013 at 10:21:06 UTC, Walter Bright wrote:
> On 1/23/2013 9:10 PM, Era Scarecrow wrote:
>>  I remember doing things like that. If I was dividing something by 8, I would
>> shift right by 3 instead;
>
> Compilers were doing that optimization 35 years ago, and probably decades longer than that.
>
> Generally, if you're thinking about doing an optimization, it pays to check the output of the compiler, as it has probably beaten you to it :-)

Well, some micro-optimizations pay off, but any straightforward one is already done by the compiler. Even some tricky ones, in fact.

One that is not done, to the best of my knowledge, is the dual loop:

uint countChar(char* cstr, char c) {
    uint count;
    while(*cstr) {
        // scan ahead to the next occurrence of c (or the end of the string)
        while(*cstr && *cstr != c) cstr++;

        if(*cstr == c) {
            count++;
            cstr++;
        }
    }
    return count;
}

Which is faster than the straightforward way of doing things.
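
For comparison, the straightforward version would be something along these lines (just a sketch, same assumption of a null-terminated input; the name is made up):

uint countCharSimple(char* cstr, char c) {
    uint count;
    for (; *cstr; cstr++) {
        if (*cstr == c)
            count++;
    }
    return count;
}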