May 08, 2017
On 05/08/2017 08:42 PM, John Carter wrote:
> On Monday, 8 May 2017 at 20:55:02 UTC, John Carter wrote:
>>
>> C/C++ has been granted an extension of life by the likes of valgrind
>> and purify and *-sanitizer.
>
> Google makes my point for me....
>
> https://opensource.googleblog.com/2017/05/oss-fuzz-five-months-later-and.html
>

That reminds me, I've been thinking for a while: we need a dead-simple D/dub-ified tool to fuzz-test our D projects. Even if it's just a trivial wrapper and DUB package for an existing fuzz tester (heck, that's probably the right way to go anyway), we really should make fuzz testing just as common & easy a thing for D projects as doc generation and unittests.
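To make the idea concrete, here's a rough sketch of what the fuzz entry point of such a wrapper might look like, assuming a libFuzzer-style driver (the kind LDC exposes via -fsanitize=fuzzer). parseMessage() is a hypothetical stand-in for whatever the dub package under test actually exposes, and the dub packaging itself is left out:

	// Hypothetical library function under test.
	void parseMessage(const(char)[] msg)
	{
		import std.exception : enforce;
		enforce(msg.length == 0 || msg[0] != 0xFF, "bad header byte");
	}

	// libFuzzer supplies main() and calls this hook repeatedly with
	// mutated inputs (built with e.g. `ldc2 -fsanitize=fuzzer`).
	extern(C) int LLVMFuzzerTestOneInput(const(ubyte)* data, size_t size)
	{
		auto input = cast(const(char)[]) data[0 .. size];
		try
			parseMessage(input);
		catch (Exception e) { }	// thrown Exceptions are "expected" rejections;
					// only crashes, asserts and hangs count as findings
		return 0;
	}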

May 08, 2017
On Mon, May 08, 2017 at 06:33:08PM +0000, Jerry via Digitalmars-d wrote:
> Anything that goes on the internet already has memory safety.

Is that a subtle joke, or are you being serious?

A LOT of the code out on the internet, both infrastructure and applications, is C code.  And if you know the typical level of quality of a large C project written by 50-100 (or more) employees with rather high turnover, you should be peeing your pants right now. A frightening amount of C code, both in infrastructure (by that I mean stuff like routers, switches, firewalls, and core services like DNS) and in applications (application-level services like webservers, file servers, database servers, etc.), is literally riddled with buffer overflows, null pointer dereferences, off-by-1 string manipulations, and other such savory things.

Recently I've had the dubious privilege of being part of a department-wide push by my employer to audit our codebases (mostly C, with a smattering of C++ and other code, all dealing with various levels of network services and running on hardware expected to be "enterprise" quality and "secure") and to fix security problems and other such bugs, with the help of some static analysis tools. I have to say that, even given my general skepticism about the quality of so-called "enterprise" code, I was rather shaken, not only to find lots of confirmation of my gut feeling that there are major issues in our codebase, but even more by just HOW MANY of them there are.

An unsettlingly large percentage of the bugs / problematic code is in the realm of not handling null pointers correctly.  The simplest case is code that checks for null correctly at the beginning of the function, but then proceeds to dereference the possibly-null pointer with wild abandon thereafter. This may seem like not such a big problem, until you realize that all it takes is for *one* of these literally *hundreds* of instances of wrong code to get exposed to a public interface, and you have a denial-of-service attack waiting for you in your glorious future.

Another unsettlingly common problem is the off-by-1 error in string handling.  Actually, the most unsettling thing in this area is the pervasiveness of strcpy() and strcat() -- even after decades of experience showing that these functions are inherently unsafe and should be avoided wherever possible. Yet they still appear with persistent frequency, introducing hidden vulnerabilities that people overlook because, oh well, we trust the guy who wrote it 'cos he's an expert C coder, so he must have already made sure it's actually OK. Unfortunately, upon closer inspection, a large percentage of such code turns out to contain actual bugs.  Next to this is strncpy(), the touted "safe" variant of strcpy / strcat, except that people keep writing this:

	strncpy(buf, src, sizeof(buf));

Quick, without looking: what's wrong with the above line of code?

Not so obvious, huh?  The problem is that strncpy is, in spite of being the "safe" version of strcpy, badly designed. It does not guarantee buf is null-terminated if src was too long to fit in buf!  Next thing you know -- why, hello, unterminated string used to inject shellcode into your "secure" webserver!

The "obvious" fix, of course, is to leave 1 byte for the \0 terminator:

	strncpy(buf, src, sizeof(buf)-1);

Except that this is *still* wrong, because strncpy doesn't write a '\0' to the end. You have to manually put one there:

	strncpy(buf, src, sizeof(buf)-1);
	buf[sizeof(buf)-1] = '\0';

The second line there has a -1 that lazy/careless C coders often forget, so you end up *introducing* a buffer overrun in the name of fixing another.

This single problem area (improper use of strncpy) accounts for a larger chunk of code I've audited than I dare to admit -- all just timebombs waiting for somebody to write an exploit for.

Then there's the annoyingly common matter of checking return codes. Walter has said this before, and he's spot on: 90% of the C code out there ignores error codes where it shouldn't, so as soon as a normally-working syscall fails for whatever reason, the code cascades down a chain of unexpected control-flow changes and ends in catastrophe. Or rather, in silent corruption of internal data, because any sign that something had gone wrong was conveniently ignored by the caller, of course. And even when you *do* meticulously check every single darn error code evah, it's so ridiculously easy to make a blunder:

	int my_func(mytype_t *input, outbuf_t *output_buf,
	            char *buffer, int size)
	{
		/* Typical lazy way of null-checking (that will blow up
		 * later) */
		myhandle_t *h = input ? input->handle : 0;
		writer_t *w = output_buf ? output_buf->writer : 0;
		char *block = (char *)malloc(size);
		FILE *fp;
		int i;

		if (!buffer)
			return -1; /* typical useless error return code */
				/* (also, memory leak) */

		if (h->control_block) { /* oops, possible null deref */
			fp = fopen("blah", "w");
			if (!fp)
				return -1; /* oops, memory leak */
		}
		if (w->buffered) { /* oops, possible null deref */
			strncpy(buffer, input->data, size); /* oops, unterminated string */
			if (w->write(buffer, size) != 0)
				/* hmm, is 0 the error status, or is it -1? */
				/* also, what if w->write == null? */
			{
				return -1; /* oops, memory leak AND file
						descriptor leak */
			}
		}
		for (i = 0; i <= input->size; i++) {	/* oops, off-by-1 error */
			... /* more nauseating nasty stuff here */
			if (error)
				goto EXIT;
			... /* ad nauseum */
		}
	EXIT:
		if (fp) fclose(fp);	/* oops, uninitialized ptr deref */
		free(block);

		/* Typical lazy way of evading more tedious `if
		 * (func(...) == error) goto EXIT;` style code, which
		 * ends up being even more error-prone */
		return error ? -1 : w->num_bytes_written();
			/* oops, what if w or w->num_bytes_written is
			 * null? */
	}

If you look hard enough, almost every line of C code has one potential problem or another. OK, I exaggerate, but in a large codebase written by 100 people, many of whom have since left the company for greener fields, code of this sort can be found everywhere.  And nobody realizes just how bad it is, because everyone is too busy fixing pointer bugs in their own code to have time to read somebody else's code that doesn't directly concern them.

Another big cause of bugs is C/C++'s lovely convention of not initializing local variables.  Hello, random stack value that usually just so happens to be 0 when we test the code, but becomes something else when the customer runs it, and BOOM, the code tumbles down an unexpected and disastrous chain of wrong steps. And you'd better be praying that it wasn't a pointer, 'cos you know what that means... if you're lucky, it's a random memory corruption that, for the most part, goes undetected until a customer with an obscure setup triggers a visible effect. Then you spend the next 6 months trying to trace the bug back from the visible effect, which is long, long past the actual cause. If you're not so lucky, this could result in leakage of unrelated memory (think Cloudbleed) or, worse, arbitrary code execution. Hello, random remote hacker, welcome to the DoD Nuclear Missile Control Panel. Who would you like to nuke today?

C++ has a lovely footnote to add to this: class/struct members aren't initialized by default, so the ctor has to do it, *explicitly*.  How many programmers are too lazy to bother, trusting instead that the methods will initialize whatever members need initializing?  Keep in mind that a lot of C++ code out there has yet to catch up with the latest "correct" C++ coding style -- there are still way too many god-object classes with far more fields than they should have, and of course, the guy who wrote the ctor was too lazy to initialize all of them explicitly. (Be thankful he wasn't so lazy as to skip writing the ctor altogether!) And inevitably, later on some method will read an uninitialized value and do something unexpected. This doesn't happen when you first write the class, of course. But 50 ex-employees later, 'tis a whole new landscape out there.

These are some of the simpler common flaws I've come across.  I've also seen several very serious bugs that could lead to actual remote exploits, if somebody tried hard enough to find a path to them from the outside.

tl;dr: the C language simply isn't friendly towards memory-safe code. Most C coders (including myself, I'll admit, before this code audit) are unaware of just how bad it is, because over the years we've accumulated a set of idioms for writing safe code (and the scars to prove that, at least in *some* cases, they result in safer code). Too bad our accumulated wisdom isn't enough to prevent *all* of the blunders we still regularly commit, except now we're even less aware of them, because, after all, we are experienced C coders now, so surely we've outgrown such elementary mistakes!  Past experience has honed our eagle eyes to catch the mistakes we've already learned from... unfortunately, that also distracts us from *other* mistakes the language happily lets slip by. It gives us a false sense of security that we could easily detect blunders just by eyeing the code carefully.  I was under that false sense... until I participated in the audit, and discovered to my chagrin that there are a LOT of other bugs I often fail to catch, because I simply didn't have them in mind, or they were more subtle to detect, or out of pure habit I was looking in other directions and so missed an otherwise obvious problem spot.

Walter is probably right that one of C's biggest blunders was to conflate arrays and pointers.  I'd say 85-90% of the bugs I found were directly or indirectly caused by C arrays not carrying length information along with the reference to the data, of which the misuse of strncpy and off-by-1 errors in loops are prime examples. Another big part of C's unsafety is the legacy C library, which contains far too many misdesigned safety-bombs like strcpy and strcat that exist merely for legacy reasons but really ought to have been killed with fire two decades ago. You'd think people would know better by now, but no, they STILL keep writing code that calls these badly designed functions...

In this day and age of automated exploit-hunting bots, it's only a matter of time before somebody, or some*thing*, discovers that sending a certain kind of packet to a certain port on a certain firewall produces an unusual response... and pretty soon, somebody is worming his way into your supposedly secure local network and doing who knows what.  And it's scary how much poorly-written C code is running on the targeted machines, and how much more of it is being written every day, even today, for stuff that's going to run on the backbone of the internet or in some wide-impact online application (hello, Heartbleed!).  Something's gonna give eventually.

As I was participating in the code audit, I couldn't help thinking how many of D's features would have outright prevented a large percentage of the bugs I've found before the code found its way into production.

1) D arrays have length! Imagine that!  This singlehandedly eliminates an entire class of bugs with one blow. No more strcpy/strncpy monstrosities. No more buffer overruns -- thanks to array bounds checks. Not to mention slices eliminating the need for the ubiquitous string copying in C/C++ that represents a not-often-thought-of background source of performance degradation.
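For example, a minimal sketch (not taken from any real codebase) of what length-carrying arrays and O(1) slices buy you:

	void main()
	{
		int[] data = [1, 2, 3, 4];

		// The length travels with the slice; there is no separate size
		// parameter to drift out of sync with the pointer.
		assert(data.length == 4);

		// Out-of-bounds access is caught at runtime in non-release builds
		// (core.exception.RangeError) instead of silently corrupting memory:
		//     auto oops = data[10];	// would throw

		// Slicing is O(1): a new (pointer, length) pair, no copying.
		int[] tail = data[1 .. $];
		assert(tail.length == 3 && tail[0] == 2);
	}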

2) D variables are initialized by default: this would have prevented all of the uninitialized value bugs I found. And where performance matters (hint: it usually doesn't matter where you think it does -- please fess up: which C coder here regularly uses a profiler? Probably only a minority.), you ask for =void explicitly. So when there's a bug involving uninitialized values later, the offending line is easily found.
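A small sketch of what the defaults look like in practice:

	void main()
	{
		int i;		// 0
		double d;	// double.nan, a deliberately "loud" default
		int* p;		// null

		assert(i == 0);
		assert(d != d);	// NaN never compares equal to itself
		assert(p is null);

		// Opting out is explicit and greppable, so if an uninitialized-value
		// bug does show up later, the suspects are easy to find:
		ubyte[512] scratch = void;
		scratch[0] = 42;
	}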

3) Exceptions, love them or hate them, dispense with that horrible C idiom of repeatedly writing `if (func(...)!=OK) goto EXIT;`, which is so horribly error-prone, and so non-DRY, that the programmer is practically motivated to find an excuse NOT to write it that way (and thereby inevitably introduce a bug).  Here I have to add that there's this perverse notion out there that C++ exceptions are "expensive", so in the name of performance people outlaw try/catch blocks and use homegrown (and inevitably buggy, and not necessarily less expensive) alternatives instead.  All the while ignoring the fact that, C/C++ arrays being what they are, too much array copying (especially string copying) happens where in D you'd just be taking a slice in O(1) time.
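A rough sketch of the same flavor of code written with exceptions instead of a goto EXIT ladder (std.stdio.File and enforce are just illustrative choices here, not a claim about any audited code):

	import std.exception : enforce;
	import std.stdio : File, stderr;

	void writeReport(string path, const(ubyte)[] payload)
	{
		enforce(payload.length != 0, "empty payload");

		auto f = File(path, "wb");	// throws ErrnoException if the open fails
		f.rawWrite(payload);		// a failed write also throws
	}	// no `if (...) goto EXIT;` ladder, no error code to forget to check

	void main()
	{
		ubyte[] payload = [0x01, 0x02, 0x03];
		try
			writeReport("report.bin", payload);
		catch (Exception e)
			stderr.writeln("report failed: ", e.msg);
	}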

4) D's switch would have eliminated certain very nasty bugs I discovered, where some code assumed that a variable could only hold certain values when in fact it could hold others, with nasty consequences.  In D, a non-final switch requires a default case: seemingly onerous, but it prevents this class of bugs.  Also, the deprecation of implicit case fallthrough is a step in the right direction, along with `goto case` for when you *want* fallthrough.  Inadvertent fallthrough was one of the bugs I found that happens every so often, but at the same time there were a number of false positives where the fallthrough was intentional.  D solves both problems by having the code explicitly document intent with `goto case`.
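Roughly, the two switch-related safety nets look like this (a made-up enum for illustration):

	enum State { idle, running, stopped }

	void handle(State s, int opcode)
	{
		// final switch: every enum member must be handled and no default is
		// allowed, so adding a new State later is a compile error here.
		final switch (s)
		{
			case State.idle:	break;
			case State.running:	goto case State.stopped; // explicit fallthrough
			case State.stopped:	break;
		}

		// A non-final switch must have a default case, so "impossible"
		// values can't silently fall out the bottom.
		switch (opcode)
		{
			case 0:	break;
			case 1:	break;
			default: assert(0, "unexpected opcode");
		}
	}

	void main()
	{
		handle(State.running, 1);
	}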

5) scope(exit) is also great for preventing resource leaks in a way that doesn't require "code at a distance" (which is very error-prone).
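A sketch of the pattern (the file and buffer are made up for illustration):

	import core.stdc.stdlib : free, malloc;
	import std.stdio : File;

	void process(string path, size_t size)
	{
		auto block = cast(ubyte*) malloc(size);
		scope(exit) free(block);	// cleanup sits right next to the allocation

		auto f = File(path, "r");
		scope(exit) f.close();		// runs on normal return *and* on throw

		// ... work that may return early or throw; every path still
		// frees the buffer and closes the file ...
	}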

The one big class of issues that D doesn't solve is null pointer handling.  Exceptions help to some extent, by making it possible to check at the top and abort immediately if something is null, but it's still possible to have some nasty null pointer bugs in D.  Fortunately, D mostly dispenses with the need to manipulate pointers directly, so the surface area for such bugs is somewhat smaller than in C (and in older-style C++ code, which is unfortunately still prevalent even today).

A good part of the null pointer bugs I found were related to C's lack of closures -- D's delegates would obviate the need for direct pointer manipulation in this case, even if D still suffers from null handling issues.
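For instance, where a C callback API takes a function pointer plus a separate void* context (two more things to pass around and potentially get wrong), a D delegate carries its context with it. A minimal sketch with made-up names:

	import std.stdio : writeln;

	// A callback-taking API: no separate `void *user_data` to thread through.
	void forEachLine(string[] lines, void delegate(string) sink)
	{
		foreach (line; lines)
			sink(line);
	}

	void main()
	{
		size_t total;

		// The delegate closes over `total`; no manual pointer juggling.
		forEachLine(["foo", "barbaz"], (line) { total += line.length; });

		writeln("total chars: ", total);
	}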

Anyway, the point of all this is that C/C++'s safety problems are very real. C/C++ code is very widespread in online services, more of it is being written for them every day, and a lot of that code still suffers from exactly this lack of safety. So safety problems in C/C++ are very relevant today, and will remain a major concern in the near future.

If we can complete the implementation of SafeD (and plug the existing holes in @safe), it could have significant impact in this area.


T

-- 
Not all rumours are as misleading as this one.
May 09, 2017
On Tuesday, 9 May 2017 at 06:15:12 UTC, H. S. Teoh wrote:
> On Mon, May 08, 2017 at 06:33:08PM +0000, Jerry via Digitalmars-d wrote:
>
>
> 	strncpy(buf, src, sizeof(buf));
>
> Quick, without looking: what's wrong with the above line of code?
>
> Not so obvious, huh?  The problem is that strncpy is, in spite of being the "safe" version of strcpy, badly designed. It does not guarantee buf is null-terminated if src was too long to fit in buf!  Next thing you know -- why, hello, unterminated string used to inject shellcode into your "secure" webserver!
>
> The "obvious" fix, of course, is to leave 1 byte for the \0 terminator:
>
> 	strncpy(buf, src, sizeof(buf)-1);
>
> Except that this is *still* wrong, because strncpy doesn't write a '\0' to the end. You have to manually put one there:
>
> 	strncpy(buf, src, sizeof(buf)-1);
> 	buf[sizeof(buf)-1] = '\0';
>
> The second line there has a -1 that lazy/careless C coders often forget, so you end up *introducing* a buffer overrun in the name of fixing another.
>
> This single problem area (improper use of strncpy) accounts for a larger chunk of code I've audited than I dare to admit -- all just timebombs waiting for somebody to write an exploit for.
>
Adding to that, strncpy() is also a performance trap: it does not stop when the input string ends, it fills the rest of the buffer with zeros. So

char buff[4000];
strncpy(buff, "hello", sizeof buff);

will write 4000 bytes on every call.

The thing with strncpy() is that it's a badly named function. It is named as if it were a string function, but it isn't one. Had it been named memncpy() or something like that, it wouldn't confuse most C programmers. If I get my C lore right, the function was originally written for filling in the fixed 14-byte file name field of an old Unix directory entry, as in strncpy(dirent, filename, 14), or something like that.


May 09, 2017
On Tuesday, 9 May 2017 at 06:15:12 UTC, H. S. Teoh wrote:
>
> 	int my_func(mytype_t *input, outbuf_t *output_buf,
> 	            char *buffer, int size)
> 	{
> 		/* Typical lazy way of null-checking (that will blow up
> 		 * later) */
> 		myhandle_t *h = input ? input->handle : 0;
> 		writer_t *w = output_buf ? output_buf->writer : 0;
> 		char *block = (char *)malloc(size);

Hey, you've been outed as a C++ programmer. A real C programmer never casts a void *. In this specific case, casting the return of malloc() can mask a nasty bug: if you have forgotten to include the header declaring the function, the compiler assumes a function returning int, and the cast suppresses the righteous warning message the compiler would otherwise give. On 64-bit machines the returned pointer would then be truncated to its lower 32 bits. Unfortunately, on Linux, since the heap starts in the lower 4 GiB of the address space, the code would run for a long time before it crashed. On Solaris/SPARC it would crash immediately, as binaries are loaded at address 0x1_0000_0000 of the address space.

> 		FILE *fp;
> 		int i;
>
> 		if (!buffer)
> 			return -1; /* typical useless error return code */
> 				/* (also, memory leak) */
>
> 		if (h->control_block) { /* oops, possible null deref */
> 			fp = fopen("blah", "w");
> 			if (!fp)
> 				return -1; /* oops, memory leak */
> 		}
> 		if (w->buffered) { /* oops, possible null deref */
> 			strncpy(buffer, input->data, size); /* oops, unterminated string */
> 			if (w->write(buffer, size) != 0)
> 				/* hmm, is 0 the error status, or is it -1? */
> 				/* also, what if w->write == null? */

Or is it inspired by fwrite(), which returns the number of records written? In that case a return of 0 might or might not be an error, depending on size.

> 			{
> 				return -1; /* oops, memory leak AND file
> 						descriptor leak */
> 			}
> 		}
> 		for (i = 0; i <= input->size; i++) {	/* oops, off-by-1 error */
> 			... /* more nauseating nasty stuff here */
> 			if (error)
> 				goto EXIT;
> 			... /* ad nauseum */
> 		}
> 	EXIT:
> 		if (fp) fclose(fp);	/* oops, uninitialized ptr deref */

Worse, you didn't check the return of fclose() on a FILE opened for writing. fclose() can fail if the disk is full: since the FILE is buffered, the last fwrite() might not have been flushed yet, so it is fclose() that ends up writing the last block, and that write can fail, but then the app isn't even able to report it.

> 		free(block);
>
> 		/* Typical lazy way of evading more tedious `if
> 		 * (func(...) == error) goto EXIT;` style code, which
> 		 * ends up being even more error-prone */
> 		return error ? -1 : w->num_bytes_written();
> 			/* oops, what if w or w->num_bytes_written is
> 			 * null? */
> 	}
>

May 09, 2017
On Tuesday, 9 May 2017 at 06:15:12 UTC, H. S. Teoh wrote:
> On Mon, May 08, 2017 at 06:33:08PM +0000, Jerry via Digitalmars-d wrote:
>> [...]
>
> Is that a subtle joke, or are you being serious?
>
> [...]

Heh, I saw you wrote the post and knew it would be long, then I kept scrolling and scrolling... :)

Please, please, please submit this as a post on the D blog, perhaps prefaced by the Walter/Scott exchange and with some links to the issues you mention and the relevant portions of the D reference.  I think it would do well.
May 09, 2017
On Tuesday, 9 May 2017 at 09:22:13 UTC, Joakim wrote:
> On Tuesday, 9 May 2017 at 06:15:12 UTC, H. S. Teoh wrote:
>> On Mon, May 08, 2017 at 06:33:08PM +0000, Jerry via Digitalmars-d wrote:
>>> [...]
>>
>> Is that a subtle joke, or are you being serious?
>>
>> [...]
>
> Heh, I saw you wrote the post and knew it would be long, then I kept scrolling and scrolling... :)
>
> Please, please, please submit this as a post on the D blog, perhaps prefaced by the Walter/Scott exchange and with some links to the issues you mention and the relevant portions of the D reference.  I think it would do well.

+1
May 09, 2017
On Saturday, 6 May 2017 at 17:59:38 UTC, thedeemon wrote:
> On Saturday, 6 May 2017 at 06:26:29 UTC, Joakim wrote:
>> Walter: I believe memory safety will kill C.
>
> And then null safety will kill D. ;)

I actually think this is more likely than memory safety killing C. Both are very important, but D is just easier to kill than C, for historical reasons.
May 09, 2017
On 5/8/2017 1:55 PM, John Carter wrote:
> On Saturday, 6 May 2017 at 06:26:29 UTC, Joakim wrote:
>
>> Walter: I believe memory safety will kill C.
>
> C/C++ has been granted an extension of life by the likes of valgrind and purify
> and *-sanitizer.

I agree. But one inevitably runs into problems relying on valgrind and other third-party tools:

1. it isn't part of the language

2. it may not be available on your platform

3. somebody has to find it, install it, and integrate it into the dev/test process

4. it's incredibly slow to run valgrind, so there are powerful tendencies to skip it

valgrind is a stopgap measure, and has saved me much grief over the years, but it is NOT the solution.

May 09, 2017
On Tuesday, 9 May 2017 at 14:13:31 UTC, Walter Bright wrote:
> On 5/8/2017 1:55 PM, John Carter wrote:
>> On Saturday, 6 May 2017 at 06:26:29 UTC, Joakim wrote:
>>
>>> Walter: I believe memory safety will kill C.
>>
>> C/C++ has been granted an extension of life by the likes of valgrind and purify
>> and *-sanitizer.
>
> I agree. But one inevitably runs into problems relying on valgrind and other third party tools:
>
> 1. it isn't part of the language
>
> 2. it may not be available on your platform
>
> 3. somebody has to find it, install it, and integrate it into the dev/test process
>
> 4. it's incredibly slow to run valgrind, so there are powerful tendencies to skip it
>
> valgrind is a stopgap measure, and has saved me much grief over the years, but it is NOT the solution.

And it doesn't catch everything. I had a case yesterday at work where one of the file converters, written 15 years ago, suddenly crashed in production*. It came from an upstream bug in a script that filled one attribute with garbage. I tried to reproduce the bug in the development environment and, funnily, it didn't crash with the newest version of the base library (production is one version behind). The garbage in the attribute triggered a buffer overflow of a fixed-size array (496 UTF-16 characters written into a 200-character buffer). This converter is one of the last ones with fixed-size arrays. The interesting observation was that valgrind catches the buffer overflow when the converter is linked with version 2.31 of the main library, but is completely silent with version 2.32. The changes in that library are minimal and in parts that have nothing to do with this app; only the placement of variables in the data and bss segments changes. It is surprising to see such a big buffer overflow completely missed by valgrind.

TL;DR: valgrind does not always catch buffer overflows, especially if the memory overwritten is not on the heap but in the data or bss segment. There it cannot add guard pages as it does on the heap.

* To give a little context: I work at the European Commission on the central translation memory system called Euramis (probably the biggest in the world, with more than a billion segments). The system is used intensively by the translators of all the European institutions, and without it nothing would be possible. The issue with it is that the back end is written in C and the code goes back to 1990. My colleagues and I have managed to modernize the system and catch most of the code issues with intensive use of C99 idioms, the newest gcc and clang diagnostics, and also valgrind and such things.
May 09, 2017
On Tuesday, 9 May 2017 at 09:22:13 UTC, Joakim wrote:
> On Tuesday, 9 May 2017 at 06:15:12 UTC, H. S. Teoh wrote:
>> On Mon, May 08, 2017 at 06:33:08PM +0000, Jerry via Digitalmars-d wrote:
>>> [...]
>>
>> Is that a subtle joke, or are you being serious?
>>
>> [...]
>
> Heh, I saw you wrote the post and knew it would be long, then I kept scrolling and scrolling... :)
>
> Please, please, please submit this as a post on the D blog, perhaps prefaced by the Walter/Scott exchange and with some links to the issues you mention and the relevant portions of the D reference.  I think it would do well.

+=1; Yes, good idea!