December 14, 2014
On Sunday, 14 December 2014 at 14:09:57 UTC, Joakim wrote:
> On Sunday, 14 December 2014 at 11:53:56 UTC, Paulo Pinto wrote:
>> I have seen this in every project where we replaced legacy C++ systems by new ones implemented in .NET and Java.
>>
>> First people will complain that the performance isn't comparable, they are bloated, and so on.
>>
>> The project goes forward, as it was a management decision.
>>
>> Then it goes live, some hiccups that make existing C++ developers rejoice that they were right after all
>>
>> Lots of bug reports get generated and application performance gets fine-tuned.
>>
>> A few months later systems are running, end users barely see any difference and a few C++ developers saying that the new systems aren't that bad after all.
>
> I don't doubt that this has been your experience on enterprise projects with a known and stable userbase, but you can't tell me you were able to support the same amount of users per server using java/.net as C++.  Paying for more servers but less for java/.net development may be a worthwhile tradeoff for such stable enterprise rollouts, but any time you have to scale, I doubt java/.net can keep up.
>
> As always, different tools for different uses.  Hopefully, D can one day be polished and mainstream enough for the enterprise use case and it will be efficient enough to be deployed at scale too. :)


You mean scale like Twitter and LinkedIn?

In my case, one of two examples of such projects was a software stack for network monitoring and data aggregation for mobile networks, all the way down to the network elements.

The old system was a mix of Perl, C++/CORBA and Motif. The new system is all Java, with a small C stack for resource-constrained elements.

Another example was replacing C++ applications in medical image analysis with a 90% .NET stack and a mix of C++/Assembly for image filters and P/Invoke driver glue.

The problem is that average coders don't learn to optimize code, and in the end most businesses will just shell out money for more hardware rather than for software development time.

We had high-performance systems running with stable GC cycles, after doing the required optimization work. That is why tools like VisualVM and Mission Control exist.


--
Paulo




December 14, 2014
On Sunday, 14 December 2014 at 17:09:31 UTC, Paulo Pinto wrote:
> On Sunday, 14 December 2014 at 14:09:57 UTC, Joakim wrote:
>>
>> I don't doubt that this has been your experience on enterprise projects with a known and stable userbase, but you can't tell me you were able to support the same amount of users per server using java/.net as C++.  Paying for more servers but less for java/.net development may be a worthwhile tradeoff for such stable enterprise rollouts, but any time you have to scale, I doubt java/.net can keep up.
>
> You mean scale like Twitter and LinkedIn?

Java NIO has the potential to be really scalable, and the new Netty apparently uses it to great effect.  You'll never be able to park as many connections using Java as you would in C, but concurrent throughput is probably pretty close when done properly.

My issue with Java is just that because of how the library is designed, you're fighting against it by trying to limit dynamic allocations, so it will probably never be a terribly natural experience.  At the same time, it is waaaaay easier to find competent Java programmers, which is a significant factor when making a business decision.

My personal preference is still for C++ done in a similar manner as vibe.d as I think it's the sweet spot between ease of use and scalability provided you have a talented team, but I've seen Java be used successfully for servicing hundreds of millions of users with a high concurrent throughput.
December 14, 2014
On Sunday, 14 December 2014 at 17:09:31 UTC, Paulo Pinto wrote:
> You mean scale like Twitter and LinkedIn?

Maybe that's why they still lose money hand over fist, especially Twitter, because of all the extra servers they have to buy. :p By comparison, Whatsapp was able to put millions of users on a server with Erlang and become profitable with much less revenue:

http://forum.dlang.org/post/bmvwftlyvlgmuehrtvlg@forum.dlang.org

> In my case, one of two examples of such projects was a software stack for network monitoring and data aggregation for mobile networks, all the way down to the network elements.
>
> The old system was a mix of Perl, C++/CORBA and Motif. The new system is all Java, with a small C stack for resource-constrained elements.
>
> Another example was replacing C++ applications in medical image analysis with a 90% .NET stack and a mix of C++/Assembly for image filters and P/Invoke driver glue.

It is instructive that you're dropping down to C/C++/Assembly in each of these examples: that's not really making the case for java/.net on their own.

> The problem is that average coders don't learn to optimize code, and in the end most businesses will just shell out money for more hardware rather than for software development time.

Yeah, it's all about the particular job and what the tradeoffs are there.  Most online apps don't need to scale to extremes, which is why they're mostly not written in C++.
December 14, 2014
On Sunday, 14 December 2014 at 18:25:26 UTC, Joakim wrote:
> On Sunday, 14 December 2014 at 17:09:31 UTC, Paulo Pinto wrote:
>> You mean scale like Twitter and LinkedIn?
>
> Maybe that's why they still lose money hand over fist, especially Twitter, because of all the extra servers they have to buy. :p By comparison, Whatsapp was able to put millions of users on a server with Erlang and become profitable with much less revenue:
>
> http://forum.dlang.org/post/bmvwftlyvlgmuehrtvlg@forum.dlang.org

I don't really understand how you can simultaneously advertise Erlang and bash Java for inefficiency.  I think the core concurrency model in Erlang is really great, and it scales horizontally to great effect, but it's a bear to do TCP work in and is far less efficient than Java, let alone C++.
December 14, 2014
On Sunday, 14 December 2014 at 18:25:26 UTC, Joakim wrote:
> On Sunday, 14 December 2014 at 17:09:31 UTC, Paulo Pinto wrote:
>> You mean scale like Twitter and LinkedIn?
>
> Maybe that's why they still lose money hand over fist, especially Twitter, because of all the extra servers they have to buy. :p By comparison, Whatsapp was able to put millions of users on a server with Erlang and become profitable with much less revenue:
>
> http://forum.dlang.org/post/bmvwftlyvlgmuehrtvlg@forum.dlang.org
>
>> In my case, one of two examples of such projects was a software stack for network monitoring and data aggregation for mobile networks, all the way down to the network elements.
>>
>> The old system was a mix of Perl, C++/CORBA and Motif. The new system is all Java, with a small C stack for resource-constrained elements.
>>
>> Another example was replacing C++ applications in medical image analysis with a 90% .NET stack and a mix of C++/Assembly for image filters and P/Invoke driver glue.
>
> It is instructive that you're dropping down to C/C++/Assembly in each of these examples: that's not really making the case for java/.net on their own.

I was expecting a comment like that. :)

In the first case, certain network elements are quite resource constrained,
so you just have a real-time OS doing SNMP stuff and other control operations. So only a small C library could be delivered on those.

Everything else capable of running a JIT enabled JVM was doing so.

In the medical imaging example, the C++/Assembly code was being used in two cases:

- SIMD (maybe with the upcoming SIMD support in .NET, this wouldn't be needed any longer)

- COM; many manufacturers provide only COM drivers for their devices, no way around it. If the protocol had been public, .NET networking code could have been used instead, as the devices used Ethernet.


>
>> The problem is that average coders don't learn to optimize code, and in the end most businesses will just shell out money for more hardware rather than for software development time.
>
> Yeah, it's all about the particular job and what the tradeoffs are there.  Most online apps don't need to scale to extremes, which is why they're mostly not written in C++.

Yes, agreed there.

However, there are many places where developers use C and C++ not because of speed, but because over the last few decades all the other programming languages with native-code compilers faded away.

With the current ahead-of-time native compilation renaissance and GPU support in other languages, the need for C and C++ in such use cases will decrease.

Still, there are quite a few places where C and C++ will keep mattering (e.g. embedded, OS drivers, HPC, AAA games, ...) in the overall computing landscape.

--
Paulo
December 14, 2014
I'm sorry to hear that.

So many things went wrong here. This is the way I see it.

Basically what you did was introduce a friend of yours to your
colleagues with good impressions on just about everything. This
is like introducing a friend to me and telling me he's a great
person, even though deep down inside you know your friend
doesn't have the best personality in the world.

What you could have done was be sincere and told your colleagues
that D is a great person overall, but he does have his drawbacks.
That way they know what to expect and are ready and prepared to
handle D.

Along with that, you could have inspired them with a different point
of view. You guys are working with JavaScript :/. Doing something
with D would have been an extraordinary achievement. Sticking with
JavaScript is like accomplishing a task with a common method, but
sometimes you have to take a leap of faith and accomplish that task
in a new way. This would have made you and your team unique in a way
and probably given you a good image.
December 14, 2014
On 12/14/2014 12:37 AM, Manu via Digitalmars-d wrote:
> There were a few contributing factors, but by far the most significant
> factor was the extremely poor Windows environment support and Visual
> Studio debugging experience.

This is valuable information, while it's fresh in your mind can you please submit a bugzilla issue for each point?


> They then made HUGE noises about the quality of documentation. The
> prevailing opinion was that the D docs, in the eyes of a
> not-a-D-expert, are basically unreadable to them. The formatting
> didn't help, there's a lot of noise and a lack of structure in the
> documentation's presentation that makes it hard to see the information
> through the layout and noise. As senior software engineers, they
> basically expected that they should be able to read and understand the
> docs, even if they don't really know the language, after all, "what is
> the point of documentation if not to teach the language..."
> I tend to agree, I find that I can learn most languages to a basic
> level by skimming the docs, but the D docs are an anomaly in this way;
> it seems you have to already know D to be able to understand it
> effectively. They didn't know javascript either, but skimming the
> node.js docs they got the job done in an hour or so, after having
> wasted *2 days* trying to force their way through the various
> frictions presented by their initial experience with D.

I understand what you're saying, but I find it hard to figure out just what to do to change it. Any specific suggestions?
December 14, 2014
On 2014-12-14 09:37, Manu via Digitalmars-d wrote:

> They immediately made comments about goto-definition not working, and
> auto-completion not working reliably.

I'm curious to know which tools they used to get autocompletion and go-to-definition to work with JavaScript.

-- 
/Jacob Carlborg
December 14, 2014
On Sunday, 14 December 2014 at 19:40:16 UTC, Walter Bright wrote:
> On 12/14/2014 12:37 AM, Manu via Digitalmars-d wrote:
>> There were a few contributing factors, but by far the most significant
>> factor was the extremely poor Windows environment support and Visual
>> Studio debugging experience.
>
> This is valuable information, while it's fresh in your mind can you please submit a bugzilla issue for each point?
>
>
>> They then made HUGE noises about the quality of documentation. The
>> prevailing opinion was that the D docs, in the eyes of a
>> not-a-D-expert, are basically unreadable to them. The formatting
>> didn't help, there's a lot of noise and a lack of structure in the
>> documentation's presentation that makes it hard to see the information
>> through the layout and noise. As senior software engineers, they
>> basically expected that they should be able to read and understand the
>> docs, even if they don't really know the language, after all, "what is
>> the point of documentation if not to teach the language..."
>> I tend to agree, I find that I can learn most languages to a basic
>> level by skimming the docs, but the D docs are an anomaly in this way;
>> it seems you have to already know D to be able to understand it
>> effectively. They didn't know javascript either, but skimming the
>> node.js docs they got the job done in an hour or so, after having
>> wasted *2 days* trying to force their way through the various
>> frictions presented by their initial experience with D.
>
> I understand what you're saying, but I find it hard to figure out just what to do to change it. Any specific suggestions?

One thing I ran into often when I was inexperienced with D:
the template constraints make some signatures extremely messy, and it takes a while to figure out which overload applies when you have e.g. 3 template functions of the same name in std.algorithm, all with cryptic signatures.

Example:

ptrdiff_t countUntil(alias pred = "a == b", R, Rs...)(R haystack, Rs needles) if (isForwardRange!R && Rs.length > 0 && isForwardRange!(Rs[0]) == isInputRange!(Rs[0]) && is(typeof(startsWith!pred(haystack, needles[0]))) && (Rs.length == 1 || is(typeof(countUntil!pred(haystack, needles[1..$])))));
ptrdiff_t countUntil(alias pred = "a == b", R, N)(R haystack, N needle) if (isInputRange!R && is(typeof(binaryFun!pred(haystack.front, needle)) : bool));

countUntil is trivial to use, but the docs make it seem complicated
and it takes a while to read them.
(This is not really a good example, as with countUntil it's not *that*
 bad, but I think it should be enough to show the problem.)

In this specific case, it would be useful if the constraint was somehow separated from the rest of the signature and less emphasized (CSS).

Also, in this example, the documentation text itself immediately goes into detail (e.g. mentioning startsWith!) instead of starting with a simple explanation of the concept.

I think this could be helped somewhat if the example came first.

This is one example of "too much noise".


Example of 'not teaching the language':

For the first few months using D, I had no idea what a D 'range' was (I knew Python), and it made the docs harder to understand.

I think most or all mentions of terms important in D, such as 'range', should link to a *simple* explanation of what a range/whatever is (maybe in Ali Cehreli's book?).
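For anyone reading this thread who is in the same boat: an input range is just any type exposing empty/front/popFront, checkable with std.range.primitives.isInputRange. A minimal sketch (the Span struct is a made-up example, not a library type):

```d
import std.range.primitives : isInputRange;
import std.algorithm : equal;

// Minimal input range producing the integers lo .. hi.
struct Span
{
    int lo, hi;
    @property bool empty() const { return lo >= hi; }
    @property int front() const { return lo; }
    void popFront() { ++lo; }
}

static assert(isInputRange!Span); // satisfies the input range interface

void main()
{
    // Usable with any range algorithm, e.g. std.algorithm.equal.
    assert(Span(0, 3).equal([0, 1, 2]));
}
```

Three members and you can feed the type to most of std.algorithm; that one-paragraph explanation is all the docs would need to link to.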


Also... some docs are just plain lazy ("does something similar but not the same as this C function we couldn't even be arsed to link to"), missing examples (or missing *useful* examples), etc.
- A lot of docs assume the reader knows some specific other language, not only C


I occasionally try to push minor doc fixes but currently I'm rather swamped, may do some work on that next summer.


December 14, 2014
On Sunday, 14 December 2014 at 20:44:17 UTC, Jacob Carlborg wrote:
> On 2014-12-14 09:37, Manu via Digitalmars-d wrote:
>
>> They immediately made comments about goto-definition not working, and
>> auto-completion not working reliably.
>
> I'm curious to know which tools they used to get autocompletion and go-to-definition to work with JavaScript.

It's WebStorm. There you get auto-completion, go-to-definition, refactoring, an insanely superior debugger, anything you can dream of. I'm not a JetBrains sales manager, I'm just doing Node.js coding at my work. :)