August 25, 2015
On 08/25/2015 09:03 AM, Sönke Ludwig wrote:
> The performance benefit comes from the fact that almost all of JSON is a subset of ASCII, so that lexing the input will implicitly validate it as correct UTF. The only places where actual UTF sequences can occur are in string literals, outside of escape sequences. Depending on the type of document, that can result in far fewer conditionals compared to a full validation of the input.

I see, then we should indeed exploit this fact and offer lexing of ubyte[]-ish ranges.
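Something along these lines, just as a sketch (the function name is made up, and it only covers the UTF side, not escapes or structural validation):

import std.utf : decode, UTFException;

// Validate UTF-8 lazily: everything outside string literals must be ASCII
// anyway, so only non-ASCII bytes inside strings need real decoding.
bool validateJsonUtf8(const(ubyte)[] input)
{
    size_t i = 0;
    while (i < input.length)
    {
        if (input[i] == '"') // string literal
        {
            ++i;
            while (i < input.length && input[i] != '"')
            {
                if (input[i] == '\\') { i += 2; continue; } // skip escape pair
                if (input[i] < 0x80)  { ++i;    continue; } // ASCII fast path
                auto s = cast(const(char)[]) input[i .. $];
                size_t off = 0;
                try decode(s, off);                         // real UTF-8 check
                catch (UTFException) return false;
                i += off;
            }
            ++i; // closing quote
        }
        else
        {
            if (input[i] >= 0x80) return false; // non-ASCII outside strings
            ++i;
        }
    }
    return true;
}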
August 25, 2015
Will try to convert a piece of code I wrote a few days ago: https://github.com/MartinNowak/rabbitmq-munin/blob/48c3e7451dec0dcb2b6dccbb9b4230b224e2e647/src/app.d Right now, working with JSON for trivial stuff is a pain.
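For reference, even pulling a couple of values out with today's std.json looks roughly like this (the data is made up, loosely what the munin plugin reads from the RabbitMQ management API):

import std.json;
import std.stdio : writefln;

void main()
{
    auto j = parseJSON(`[{"name":"tasks","messages":42}]`);
    foreach (q; j.array)                    // every step goes through JSONValue
    {
        auto name  = q["name"].str;         // explicit accessor per type
        auto count = q["messages"].integer;
        writefln("%s.value %s", name, count);
    }
}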
August 25, 2015
On Tuesday, 25 August 2015 at 06:56:23 UTC, Sönke Ludwig wrote:
> If I have a string variable and I want to store the upper case version of another string, the direct mental translation is "dst = toUpper(src);" - and not "dst = toUpper(src).array;".

One can also say the problem is that you have a string variable.
August 25, 2015
On 08/25/2015 08:18 AM, Martin Nowak wrote:
> On 08/18/2015 12:21 AM, Andrei Alexandrescu wrote:
>> * All new stuff should go in std.experimental. I assume "stdx" would
>> change to that, should this work be merged.
>
> Though stdx (or better std.x) would have been a prettier and more
> exciting name for std.experimental to begin with.
>

The great thing about the experimental package is that we are actually allowed to rename it. :-)
August 25, 2015
On 8/25/15 11:02 AM, Timon Gehr wrote:
> On 08/25/2015 08:18 AM, Martin Nowak wrote:
>> On 08/18/2015 12:21 AM, Andrei Alexandrescu wrote:
>>> * All new stuff should go in std.experimental. I assume "stdx" would
>>> change to that, should this work be merged.
>>
>> Though stdx (or better std.x) would have been a prettier and more
>> exciting name for std.experimental to begin with.
>>
>
> The great thing about the experimental package is that we are actually
> allowed to rename it. :-)

I strongly oppose renaming it. I don't want Phobos to fall into the trap of javax, which was supposed to be "experimental" but then became unmovable.

std.experimental is much more obvious that you shouldn't expect things to live there forever.

-Steve
August 25, 2015
On 25.08.2015 at 14:14, Sebastiaan Koppe wrote:
> On Tuesday, 25 August 2015 at 06:56:23 UTC, Sönke Ludwig wrote:
>> If I have a string variable and I want to store the upper case version
>> of another string, the direct mental translation is "dst =
>> toUpper(src);" - and not "dst = toUpper(src).array;".
>
> One can also say the problem is that you have a string variable.

But ranges are not always the right solution:

- For fields or setter properties, the exact type of the range has to be fixed, which is generally impractical (see the sketch below)
- If the underlying data of a range lives on the stack or in other transient storage, the range cannot safely be stored away (e.g. on the heap) for later use
- If the range is only an input range, it must be copied to an array anyway if it's going to be read multiple times
- Ranges cannot be immutable (no safe slicing or passing between threads)
- If template land has to be left for some reason, ranges have trouble following (although wrapper classes are available)
- Most existing APIs are string based
- Re-evaluating a computed range each time a variable is read is usually wasteful

There are probably a bunch of other problems that simply make ranges not the best answer in every situation.
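To illustrate the first two points with a contrived sketch (the names don't matter):

import std.array : array;
import std.conv : to;
import std.uni : asUpperCase;

struct Message
{
    // A field needs a concrete type; "any range of char" won't do, and the
    // concrete range type is an implementation detail I don't want to expose.
    string subject;
}

void main()
{
    Message m;
    auto upper = "hello".asUpperCase;  // lazy range of dchar, not a string
    // m.subject = upper;              // error: cannot convert the range to string
    m.subject = upper.array.to!string; // has to be materialized (and transcoded)
}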
September 24, 2015
So, what is the current status of std.data.json? This topic is almost two months old; what is the result of the "two week process"? The wiki page says nothing except "ready for comments".

September 25, 2015
On Thursday, 24 September 2015 at 20:44:57 UTC, tired_eyes wrote:
> So, what is the current status of std.data.json? This topic is almost two months old; what is the result of the "two week process"? The wiki page says nothing except "ready for comments".

I probably should have posted here. Soenke is working on all the comments as far as I know. It'll come back.

Atila
September 27, 2015
On Mon, 03 Aug 2015 12:11:14 +0300, Dmitry Olshansky
<dmitry.olsh@gmail.com> wrote:

> [...]
>
> Now, back to our land, let's look at, say, RapidJSON.
> 
> It MAY seem to handle big integers: https://github.com/miloyip/rapidjson/blob/master/include/rapidjson/internal/biginteger.h
> 
> But it's used only to parse doubles: https://github.com/miloyip/rapidjson/pull/137
> 
> Anyhow the API says it all - only integers up to 64bit and doubles:
> 
> http://rapidjson.org/md_doc_sax.html#Handler
> 
> Pretty much what I expect by default.
> And plz-plz don't hardcode BigInt in the JSON parser; it's slow, plus it
> causes epic code bloat, as Don already pointed out.

I would take RapidJSON with a grain of salt; its main goal is
to be the fastest JSON parser. Nothing wrong with that, but
BigInt and fast don't naturally match, and the C standard
library also doesn't come with a BigInt type that could
conveniently be plugged in.
Please compare again with JSON parsers in languages that
provide BigInts, e.g. Ruby:
http://ruby-doc.org/stdlib-1.9.3/libdoc/json/rdoc/JSON/Ext/Generator/GeneratorMethods/Bignum.html
Optional is ok, but no support at all would be so '90s.

My impression is that the standard wants to allow JSON being
used in environments that cannot provide BigInt support, but a
modern language for PCs with a BigInt module should totally
support reading long integers and be able to do proper
rounding of double values. I thought about reading two
BigInts: one for the significand and one for the
base-10 exponent, so you don't need a BigFloat but still have
the full accuracy of the textual string as x*10^y.
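Roughly like this, as a sketch (names made up; I use a plain long for the
exponent here to keep it short):

import std.bigint : BigInt;
import std.conv : to;
import std.string : indexOf;

struct ExactNumber
{
    BigInt significand; // x
    long exponent;      // y
}

// Turn a JSON number literal into x * 10^y without any rounding.
ExactNumber parseExact(string num)
{
    long exp = 0;
    auto ePos = num.indexOf('e');
    if (ePos < 0) ePos = num.indexOf('E');
    if (ePos >= 0)
    {
        auto es = num[ePos + 1 .. $];
        if (es.length && es[0] == '+') es = es[1 .. $];
        exp = to!long(es);
        num = num[0 .. ePos];
    }
    auto dotPos = num.indexOf('.');
    if (dotPos >= 0)
    {
        exp -= cast(long)(num.length - dotPos - 1); // shift the point out
        num  = num[0 .. dotPos] ~ num[dotPos + 1 .. $];
    }
    return ExactNumber(BigInt(num), exp);
}

// parseExact("-123.456e78") yields significand -123456 and exponent 75.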

-- 
Marco

September 27, 2015
On 27-Sep-2015 20:43, Marco Leise wrote:
> On Mon, 03 Aug 2015 12:11:14 +0300, Dmitry Olshansky
> <dmitry.olsh@gmail.com> wrote:
>
>> [...]
>>
>> Now, back to our land, let's look at, say, RapidJSON.
>>
>> It MAY seem to handle big integers:
>> https://github.com/miloyip/rapidjson/blob/master/include/rapidjson/internal/biginteger.h
>>
>> But it's used only to parse doubles:
>> https://github.com/miloyip/rapidjson/pull/137
>>
>> Anyhow the API says it all - only integers up to 64bit and doubles:
>>
>> http://rapidjson.org/md_doc_sax.html#Handler
>>
>> Pretty much what I expect by default.
>> And plz-plz don't hardcode BigInt in the JSON parser; it's slow, plus it
>> causes epic code bloat, as Don already pointed out.
>
> I would take RapidJSON with a grain of salt; its main goal is
> to be the fastest JSON parser. Nothing wrong with that, but
> BigInt and fast don't naturally match, and the C standard
> library also doesn't come with a BigInt type that could
> conveniently be plugged in.

Yes, yet support should be optional.

> Please compare again with JSON parsers in languages that
> provide BigInts, e.g. Ruby:
> http://ruby-doc.org/stdlib-1.9.3/libdoc/json/rdoc/JSON/Ext/Generator/GeneratorMethods/Bignum.html
> Optional is ok, but no support at all would be so '90s.

Agreed. Still, keep in mind that the whole reason Ruby supports it is that its "integer" type is multi-precision by default. So if your native integer type is multi-precision, then indeed why add a special case for fixnums?

> My impression is that the standard wants to allow JSON being
> used in environments that cannot provide BigInt support, but a
> modern language for PCs with a BigInt module should totally
> support reading long integers and be able to do proper
> rounding of double values. I thought about reading two
> BigInts: one for the significand and one for the
> base-10 exponent, so you don't need a BigFloat but still have
> the full accuracy of the textual string as x*10^y.
>

All of that is sensible ... in the slow code path. The common path must be simple and lean; bigints are certainly the exception rather than the rule. Therefore support for BigInt should not come at the expense of other use cases. Also, pluggability should allow me to e.g. use my own "big" decimal floating point.
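E.g. something in this direction, purely hypothetical (none of these names exist in any proposal):

import std.bigint : BigInt;

// A user-supplied policy decides what happens to numbers that don't fit
// into long/double; the default pays nothing for it.
struct BigIntNumbers
{
    static BigInt onBigNumber(const(char)[] token) { return BigInt(token); }
}

struct JsonParser(NumberPolicy = void)
{
    void handleNumber(const(char)[] token)
    {
        // ...fast long/double path elided...
        static if (!is(NumberPolicy == void))
            auto value = NumberPolicy.onBigNumber(token); // opt-in slow path
    }
}

// JsonParser!BigIntNumbers when exactness matters, plain JsonParser!() otherwise.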


-- 
Dmitry Olshansky