August 05, 2014
On Monday, 4 August 2014 at 16:58:12 UTC, Andrei Alexandrescu wrote:
> On 8/4/14, 12:47 AM, Andrea Fontana wrote:
>> In my BSON library I found it very useful to have some methods to know
>> whether a field exists or not, and to get a "defaulted" value. Something like:
>>
>> auto assume(T)(Value v, T defaultValue = T.init);
>
> Nice. Probably "get" would be better, to keep in line with the built-in hashtables.

I wrote "assume" just to use the proposed syntax :)
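
For illustration, a hashtable-style "get" with a default could be sketched like this (the Value type, its "in" support and the typed accessor are just assumptions, not an existing API):

T get(T)(Value v, string key, lazy T defaultValue = T.init)
{
    if (auto p = key in v)       // assumes Value supports `in` like an AA
        return (*p).get!T;       // assumes a typed accessor on Value
    return defaultValue;
}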

>> Another good method could be something like XPath, to get a deep value:
>>
>> Value v = value["/path/to/sub/object"];
>
> Cool. Is it unlikely that a key contains an actual slash? If so, would value["path"]["to"]["sub"]["object"] be more precise?

Keys with a slash (or a dot?) inside are not common at all; I've never seen one in JSON data.

In many languages there are libraries to bind JSON to structs or objects, so people usually don't use strange characters inside keys. If needed, you can still use the good old method to read a single field.

value["path"]["to"]["object"] was my first choice, but I didn't like it.

First: it creates a lot of temporary objects.

Second: a single string is easier to implement (also for assignment).

I gave value["path", "to", "index"] a try, but it's not comfortable if you need to generate your path from code.
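
For illustration, the multi-key form could be sketched roughly like this (the Value type here is made up purely to show the shape of the API):

struct Value
{
    Value[string] fields;

    // value["path", "to", "object"] walks nested objects in one call
    ref Value opIndex(string[] keys...)
    {
        auto current = &this;
        foreach (key; keys)
            current = &current.fields[key];   // throws RangeError if a key is missing
        return *current;
    }
}

Since the parameter is a typesafe variadic, a path assembled at runtime can also be passed as an array: root[segments].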


>> Moreover in my library I actually have three different methods to read a
>> value:
>>
>> T get(T)() // Exception if the value is not a T, is not valid, or doesn't exist
>> T to(T)()  // Try to convert the value to T using to!string. Exception if it
>> doesn't exist or is not valid
>>
>> BsonField!T as(T)(lazy T defaultValue = T.init)  // Always returns a value
>>
>> BsonField!T is an "alias this"-ed struct with two members: T value and
>> bool error(). T value is the aliased member, and error() tells you whether
>> value is defaulted (because of an error: the field doesn't exist or can't
>> be converted to T).
>>
>> So I can write something like this:
>>
>> int myvalue = json["/that/deep/property"].as!int;
>>
>> or
>>
>> auto myvalue = json["/that/deep/property"].as!int(10);
>>
>> if (myvalue.error) writeln("Property doesn't exist, I'm using the default
>> value");
>>
>> writeln("Property value: ", myvalue);
>>
>> I hope this can be useful...
>
> Sure is, thanks. Listen, would you want to volunteer a std.data.json proposal?
>

What does it mean? :)

>
> Andrei

August 05, 2014
On 04.08.2014 20:38, Jacob Carlborg wrote:
> On 2014-08-04 16:55, Dicebot wrote:
>
>> That is exactly the problem - if `structToJson` won't be provided,
>> complaints are inevitable; it is too basic a feature to wait for
>> std.serialization :(
>
> Hmm, yeah, that's a problem.

On the other hand, a simplistic solution will inevitably result in people needing more. And when, at some point, a serialization module lands in Phobos, there will be duplicated functionality in the library.

>> I am pretty sure that this is not the only optimized serialization
>> approach out there that does not fit into a content-insensitive
>> primitive-based traversal scheme. And we want Phobos stuff to be
>> blazingly fast, which can lead to a situation where the new data module
>> circumvents the std.serialization API to get more performance.
>
> I don't like the idea of having to reimplement serialization for each
> data type that can be generalized.
>

I think we could also simply keep the generic default recursive descent behavior, but allow serializers to customize the process using some kind of trait. This could even be added later in a backwards compatible fashion if necessary.
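
For illustration, such a trait could be sketched roughly like this (hasCustomSerialization and the sink methods are invented names, not an existing API):

enum hasCustomSerialization(Serializer, T) =
    __traits(compiles, Serializer.init.serializeCustom(T.init));

void serialize(Serializer, T)(ref Serializer sink, T value)
{
    static if (hasCustomSerialization!(Serializer, T))
        sink.serializeCustom(value);          // serializer-specific fast path
    else static if (is(T == struct))
        foreach (field; value.tupleof)        // generic recursive descent
            serialize(sink, field);
    else
        sink.putPrimitive(value);             // assumed primitive hook
}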

BTW, how is progress on Orange w.r.t. the conversion to a more template-based, allocation-less approach? Is a new std proposal within the next DMD release cycle realistic?

I quite like most of how vibe.data.serialization turned out, but it can't do any alias detection/deduplication (and I have no concrete plans to add support for that), which is why I currently wouldn't consider it as a potential Phobos candidate.
August 05, 2014
On Tuesday, 5 August 2014 at 09:54:42 UTC, Sönke Ludwig wrote:
> I think we could also simply keep the generic default recursive descent behavior, but allow serializers to customize the process using some kind of trait. This could even be added later in a backwards compatible fashion if necessary.

A simple option is to define the required serializer traits and make both the std.serialization default and any custom data-specific ones conform to them.
August 05, 2014
"Jacob Carlborg"  wrote in message news:kvuaxyxjwmpqrorlozrz@forum.dlang.org...

> > This is exactly what I need in most projects.  Basic types, arrays, AAs, and structs are usually enough.
>
> I was thinking more of only the types that cannot be broken down into smaller pieces, i.e. integer, floating point, bool and string. The serializer would break the other types down into smaller pieces.

I guess I meant types that have an obvious mapping to json types.

int/long -> json integer
bool -> json bool
string -> json string
float/real -> json float (close enough)
T[] -> json array
T[string] -> json object
struct -> json object

This is usually enough for config and data files.  Being able to do this is just awesome:

struct AppConfig
{
   string somePath;
   bool someOption;
   string[] someList;
   string[string] someMap;
}

void main()
{
   auto config = "config.json".readText().parseJSON().fromJson!AppConfig();
}
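
For what it's worth, such a fromJson could be sketched on top of std.json roughly like this (fromJson itself is hypothetical, and the exact enum member spellings differ between std.json versions):

import std.json;
import std.traits : isBoolean, isFloatingPoint, isIntegral, isSomeString;

T fromJson(T)(JSONValue v)
{
    static if (isBoolean!T)
        return v.type == JSON_TYPE.TRUE;   // JSONType.true_ in newer releases
    else static if (isIntegral!T)
        return cast(T) v.integer;
    else static if (isFloatingPoint!T)
        return cast(T) v.floating;
    else static if (isSomeString!T)
        return v.str;
    else static if (is(T == E[], E))
    {
        T result;
        foreach (elem; v.array)
            result ~= fromJson!E(elem);
        return result;
    }
    else static if (is(T == V[string], V))
    {
        T result;
        foreach (key, val; v.object)
            result[key] = fromJson!V(val);
        return result;
    }
    else static if (is(T == struct))
    {
        T result;
        foreach (i, ref field; result.tupleof)
            field = fromJson!(typeof(field))(v.object[__traits(identifier, T.tupleof[i])]);
        return result;
    }
    else
        static assert(false, "no obvious JSON mapping for " ~ T.stringof);
}

With something like that in place, the AppConfig line above works via UFCS: parseJSON(...).fromJson!AppConfig().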

Being able to serialize whole graphs into json is something I need much less often. 

August 05, 2014
On Tuesday, 5 August 2014 at 12:40:25 UTC, Daniel Murphy wrote:
> "Jacob Carlborg"  wrote in message news:kvuaxyxjwmpqrorlozrz@forum.dlang.org...
>
>> > This is exactly what I need in most projects.  Basic types, arrays, AAs, and structs are usually enough.
>>
>> I was thinking more of only the types that cannot be broken down into smaller pieces, i.e. integer, floating point, bool and string. The serializer would break the other types down into smaller pieces.
>
> I guess I meant types that have an obvious mapping to json types.
>
> int/long -> json integer
> bool -> json bool
> string -> json string
> float/real -> json float (close enough)
> T[] -> json array
> T[string] -> json object
> struct -> json object
>
> This is usually enough for config and data files.  Being able to do this is just awesome:
>
> struct AppConfig
> {
>    string somePath;
>    bool someOption;
>    string[] someList;
>    string[string] someMap;
> }
>
> void main()
> {
>    auto config = "config.json".readText().parseJSON().fromJson!AppConfig();
> }
>
> Being able to serialize whole graphs into json is something I need much less often.

If I'm not mistaken, JSON has just one numeric type: there's no difference between integers and floats, and no limits.

So probably the mapping is:

float/double/real/int/long => number
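
On the D side that means reading a number back has to pick the target type, roughly along these lines (a sketch only; the double parameter stands in for whatever the parser stored):

import std.traits : isFloatingPoint, isIntegral;

T numberAs(T)(double number)
{
    static if (isIntegral!T)
        return cast(T) number;   // note: integers above 2^53 lose precision
    else static if (isFloatingPoint!T)
        return cast(T) number;
    else
        static assert(false, T.stringof ~ " is not a numeric type");
}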



August 05, 2014
On 8/5/14, 2:08 AM, Andrea Fontana wrote:
>> Sure is, thanks. Listen, would you want to volunteer a std.data.json
>> proposal?
>>
>
> What does it mean? :)

On one side enters vibe.data.json with the deltas prompted by std.jgrandson plus your talent and determination, and on the other side comes std.data.json with code and documentation that passes the Phobos review process. -- Andrei

August 05, 2014
"Andrea Fontana"  wrote in message news:takluoqmlmmooxlovqya@forum.dlang.org...

> If I'm not mistaken, JSON has just one numeric type: there's no difference between integers and floats, and no limits.
>
> So probably the mapping is:
>
> float/double/real/int/long => number

Maybe, but std.json has three numeric types. 

August 05, 2014
On 2014-08-05 14:40, Daniel Murphy wrote:

> I guess I meant types that have an obvious mapping to json types.
>
> int/long -> json integer
> bool -> json bool
> string -> json string
> float/real -> json float (close enough)
> T[] -> json array
> T[string] -> json object
> struct -> json object
>
> This is usually enough for config and data files.  Being able to do this
> is just awesome:
>
> struct AppConfig
> {
>     string somePath;
>     bool someOption;
>     string[] someList;
>     string[string] someMap;
> }
>
> void main()
> {
>     auto config =
> "config.json".readText().parseJSON().fromJson!AppConfig();
> }

I'm not saying that it is a bad idea or that I don't want to be able to do this. I just prefer this to be handled by a generic serialization module, which can of course handle the simple cases, like the one above, as well.

-- 
/Jacob Carlborg
August 05, 2014
On 2014-08-05 11:54, Sönke Ludwig wrote:

> I think we could also simply keep the generic default recursive descent
> behavior, but allow serializers to customize the process using some kind
> of trait. This could even be added later in a backwards compatible
> fashion if necessary.

I have a very flexible trait-like system in place. It allows configuring the serializer based on the given archiver and user customizations, to avoid having the serializer do unnecessary work that the archiver cannot handle.

> BTW, how is the progress for Orange w.r.t. to the conversion to a more
> template+allocation-less approach

Slowly. I think the range support in the serializer is basically complete, but the deserializer isn't done yet. I would also like to provide at least one additional archiver type besides XML. BTW, std.xml doesn't make it any easier to rangify the serializer.

I've been focusing on D/Objective-C lately, which I think is in a more complete state than std.serialization. I would really like to get it done and create a pull request so I can get back to std.serialization. But I always get stuck after a merge with something breaking. With the summer and vacations I haven't been able to work that much on D at all.

> Is a new std proposal within the next DMD release cycle realistic?

Probably not.

> I quite like most of how vibe.data.serialization turned out, but it
> can't do any alias detection/deduplication (and I have no concrete plans
> to add support for that), which is why I currently wouldn't consider it
> as a potential Phobos candidate.

I'm quite satisfied with the feature support and flexibility of Orange/std.serialization. With the new trait-like system it will be even more flexible.

-- 
/Jacob Carlborg
August 05, 2014
On 03.08.2014 21:53, Andrei Alexandrescu wrote:
>
> What would be your estimated time of finishing?
>

My rough estimate would be that about two weeks of calendar time should suffice for a first candidate, since the functionality and the design are already mostly there. However, it seems that VariantN will need some work, too (currently, using opAdd results in an error for an Algebraic defined for JSON usage).
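
For reference, the kind of self-referential Algebraic involved looks roughly like this (an assumed definition, not std.jgrandson's actual one):

import std.variant;

// `This` is std.variant's placeholder for self-referential Algebraic types.
alias JSON = Algebraic!(typeof(null), bool, long, double, string,
                        This[], This[string]);

void example()
{
    auto a = JSON(1L);
    auto b = JSON(2L);
    // auto c = a + b;  // the opAdd/opBinary case that currently errors out
}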