February 15, 2018: ubyte[4] to int
Hi. Is there a convenient way to convert a ubyte[4] into a signed int? I'm having trouble handling the static arrays returned by std.bitmanip.nativeToLittleEndian. Is there some magic sauce to make the static arrays into input ranges or something? As a side note, I'm used to using D on Linux and DMD's error messages on Windows are comparably terrible. Thanks!
February 15, 2018: Re: ubyte[4] to int
Posted in reply to Kyle

On Thursday, 15 February 2018 at 16:51:05 UTC, Kyle wrote:
> Hi. Is there a convenient way to convert a ubyte[4] into a signed int? I'm having trouble handling the static arrays returned by std.bitmanip.nativeToLittleEndian. Is there some magic sauce to make the static arrays into input ranges or something? As a side note, I'm used to using D on Linux and DMD's error messages on Windows are comparably terrible. Thanks!
You mean you want to convert the bit pattern represented by the ubyte[4] to an int?

You want a reinterpret-style cast:

ubyte[4] foo = ...;
int baz = *cast(int*)&foo;
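The reinterpret cast above can be sketched as a complete round-trip program (a minimal sketch; the individual byte values in `foo` depend on the host's byte order, but the round trip holds either way):

```d
void main()
{
    int original = 0x01020304;

    // Reinterpret the int's storage as four bytes...
    ubyte[4] foo = *cast(ubyte[4]*)&original;

    // ...and reinterpret those bytes back as an int.
    int baz = *cast(int*)&foo;

    assert(baz == original); // round-trips on any endianness
}
```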
February 15, 2018: Re: ubyte[4] to int
Posted in reply to Nicholas Wilson

Nicholas Wilson wrote:
> On Thursday, 15 February 2018 at 16:51:05 UTC, Kyle wrote:
>> Hi. Is there a convenient way to convert a ubyte[4] into a signed int? I'm having trouble handling the static arrays returned by std.bitmanip.nativeToLittleEndian. Is there some magic sauce to make the static arrays into input ranges or something? As a side note, I'm used to using D on Linux and DMD's error messages on Windows are comparably terrible. Thanks!
>
> You mean you want to convert the bit pattern represented by the ubyte[4] to an int?
>
> You want a reinterpret-style cast:
>
> ubyte[4] foo = ...;
> int baz = *cast(int*)&foo;
Better to use `&foo[0]`; this way it will work with slices too.
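The `&foo[0]` tip can be sketched with a hypothetical helper (`toInt` is an invented name, not from the thread) that accepts both static arrays and slices; since the result depends on the host's byte order, the sketch only asserts that both call forms agree:

```d
import std.exception : enforce;

// Hypothetical helper: read an int from the first 4 bytes.
// &bytes[0] works whether the argument began life as a ubyte[4]
// (sliced at the call site) or as a dynamic array.
int toInt(const(ubyte)[] bytes)
{
    enforce(bytes.length >= 4, "need at least 4 bytes");
    return *cast(const(int)*)&bytes[0];
}

void main()
{
    ubyte[4] fixed = [1, 0, 0, 0];
    ubyte[] dynamic = [1, 0, 0, 0];
    assert(toInt(fixed[]) == toInt(dynamic));
}
```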
February 15, 2018: Re: ubyte[4] to int
Posted in reply to ketmar

On Thursday, 15 February 2018 at 17:25:15 UTC, ketmar wrote:
> Nicholas Wilson wrote:
>
>> On Thursday, 15 February 2018 at 16:51:05 UTC, Kyle wrote:
>>> Hi. Is there a convenient way to convert a ubyte[4] into a signed int? I'm having trouble handling the static arrays returned by std.bitmanip.nativeToLittleEndian. Is there some magic sauce to make the static arrays into input ranges or something? As a side note, I'm used to using D on Linux and DMD's error messages on Windows are comparably terrible. Thanks!
>>
>> You mean you want to convert the bit pattern represented by the ubyte[4] to an int?
>>
>> You want a reinterpret-style cast:
>>
>> ubyte[4] foo = ...;
>> int baz = *cast(int*)&foo;
>
> better to use `&foo[0]`, this way it will work with slices too.
You guys got me working, thanks!
February 15, 2018: Re: ubyte[4] to int
Posted in reply to Kyle

On Thursday, February 15, 2018 16:51:05 Kyle via Digitalmars-d-learn wrote:
> Hi. Is there a convenient way to convert a ubyte[4] into a signed int? I'm having trouble handling the static arrays returned by std.bitmanip.nativeToLittleEndian. Is there some magic sauce to make the static arrays into input ranges or something? As a side note, I'm used to using D on Linux and DMD's error messages on Windows are comparably terrible. Thanks!
What are you trying to do exactly? nativeToLittleEndian is going to convert an integral type such as an int to little endian (presumably for something like serialization). It's not going to convert to int. It converts _from_ int.
If you're trying to convert a ubyte[] to int, you'd use littleEndianToNative or bigEndianToNative, depending on where the data comes from. You pass it a static array of the size which matches the target type (so ubyte[4] for int). I don't remember whether slicing a dynamic array to pass it works or not (if it does, you have to slice it at the call site), but a cast to a static array would work if simply slicing it doesn't.
If you're trying to convert from int to ubyte[], then you'd use nativeToLittleEndian or nativeToBigEndian, depending on which endianness you need. They take an integral type and give you a static array of ubyte whose size matches the integral type.
Alternatively, if you're trying to deal with a range of ubytes, then read and peek can be used to get integral types from a range of ubytes, and write and append can be used to put them in a dynamic array or an output range of ubytes.
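The two directions described above can be sketched together (a sketch against std.bitmanip as documented; note that `Endian` lives in std.system):

```d
import std.bitmanip : littleEndianToNative, nativeToLittleEndian, peek;
import std.system : Endian;

void main()
{
    // int -> little-endian ubyte[4] -> int round trip.
    int original = -1234;
    ubyte[4] bytes = nativeToLittleEndian(original);
    assert(littleEndianToNative!int(bytes) == original);

    // peek reads an integral out of a range of ubytes without
    // consuming it; here the range is a slice of the static array.
    assert(bytes[].peek!(int, Endian.littleEndian)() == original);
}
```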
- Jonathan M Davis
February 15, 2018: Re: ubyte[4] to int
Posted in reply to Nicholas Wilson

On Thursday, February 15, 2018 17:21:22 Nicholas Wilson via Digitalmars-d-learn wrote:
> On Thursday, 15 February 2018 at 16:51:05 UTC, Kyle wrote:
> > Hi. Is there a convenient way to convert a ubyte[4] into a signed int? I'm having trouble handling the static arrays returned by std.bitmanip.nativeToLittleEndian. Is there some magic sauce to make the static arrays into input ranges or something? As a side note, I'm used to using D on Linux and DMD's error messages on Windows are comparably terrible. Thanks!
>
> You mean you want to convert the bit pattern represented by the ubyte[4] to an int?
>
> You want a reinterpret-style cast:
>
> ubyte[4] foo = ...;
> int baz = *cast(int*)&foo;
Yeah, though that loses all of the endianness benefits of std.bitmanip, and there's no reason why std.bitmanip couldn't be used to convert from ubyte[4] to int or vice versa. It's just a question of understanding what he's trying to do exactly, since it sounds like he's confused by the API.
- Jonathan M Davis
February 15, 2018: Re: ubyte[4] to int
Posted in reply to Jonathan M Davis

On Thursday, 15 February 2018 at 17:43:10 UTC, Jonathan M Davis wrote:
> On Thursday, February 15, 2018 16:51:05 Kyle via Digitalmars-d-learn wrote:
>> Hi. Is there a convenient way to convert a ubyte[4] into a signed int? I'm having trouble handling the static arrays returned by std.bitmanip.nativeToLittleEndian. Is there some magic sauce to make the static arrays into input ranges or something? As a side note, I'm used to using D on Linux and DMD's error messages on Windows are comparably terrible. Thanks!
>
> What are you trying to do exactly? nativeToLittleEndian is going to convert an integral type such as an int to little endian (presumably for something like serialization). It's not going to convert to int. It converts _from_ int.
>
> If you're trying to convert a ubyte[] to int, you'd use littleEndianToNative or bigEndianToNative, depending on where the data comes from. You pass it a static array of the size which matches the target type (so ubyte[4] for int). I don't remember whether slicing a dynamic array to pass it works or not (if it does, you have to slice it at the call site), but a cast to a static array would work if simply slicing it doesn't.
>
> If you're trying to convert from int to ubyte[], then you'd use nativeToLittleEndian or nativeToBigEndian, depending on which endianness you need. They take an integral type and give you a static array of ubyte whose size matches the integral type.
>
> Alternatively, if you're trying to deal with a range of ubytes, then read and peek can be used to get integral types from a range of ubytes, and write and append can be used to put them in a dynamic array or an output range of ubytes.
>
> - Jonathan M Davis
I want to be able to pass an int to a function, then in the function ensure that the int is little-endian (whether it starts out that way or needs to be converted) before additional stuff is done to the passed int. The end goal is compliance with a remote console protocol that expects a little-endian 32-bit signed integer as part of a packet.
What I'm trying to achieve is to ensure that an int is in little-endiannes
February 15, 2018: Re: ubyte[4] to int
Posted in reply to Kyle

"What I'm trying to achieve is to ensure that an int is in little-endiannes"

Ignore that last part, whoops.
February 15, 2018: Re: ubyte[4] to int
Posted in reply to Kyle

On Thursday, February 15, 2018 17:53:54 Kyle via Digitalmars-d-learn wrote:
> I want to be able to pass an int to a function, then in the function ensure that the int is little-endian (whether it starts out that way or needs to be converted) before additional stuff is done to the passed int. The end goal is compliance with a remote console protocol that expects a little-endian 32-bit signed integer as part of a packet.
Well, in the general case, you can't actually test whether an integer is little endian or not, though if you know that it's only allowed to be within a specific range of values, I suppose that you could infer which it is. And normally, whether a value is little endian or big endian is supposed to be well-defined by where it's used, but if you do have some rare case where that's not true, then it could get interesting. That's why UTF-16 files are supposed to have BOMs.
Either way, there's nothing in std.bitmanip geared towards guessing the endianness of an integral value. It's all based on the idea that an integral value is in the native endianness of the system and that the application knows whether a ubyte[n] contains bytes arranged as little endian or big endian.
- Jonathan M Davis
February 15, 2018: Re: ubyte[4] to int
Posted in reply to Kyle

On 02/15/2018 09:53 AM, Kyle wrote:
> I want to be able to pass an int to a function, then in the function ensure that the int is little-endian (whether it starts out that way or needs to be converted) before additional stuff is done to the passed int.

As has been said elsewhere, the value of an int is just that value. The value does not have endianness. Yes, different CPUs lay out values differently in memory, but that has nothing to do with your problem below.

> The end goal is compliance with a remote console protocol that expects a little-endian 32-bit signed integer as part of a packet.

So, they want the value to be represented as 4 bytes in little-endian ordering. I think all you need to do is to call nativeToLittleEndian:

https://dlang.org/phobos/std_bitmanip.html#nativeToLittleEndian

If your CPU is already little-endian, it's a no-op. If not, the bytes will be swapped accordingly:

import std.stdio;
import std.bitmanip;

void main() {
    auto i = 42;
    auto result = nativeToLittleEndian(i);

    foreach (b; result) {
        writefln("%02x", b);
    }

    // Note: i itself is not modified; nativeToLittleEndian returns
    // a new ubyte[4] holding i's bytes in little-endian order.
    writeln("Still 42: ", i);
}

Prints the following on my Intel CPU:

2a
00
00
00
Still 42: 42

Ali
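For the packet-building goal specifically, std.bitmanip's append can put the little-endian bytes straight into a growing buffer. A sketch (the idea of appending a lone int field is an invented example, not part of Kyle's actual protocol):

```d
import std.array : appender;
import std.bitmanip : append;
import std.system : Endian;

void main()
{
    auto packet = appender!(ubyte[])();

    // Append a 32-bit signed integer in little-endian order,
    // regardless of the host CPU's native byte order.
    packet.append!(int, Endian.littleEndian)(42);

    assert(packet[] == [ubyte(0x2a), ubyte(0), ubyte(0), ubyte(0)]);
}
```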
Copyright © 1999-2021 by the D Language Foundation