
digitalmars.D.learn - ubyte[4] to int

reply Kyle <kyle kyle.kyle> writes:
Hi. Is there a convenient way to convert a ubyte[4] into a signed 
int? I'm having trouble handling the static arrays returned by 
std.bitmanip.nativeToLittleEndian. Is there some magic sauce to 
make the static arrays into input ranges or something? As a side 
note, I'm used to using D on Linux and DMD's error messages on 
Windows are comparably terrible. Thanks!
Feb 15 2018
next sibling parent reply Nicholas Wilson <iamthewilsonator hotmail.com> writes:
On Thursday, 15 February 2018 at 16:51:05 UTC, Kyle wrote:
 Hi. Is there a convenient way to convert a ubyte[4] into a 
 signed int? I'm having trouble handling the static arrays 
 returned by std.bitmanip.nativeToLittleEndian. Is there some 
 magic sauce to make the static arrays into input ranges or 
 something? As a side note, I'm used to using D on Linux and 
 DMD's error messages on Windows are comparably terrible. Thanks!
You mean you want to convert the bit pattern represented by the ubyte[4] to an int? You want a reinterpret-style cast:

ubyte[4] foo = ...;
int baz = *cast(int*)&foo;
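A runnable sketch of that cast (the byte values here are just illustrative):

```d
import std.stdio;

void main()
{
    // Four bytes that spell the value 1 on a little-endian machine.
    ubyte[4] foo = [0x01, 0x00, 0x00, 0x00];

    // Reinterpret the same four bytes as a signed int.
    int baz = *cast(int*)&foo;

    writeln(baz);
}
```

Note that the result depends on the host's byte order: on little-endian hardware this prints 1, on big-endian it would print 0x01000000.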
Feb 15 2018
next sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
Nicholas Wilson wrote:

 On Thursday, 15 February 2018 at 16:51:05 UTC, Kyle wrote:
 Hi. Is there a convenient way to convert a ubyte[4] into a signed int? 
 I'm having trouble handling the static arrays returned by 
 std.bitmanip.nativeToLittleEndian. Is there some magic sauce to make the 
 static arrays into input ranges or something? As a side note, I'm used 
 to using D on Linux and DMD's error messages on Windows are comparably 
 terrible. Thanks!
You mean you want to convert the bit pattern represented by the ubyte[4] to an int? You want a reinterpret-style cast:

ubyte[4] foo = ...;
int baz = *cast(int*)&foo;
better to use `&foo[0]`, this way it will work with slices too.
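E.g. (a sketch, assuming the slice holds at least four bytes):

```d
void main()
{
    ubyte[] bytes = [0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00];

    // &bytes[0] is the address of the first element, so it works for
    // slices too; &bytes would be the address of the slice struct itself.
    int first = *cast(int*)&bytes[0];
    int second = *cast(int*)&bytes[4];
}
```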
Feb 15 2018
parent Kyle <kyle kyle.kyle> writes:
On Thursday, 15 February 2018 at 17:25:15 UTC, ketmar wrote:
 Nicholas Wilson wrote:

 On Thursday, 15 February 2018 at 16:51:05 UTC, Kyle wrote:
 Hi. Is there a convenient way to convert a ubyte[4] into a 
 signed int? I'm having trouble handling the static arrays 
 returned by std.bitmanip.nativeToLittleEndian. Is there some 
 magic sauce to make the static arrays into input ranges or 
 something? As a side note, I'm used to using D on Linux and 
 DMD's error messages on Windows are comparably terrible. 
 Thanks!
You mean you want to convert the bit pattern represented by the ubyte[4] to an int? You want a reinterpret-style cast:

ubyte[4] foo = ...;
int baz = *cast(int*)&foo;
better to use `&foo[0]`, this way it will work with slices too.
You guys got me working, thanks!
Feb 15 2018
prev sibling parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, February 15, 2018 17:21:22 Nicholas Wilson via Digitalmars-d-learn wrote:
 On Thursday, 15 February 2018 at 16:51:05 UTC, Kyle wrote:
 Hi. Is there a convenient way to convert a ubyte[4] into a
 signed int? I'm having trouble handling the static arrays
 returned by std.bitmanip.nativeToLittleEndian. Is there some
 magic sauce to make the static arrays into input ranges or
 something? As a side note, I'm used to using D on Linux and
 DMD's error messages on Windows are comparably terrible. Thanks!
You mean you want to convert the bit pattern represented by the ubyte[4] to an int? You want a reinterpret-style cast:

ubyte[4] foo = ...;
int baz = *cast(int*)&foo;
Yeah, though that loses all of the endianness benefits of std.bitmanip, and there's no reason why std.bitmanip couldn't be used to convert from ubyte[4] to int or vice versa. It's just a question of understanding what he's trying to do exactly, since it sounds like he's confused by the API.

- Jonathan M Davis
Feb 15 2018
prev sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, February 15, 2018 16:51:05 Kyle via Digitalmars-d-learn wrote:
 Hi. Is there a convenient way to convert a ubyte[4] into a signed
 int? I'm having trouble handling the static arrays returned by
 std.bitmanip.nativeToLittleEndian. Is there some magic sauce to
 make the static arrays into input ranges or something? As a side
 note, I'm used to using D on Linux and DMD's error messages on
 Windows are comparably terrible. Thanks!
What are you trying to do exactly? nativeToLittleEndian is going to convert an integral type such as an int to little endian (presumably for something like serialization). It's not going to convert to int. It converts _from_ int.

If you're trying to convert a ubyte[] to int, you'd use littleEndianToNative or bigEndianToNative, depending on where the data comes from. You pass it a static array of the size which matches the target type (so ubyte[4] for int). I don't remember if slicing a dynamic array to pass it works or not (if it does, you have to slice it at the call site), but a cast to a static array would work if simply slicing it doesn't.

If you're trying to convert from int to ubyte[], then you'd use nativeToLittleEndian or nativeToBigEndian, depending on which endianness you need. They take an integral type and give you a static array of ubyte whose size matches the integral type.

Alternatively, if you're trying to deal with a range of ubytes, then read and peek can be used to get integral types from a range of ubytes, and write and append can be used to put them in a dynamic array or an output range of ubytes.

- Jonathan M Davis
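A sketch of that round trip (42 is just an arbitrary value):

```d
import std.bitmanip : littleEndianToNative, nativeToLittleEndian;

void main()
{
    // int -> ubyte[4], laid out in little-endian order.
    ubyte[4] bytes = nativeToLittleEndian(42);

    // ubyte[4] -> int, interpreting the bytes as little-endian.
    int back = littleEndianToNative!int(bytes);
    assert(back == 42);

    // With a dynamic array, copy a 4-byte slice into a static array first.
    ubyte[] dyn = bytes[].dup;
    ubyte[4] tmp = dyn[0 .. 4];
    assert(littleEndianToNative!int(tmp) == 42);
}
```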
Feb 15 2018
parent reply Kyle <kyle kyle.kyle> writes:
On Thursday, 15 February 2018 at 17:43:10 UTC, Jonathan M Davis 
wrote:
 On Thursday, February 15, 2018 16:51:05 Kyle via 
 Digitalmars-d-learn wrote:
 Hi. Is there a convenient way to convert a ubyte[4] into a 
 signed int? I'm having trouble handling the static arrays 
 returned by std.bitmanip.nativeToLittleEndian. Is there some 
 magic sauce to make the static arrays into input ranges or 
 something? As a side note, I'm used to using D on Linux and 
 DMD's error messages on Windows are comparably terrible. 
 Thanks!
What are you trying to do exactly? nativeToLittleEndian is going to convert an integral type such as an int to little endian (presumably for something like serialization). It's not going to convert to int. It converts _from_ int.

If you're trying to convert a ubyte[] to int, you'd use littleEndianToNative or bigEndianToNative, depending on where the data comes from. You pass it a static array of the size which matches the target type (so ubyte[4] for int). I don't remember if slicing a dynamic array to pass it works or not (if it does, you have to slice it at the call site), but a cast to a static array would work if simply slicing it doesn't.

If you're trying to convert from int to ubyte[], then you'd use nativeToLittleEndian or nativeToBigEndian, depending on which endianness you need. They take an integral type and give you a static array of ubyte whose size matches the integral type.

Alternatively, if you're trying to deal with a range of ubytes, then read and peek can be used to get integral types from a range of ubytes, and write and append can be used to put them in a dynamic array or an output range of ubytes.

- Jonathan M Davis
I want to be able to pass an int to a function, then in the function ensure that the int is little-endian (whether it starts out that way or needs to be converted) before additional stuff is done to the passed int. The end goal is compliance with a remote console protocol that expects a little-endian 32-bit signed integer as part of a packet. What I'm trying to achieve is to ensure that an int is in little-endiannes
Feb 15 2018
next sibling parent Kyle <kyle kyle.kyle> writes:
"What I'm trying to achieve is to ensure that an int is in
little-endiannes"

Ignore that last part, whoops.
Feb 15 2018
prev sibling next sibling parent reply Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, February 15, 2018 17:53:54 Kyle via Digitalmars-d-learn wrote:
 I want to be able to pass an int to a function, then in the
 function ensure that the int is little-endian (whether it starts
 out that way or needs to be converted) before additional stuff is
 done to the passed int. The end goal is compliance with a remote
 console protocol that expects a little-endian 32-bit signed
 integer as part of a packet.
Well, in the general case, you can't actually test whether an integer is little endian or not, though if you know that it's only allowed to be within a specific range of values, I suppose that you could infer which it is. And normally, whether a value is little endian or big endian is supposed to be well-defined by where it's used, but if you do have some rare case where that's not true, then it could get interesting. That's why UTF-16 files are supposed to have BOMs.

Either way, there's nothing in std.bitmanip geared towards guessing the endianness of an integral value. It's all based on the idea that an integral value is in the native endianness of the system and that the application knows whether a ubyte[n] contains bytes arranged as little endian or big endian.

- Jonathan M Davis
Feb 15 2018
parent reply Kyle <kyle kyle.kyle> writes:
On Thursday, 15 February 2018 at 18:30:57 UTC, Jonathan M Davis 
wrote:
 On Thursday, February 15, 2018 17:53:54 Kyle via 
 Digitalmars-d-learn wrote:
 I want to be able to pass an int to a function, then in the 
 function ensure that the int is little-endian (whether it 
 starts out that way or needs to be converted) before 
 additional stuff is done to the passed int. The end goal is 
 compliance with a remote console protocol that expects a 
 little-endian 32-bit signed integer as part of a packet.
Well, in the general case, you can't actually test whether an integer is little endian or not, though if you know that it's only allowed to be within a specific range of values, I suppose that you could infer which it is. And normally, whether a value is little endian or big endian is supposed to be well-defined by where it's used, but if you do have some rare case where that's not true, then it could get interesting. That's why UTF-16 files are supposed to have BOMs.

Either way, there's nothing in std.bitmanip geared towards guessing the endianness of an integral value. It's all based on the idea that an integral value is in the native endianness of the system and that the application knows whether a ubyte[n] contains bytes arranged as little endian or big endian.

- Jonathan M Davis
I was thinking that the client could determine its own endianness and either convert the passed int if big-endian, or leave it alone if little-endian, then send it to the server as little-endian at that point. Regardless, I just came across a vibe-packaged RCON library by Benjamin Schaaf that may work for me, so that's the new plan, for now.

All you guys helping people on the forums daily are awesome; it's still amazing to me that I can ask questions here and routinely get answers directly from core language contributors and D book authors. Thanks for what you do.
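i.e. something along these lines (a sketch; std.system exposes the host's byte order as a constant):

```d
import std.stdio;
import std.system : endian, Endian;

void main()
{
    // The machine's own byte order is known at compile time.
    final switch (endian)
    {
        case Endian.littleEndian:
            writeln("host is little-endian; bytes can go out as-is");
            break;
        case Endian.bigEndian:
            writeln("host is big-endian; bytes need swapping first");
            break;
    }
}
```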
Feb 15 2018
parent Jonathan M Davis <newsgroup.d jmdavisprog.com> writes:
On Thursday, February 15, 2018 18:47:16 Kyle via Digitalmars-d-learn wrote:
 I was thinking that the client could determine its own endianness
 and either convert the passed int to the other if big, or leave
 it alone if little, then send it to the server as little-endian
 at that point.
nativeToBigEndian and nativeToLittleEndian convert integral values to the target endianness, taking the native endianness into account. So, if what you want to do is take an int and convert it to ubyte[4], then both of those functions will do that for you. It's just a question of what the target endianness is.

Either way, the endianness of the machine itself will be properly taken into account when that conversion is done, and you don't have to worry about it.

- Jonathan M Davis
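For instance, to get a little-endian int into a packet buffer (a sketch; the buffer size and offset 0 are just for illustration):

```d
import std.bitmanip : nativeToLittleEndian, write;
import std.system : Endian;

void main()
{
    // Whole-value conversion: int -> ubyte[4] in little-endian order,
    // regardless of the host's own byte order.
    ubyte[4] bytes = nativeToLittleEndian(42);
    assert(bytes == [0x2a, 0x00, 0x00, 0x00]);

    // Or write directly into a packet buffer at a given offset.
    ubyte[] packet = new ubyte[8];
    packet.write!(int, Endian.littleEndian)(42, 0);
    assert(packet[0 .. 4] == [0x2a, 0x00, 0x00, 0x00]);
}
```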
Feb 15 2018
prev sibling parent Ali Çehreli <acehreli yahoo.com> writes:
On 02/15/2018 09:53 AM, Kyle wrote:

 I want to be able to pass an int to a function, then in the function
 ensure that the int is little-endian (whether it starts out that way or
 needs to be converted) before additional stuff is done to the passed
 int.
As has been said elsewhere, the value of an int is just that value. The value does not have endianness. Yes, different CPUs lay out values differently in memory, but that has nothing to do with your problem below.
 The end goal is compliance with a remote console protocol that
 expects a little-endian 32-bit signed integer as part of a packet.
So, they want the value to be represented as 4 bytes in little endian ordering. I think all you need to do is to call nativeToLittleEndian:

https://dlang.org/phobos/std_bitmanip.html#nativeToLittleEndian

If your CPU is already little-endian, it's a no-op. If not, the bytes would be swapped accordingly:

import std.stdio;
import std.bitmanip;

void main() {
    auto i = 42;
    auto result = nativeToLittleEndian(i);
    foreach (b; result) {
        writefln("%02x", b);
    }

    // Note: The bytes of i may have been swapped
    writeln("May not be 42, and that's ok: ", i);
}

Prints the following on my Intel CPU:

2a
00
00
00
May not be 42, and that's ok: 42

Ali
Feb 15 2018