
digitalmars.D.learn - extern(C) enum

reply bitwise <bitwise.pvt gmail.com> writes:
I translated the headers for FreeType2 to D, and in many cases, 
enums are used as struct members.

If I declare an extern(C) enum in D, is it guaranteed to have the 
same underlying type and size as it would for a C compiler on the 
same platform?
Sep 14
next sibling parent reply rikki cattermole <rikki cattermole.co.nz> writes:
On 15/09/2017 5:15 AM, bitwise wrote:
 I translated the headers for FreeType2 to D, and in many cases, enums 
 are used as struct members.
 
 If I declare an extern(C) enum in D, is it guaranteed to have the same 
 underlying type and size as it would for a C compiler on the same platform?
No need for extern(C). Be as specific as you need, but most likely you won't need to (e.g. the first member is automatically 0).

enum Foo : int {
    Start = 0,
    StuffHere,
    End
}
Sep 14
parent bitwise <bitwise.pvt gmail.com> writes:
On Friday, 15 September 2017 at 06:57:31 UTC, rikki cattermole 
wrote:
 On 15/09/2017 5:15 AM, bitwise wrote:
 I translated the headers for FreeType2 to D, and in many 
 cases, enums are used as struct members.
 
 If I declare an extern(C) enum in D, is it guaranteed to have 
 the same underlying type and size as it would for a C compiler 
 on the same platform?
No need for extern(C). Be as specific as you need, but most likely you won't need to (e.g. the first member is automatically 0).

enum Foo : int {
    Start = 0,
    StuffHere,
    End
}
This is for D/C interop though.

enum E { A, B, C }

struct S {
    E e;
}

So based on the underlying type chosen by each compiler, the size of struct S could change. I can't strongly type the D enums to match, because I don't know what size the C compiler will make 'E', unless D somehow guarantees the same enum-sizing as the C compiler would.
Sep 15
prev sibling parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Friday, September 15, 2017 04:15:57 bitwise via Digitalmars-d-learn 
wrote:
 I translated the headers for FreeType2 to D, and in many cases,
 enums are used as struct members.

 If I declare an extern(C) enum in D, is it guaranteed to have the
 same underlying type and size as it would for a C compiler on the
 same platform?
extern(C) should have no effect on enums. It's for function linkage, and enums don't even have an address, so they don't actually end up in the program as a symbol. And since C's int and D's int are the same on all platforms that D supports (we'd have c_int otherwise, like we have c_long), any enum with a base type of int (which is the default) will match what's in C. - Jonathan M Davis
Sep 15
parent reply bitwise <bitwise.pvt gmail.com> writes:
On Friday, 15 September 2017 at 07:24:34 UTC, Jonathan M Davis 
wrote:
 On Friday, September 15, 2017 04:15:57 bitwise via 
 Digitalmars-d-learn wrote:
 I translated the headers for FreeType2 to D, and in many 
 cases, enums are used as struct members.

 If I declare an extern(C) enum in D, is it guaranteed to have 
 the same underlying type and size as it would for a C compiler 
 on the same platform?
extern(C) should have no effect on enums. It's for function linkage, and enums don't even have an address, so they don't actually end up in the program as a symbol. And since C's int and D's int are the same on all platforms that D supports (we'd have c_int otherwise, like we have c_long), any enum with a base type of int (which is the default) will match what's in C. - Jonathan M Davis
I'm confused...is it only C++ that has implementation defined enum size? I thought that was C as well.
Sep 15
next sibling parent reply Jonathan M Davis via Digitalmars-d-learn writes:
On Friday, September 15, 2017 15:35:48 bitwise via Digitalmars-d-learn 
wrote:
 On Friday, 15 September 2017 at 07:24:34 UTC, Jonathan M Davis

 wrote:
 On Friday, September 15, 2017 04:15:57 bitwise via

 Digitalmars-d-learn wrote:
 I translated the headers for FreeType2 to D, and in many
 cases, enums are used as struct members.

 If I declare an extern(C) enum in D, is it guaranteed to have
 the same underlying type and size as it would for a C compiler
 on the same platform?
extern(C) should have no effect on enums. It's for function linkage, and enums don't even have an address, so they don't actually end up in the program as a symbol. And since C's int and D's int are the same on all platforms that D supports (we'd have c_int otherwise, like we have c_long), any enum with a base type of int (which is the default) will match what's in C. - Jonathan M Davis
I'm confused...is it only C++ that has implementation defined enum size? I thought that was C as well.
It is my understanding that for both C and C++, an enum is always an int (unless you're talking about enum classes in C++). The size of an int can change based on your architecture, but AFAIK, all of the architectures supported by D guarantee it to be 32 bits in C/C++ (certainly, all of the architectures supported by dmd do), and druntime would have serious issues if it were otherwise, as it assumes all over the place that D's int is the same as C/C++'s int.

It's certainly possible that my understanding of C/C++ enums is wrong, but if it is, you'd basically be screwed when dealing with any C functions that take an enum in any case where an enum wasn't 32 bits - especially if the C/C++ compiler could choose whatever size it wanted that fit the values.

- Jonathan M Davis
Sep 15
parent reply jmh530 <john.michael.hall gmail.com> writes:
On Friday, 15 September 2017 at 18:20:06 UTC, Jonathan M Davis 
wrote:
 It is my understanding that for both C and C++, an enum is 
 always an int (unless you're talking about enum classes in 
 C++). The size of an int can change based on your architecture, 
 but AFAIK, all of the architectures supported by D guarantee it 
 it be 32 bits in C/C++ (certainly, all of the architectures 
 supported by dmd do), and druntime would have serious issues if 
 it were otherwise, as it assumes all of the place that D's int 
 is the same as C/C++'s int.

 It's certainly possible that my understanding of C/C++ enums is 
 wrong, but if it is, you'd basically be screwed when dealing 
 with any C functions that take an enum in any case that an enum 
 wasn't 32 bits - especially if the C/C++ compiler could choose 
 whatever size it wanted that fit the values.

 - Jonathan M Davis
Not to hijack the thread, but is there anything about enums that can't be done with a struct? The code below is just a simple example that I'm sure I could complicate unnecessarily to re-create much of the behavior of current enums with the syntax of std.tuple.

I suppose what I'm wondering is how E.B below is treated in the writeln. With an enum, it would be a manifest constant. Does static initialization of the struct do the same thing?

struct Enum(T)
{
    T A;
    T B;
}

static Enum!int E = {A:0, B:1};

void main()
{
    import std.stdio : writeln;
    writeln(E.B);
}
Sep 15
parent Jonathan M Davis via Digitalmars-d-learn writes:
On Friday, September 15, 2017 19:04:56 jmh530 via Digitalmars-d-learn wrote:
 On Friday, 15 September 2017 at 18:20:06 UTC, Jonathan M Davis

 wrote:
 It is my understanding that for both C and C++, an enum is
 always an int (unless you're talking about enum classes in
 C++). The size of an int can change based on your architecture,
 but AFAIK, all of the architectures supported by D guarantee it
 it be 32 bits in C/C++ (certainly, all of the architectures
 supported by dmd do), and druntime would have serious issues if
 it were otherwise, as it assumes all of the place that D's int
 is the same as C/C++'s int.

 It's certainly possible that my understanding of C/C++ enums is
 wrong, but if it is, you'd basically be screwed when dealing
 with any C functions that take an enum in any case that an enum
 wasn't 32 bits - especially if the C/C++ compiler could choose
 whatever size it wanted that fit the values.

 - Jonathan M Davis
Not to hijack the thread, but is there anything about enums that can't be done with a struct? The code below is just a simple example that I'm sure I could complicate unnecessarily to re-create much of the behavior of current enums with the syntax of std.tuple.

I suppose what I'm wondering is how E.B below is treated in the writeln. With an enum, it would be a manifest constant. Does static initialization of the struct do the same thing?

struct Enum(T)
{
    T A;
    T B;
}

static Enum!int E = {A:0, B:1};

void main()
{
    import std.stdio : writeln;
    writeln(E.B);
}
If you do that instead of using enum, you completely lose the extra bit of type safety that enums give you (e.g. assigning a string to a variable whose type is an enum with a base type of string is not legal) - though the type system is still annoyingly liberal with what it allows for enums (e.g. appending to an enum of base type string is legal, and doing bitwise operations on an enum results in the enum type instead of the base integral type). And final switch wouldn't work with the struct, whereas it does with enums. Also, a number of things in the standard library specifically treat enums in a special way (e.g. to!string and writeln use the enum's name rather than its value), and that would not happen with your struct.

With your struct, you've just namespaced a group of constants. Other than that, they're the same as if they were declared outside of the struct, and while IMHO D's enums are annoyingly lax in some of what they allow, they do do more with the type system than a manifest constant would.

- Jonathan M Davis
Sep 15
prev sibling parent reply Timothy Foster <timfost aol.com> writes:
On Friday, 15 September 2017 at 15:35:48 UTC, bitwise wrote:
 On Friday, 15 September 2017 at 07:24:34 UTC, Jonathan M Davis 
 wrote:
 On Friday, September 15, 2017 04:15:57 bitwise via 
 Digitalmars-d-learn wrote:
 I translated the headers for FreeType2 to D, and in many 
 cases, enums are used as struct members.

 If I declare an extern(C) enum in D, is it guaranteed to have 
 the same underlying type and size as it would for a C 
 compiler on the same platform?
extern(C) should have no effect on enums. It's for function linkage, and enums don't even have an address, so they don't actually end up in the program as a symbol. And since C's int and D's int are the same on all platforms that D supports (we'd have c_int otherwise, like we have c_long), any enum with a base type of int (which is the default) will match what's in C. - Jonathan M Davis
I'm confused...is it only C++ that has implementation defined enum size? I thought that was C as well.
I believe C enum size is implementation defined. A C compiler can pick the underlying type (1, 2, or 4 bytes, signed or unsigned) that fits the values in the enum.

A D int is always the same size as a C int because C ints are 4 bytes on 32-bit and above architectures, and D doesn't support architectures below 32-bit, so you never run into a case where a C int is 2 bytes.

D can't guarantee that the size of an extern(C) enum will match an arbitrary C compiler's choice, so I'm pretty sure it'll just default to a D int. It's further likely that padding in a struct will differ between C compilers, so if you need a D struct to be the same size as a C struct in every case... welp, that's not exactly going to be fun.
Sep 15
parent reply nkm1 <t4nk074 openmailbox.org> writes:
On Friday, 15 September 2017 at 19:21:02 UTC, Timothy Foster 
wrote:
 I believe C enum size is implementation defined. A C compiler 
 can pick the underlying type (1, 2, or 4 bytes, signed or 
 unsigned) that fits the values in the enum.
No, at least, not C99. See 6.4.4.3: "An identifier declared as an enumeration constant has type int". You must be thinking about C++.
Sep 15
next sibling parent bitwise <bitwise.pvt gmail.com> writes:
On Friday, 15 September 2017 at 19:35:50 UTC, nkm1 wrote:
 On Friday, 15 September 2017 at 19:21:02 UTC, Timothy Foster 
 wrote:
 I believe C enum size is implementation defined. A C compiler 
 can pick the underlying type (1, 2, or 4 bytes, signed or 
 unsigned) that fits the values in the enum.
No, at least, not C99. See 6.4.4.3: "An identifier declared as an enumeration constant has type int". You must be thinking about C++.
Thanks - this works for me. The bindings are for an open source C library, so I guess I'm safe as long as I can be sure I'm using a C99 compiler and strongly typing as int in D.

C++ seems to be a much more complicated situation, but it appears that for 'enum class' or 'enum struct' the underlying type is int, even when it's not specified. § 7.2:

[1] "The enum-keys enum class and enum struct are semantically equivalent; an enumeration type declared with one of these is a scoped enumeration, and its enumerators are scoped enumerators."

[2] "For a scoped enumeration type, the underlying type is int if it is not explicitly specified."

[1][2] http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf

Shame that even relatively new C++ code tends to use unscoped enums.
Sep 15
prev sibling parent reply Timothy Foster <timfost aol.com> writes:
On Friday, 15 September 2017 at 19:35:50 UTC, nkm1 wrote:
 On Friday, 15 September 2017 at 19:21:02 UTC, Timothy Foster 
 wrote:
 I believe C enum size is implementation defined. A C compiler 
 can pick the underlying type (1, 2, or 4 bytes, signed or 
 unsigned) that fits the values in the enum.
No, at least, not C99. See 6.4.4.3: "An identifier declared as an enumeration constant has type int". You must be thinking about C++.
You are correct, however 6.7.2.2 "Enumeration specifiers" states: "Each enumerated type shall be compatible with char, a signed integer type, or an unsigned integer type. The choice of type is implementation-defined, but shall be capable of representing the values of all the members of the enumeration."

I believe that means that if you have the following:

enum ABC { A, B, C }

Then A, B, and C are by themselves ints, but the enum type ABC can be a char if the compiler decides that's what it wants it to be.
Sep 15
parent reply nkm1 <t4nk074 openmailbox.org> writes:
On Saturday, 16 September 2017 at 03:06:24 UTC, Timothy Foster 
wrote:
 You are correct, however 6.7.2.2 "Enumeration specifiers" 
 states: "Each enumerated type shall be compatible with char, a 
 signed integer type, or an unsigned integer type. The choice of 
 type is implementation-defined, but shall be capable of 
 representing the values of all the members of the enumeration."

 I believe that means that if you have the following:

 enum ABC { A, B, C }

 Then A, B, and C are by themselves ints, but the enum type ABC 
 can be a char if the compiler decides that's what it wants it 
 to be.
Oops, you're right. Then the situation must be the same as in C++? If enum ABC is by itself a parameter of a function, the argument will be an int (and if it weren't, it would be promoted to int anyway), but if the enum is part of a structure, then it can be anything...

At least, if the enumerators themselves are ints, the enum type probably won't be larger than an int... small consolation :)
Sep 16
parent reply bitwise <bitwise.pvt gmail.com> writes:
On Saturday, 16 September 2017 at 12:34:58 UTC, nkm1 wrote:
 On Saturday, 16 September 2017 at 03:06:24 UTC, Timothy Foster 
 wrote:
 [...]
[...]
So it appears I'm screwed then. Example:

typedef enum FT_Size_Request_Type_
{
    FT_SIZE_REQUEST_TYPE_NOMINAL,
    FT_SIZE_REQUEST_TYPE_REAL_DIM,
    FT_SIZE_REQUEST_TYPE_BBOX,
    FT_SIZE_REQUEST_TYPE_CELL,
    FT_SIZE_REQUEST_TYPE_SCALES,
    FT_SIZE_REQUEST_TYPE_MAX

} FT_Size_Request_Type;

typedef struct FT_Size_RequestRec_
{
    FT_Size_Request_Type type;
    FT_Long width;
    FT_Long height;
    FT_UInt horiResolution;
    FT_UInt vertResolution;

} FT_Size_RequestRec;

FT_Size_Request_Type_ could be represented by char. Maybe the compiler makes it an int, maybe not. Maybe the compiler makes 'FT_Size_Request_Type_' char sized, but then pads 'FT_Size_RequestRec_' to align 'width' to 4 bytes... or maybe not. Maybe a member of 'FT_Size_Request_Type_' sits right before a char or bool in some struct... so I can't rely on padding.

I don't really see a way to deal with this aside from branching the entire library and inserting something like 'FT_SIZE_REQUEST_TYPE__FORCE_INT = 0xFFFFFFFF' into every enum in case the devs used it in a struct.
Sep 17
parent reply nkm1 <t4nk074 openmailbox.org> writes:
On Sunday, 17 September 2017 at 17:06:10 UTC, bitwise wrote:
 I don't really see a way to deal with this aside from branching 
 the entire library and inserting something like 
 'FT_SIZE_REQUEST_TYPE__FORCE_INT = 0xFFFFFFFF' into every enum 
 incase the devs used it in a struct.
Just put the burden on the users then. It's implementation defined, so they are in a position to figure it out... for example, gcc:

"Normally, the type is unsigned int if there are no negative values in the enumeration, otherwise int. If -fshort-enums is specified, then if there are negative values it is the first of signed char, short and int that can represent all the values, otherwise it is the first of unsigned char, unsigned short and unsigned int that can represent all the values. On some targets, -fshort-enums is the default; this is determined by the ABI."

https://gcc.gnu.org/onlinedocs/gcc-6.4.0/gcc/Structures-unions-enumerations-and-bit-fields-implementation.html#Structures-unions-enumerations-and-bit-fields-implementation

msvc++: "A variable declared as enum is an int."

https://docs.microsoft.com/en-us/cpp/c-language/enum-type

It's probably pretty safe to assume it's an int; people who play tricks with "-fshort-enums" deserve what's coming to them :)
Sep 17
parent reply bitwise <bitwise.pvt gmail.com> writes:
On Sunday, 17 September 2017 at 18:44:47 UTC, nkm1 wrote:
 On Sunday, 17 September 2017 at 17:06:10 UTC, bitwise wrote:
 [...]
Just put the burden on the users then. It's implementation defined, so they are in position to figure it out...
This isn't something that can really be done with bindings, which are important for D to start really picking up speed. If someone goes to code.dlang.org and decides to download some FreeType2 bindings, they should just work.

The memory corruption bugs that could occur due to binary incompatibility with some random copy of the original C library would be extremely hard to diagnose. They would also undermine the memory safety that a lot of people depend on when using D.
 for example, gcc: "Normally, the type is unsigned int if there 
 are no negative values in the enumeration, otherwise int. If 
 -fshort-enums is specified, then if there are negative values 
 it is the first of signed char, short and int that can 
 represent all the values, otherwise it is the first of unsigned 
 char, unsigned short and unsigned int that can represent all 
 the values. On some targets, -fshort-enums is the default; this 
 is determined by the ABI."
 https://gcc.gnu.org/onlinedocs/gcc-6.4.0/gcc/Structures-unions-enumerations-and-bit-fields-implementation.html#Structures-unions-enumerations-and-bit-fields-implementation

 msvc++: "A variable declared as enum is an int."
 https://docs.microsoft.com/en-us/cpp/c-language/enum-type
I was starting to think along these lines as well. With respect to the above, I'm wondering if something like this could be done:

`
template NativeEnumBase(long minValue, long maxValue)
{
    static if(platform A)
    {
        static if(minValue < 0) // need signed?
        {
            static if(maxValue > int.max) // need long?
                alias NativeEnumBase = long;
            else
                alias NativeEnumBase = int;
        }
        else
        {
            static if(maxValue > uint.max) // need long?
                alias NativeEnumBase = ulong;
            else
                alias NativeEnumBase = uint;
        }
    }
    else static if(platform B)
    {
        // etc...
        alias NativeEnumBase = long;
    }
    else
    {
        static assert(false, "unsupported compiler");
    }
}

enum Some_C_Enum_ : NativeEnumBase!(-1, 2)
{
    SCE_INVALID = -1,
    SCE_ZERO = 0,
    SCE_ONE = 1,
    SCE_TWO = 2,
}
`

So the question is, is there a way from inside D code to determine what the native enum size would be for a given set of min and max enum values? While C and C++ do not specify enum size, are there platform or compiler level specifications we could rely on?
 It's probably pretty safe to assume it's an int; people who 
 play tricks with "-fshort-enums" deserve what's coming to them 
 :)
Agreed ;)
Sep 17
parent reply Mike Parker <aldacron gmail.com> writes:
On Sunday, 17 September 2017 at 19:16:06 UTC, bitwise wrote:
 On Sunday, 17 September 2017 at 18:44:47 UTC, nkm1 wrote:
 On Sunday, 17 September 2017 at 17:06:10 UTC, bitwise wrote:
 [...]
Just put the burden on the users then. It's implementation defined, so they are in position to figure it out...
This isn't something that can really be done with bindings, which are important for D to start really picking up speed. If someone goes to code.dlang.org and decides to download some FreeType2 bindings, they should just work.

The memory corruption bugs that could occur due to binary incompatibility with some random copy of the original C library would be extremely hard to diagnose. They would also undermine the memory safety that a lot of people depend on when using D.
I've been maintaining bindings to multiple C libraries (including Freetype 2 bindings) for 13 years now. I have never encountered an issue with an enum size mismatch. That's not to say I never will. I would say it's something you just don't have to worry about. If, at some future time, any C compiler on any platform decides to start treating enums as something other than int or uint by default, then we can report a bug for D and fix it.
Sep 17
parent reply bitwise <bitwise.pvt gmail.com> writes:
On Monday, 18 September 2017 at 00:12:49 UTC, Mike Parker wrote:
 On Sunday, 17 September 2017 at 19:16:06 UTC, bitwise wrote:
 [...]
I've been maintaining bindings to multiple C libraries (including Freetype 2 bindings) for 13 years now. I have never encountered an issue with an enum size mismatch. That's not to say I never will.
For which platforms? I would have to actually go through the specs for each compiler on each platform to make sure before I felt comfortable accepting that int-sized enums were a de facto standard. I would be worried about iOS, for example. The following code will run fine on Windows, but crash on iOS due to the misaligned access:

char data[8];
int i = 0xFFFFFFFF;
int* p = (int*)&data[1];
*p++ = i;
*p++ = i;
*p++ = i;

I remember this issue presenting due to a poorly written serializer I used once (no idea who wrote it ;) and it makes me wonder what kind of other subtle differences there may be. I think there may be a few (clang and gcc?) different choices of compiler for the Android NDK as well.
Sep 17
next sibling parent Mike Parker <aldacron gmail.com> writes:
On Monday, 18 September 2017 at 02:04:49 UTC, bitwise wrote:
 On Monday, 18 September 2017 at 00:12:49 UTC, Mike Parker wrote:
 On Sunday, 17 September 2017 at 19:16:06 UTC, bitwise wrote:
 [...]
I've been maintaining bindings to multiple C libraries (including Freetype 2 bindings) for 13 years now. I have never encountered an issue with an enum size mismatch. That's not to say I never will.
For which platforms? I would have to actually go through the specs for each compiler on each platform to make sure before I felt comfortable accepting that int-sized enums were a de facto standard. I would be worried about iOS, for example. The following code will run fine on Windows, but crash on iOS due to the misaligned access:

char data[8];
int i = 0xFFFFFFFF;
int* p = (int*)&data[1];
*p++ = i;
*p++ = i;
*p++ = i;

I remember this issue presenting due to a poorly written serializer I used once (no idea who wrote it ;) and it makes me wonder what kind of other subtle differences there may be. I think there may be a few (clang and gcc?) different choices of compiler for the Android NDK as well.
I know for certain that Derelict packages have been used on Windows, Linux, OS X, FreeBSD, and Android. I'm unsure about iOS. But I'm fairly confident that enums are int there just like they are everywhere else.
Sep 17
prev sibling parent Moritz Maxeiner <moritz ucworks.org> writes:
On Monday, 18 September 2017 at 02:04:49 UTC, bitwise wrote:
 The following code will run fine on Windows, but crash on iOS 
 due to the misaligned access:
Interesting, does iOS crash such a process intentionally, or is it a side effect?
 char data[8];
 int i = 0xFFFFFFFF;
 int* p = (int*)&data[1];
Isn't this already undefined behaviour (6.3.2.3 p.7 of C11 [1] - present in earlier versions also, IIRC)?
 *p++ = i;
 *p++ = i;
 *p++ = i;
The last of these is also a buffer overflow. [1] http://iso-9899.info/n1570.html
Sep 18