
digitalmars.D - SAOC LLDB D integration: 15th Weekly Update

reply Luís Ferreira <contact lsferreira.net> writes:
Hi D community!

Sorry for being late. I'm here again to describe what I did during the fifteenth week of Symmetry Autumn of Code.



I didn't work on the demangler patches, but I touched on some other existing ones, such as the implementation of `DW_TAG_immutable_type` in the LLVM core, which had some missing pieces, and I added tests. (See [here](https://reviews.llvm.org/D113633).)

I also added support for demanglers other than Itanium to the LLD linker. This includes the freshly added D demangler, along with Rust and any future demanglers added to the LLVM core.

So now instead of:

```
app.d:16: error: undefined reference to '_D3app7noexistFZi'
```

You will have this:

```
app.d:16: error: undefined reference to 'app.noexist()'
```

This came along with my work on adding the D demangler to the LLVM core. You can read more about this change [here](https://reviews.llvm.org/D116279).
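
To illustrate the dispatch this builds on (a sketch, assuming an LLVM build that already carries the new demanglers; this is not the LLD code itself), the generic `llvm::demangle()` helper picks a demangler from the symbol prefix:

```
// Sketch only: llvm::demangle() dispatches on the symbol prefix
// (_Z = Itanium, _R = Rust, _D = D) and returns the input unchanged
// if no demangler recognizes it.
#include "llvm/Demangle/Demangle.h"
#include <iostream>

int main() {
  std::cout << llvm::demangle("_ZN3foo3barEv") << "\n";     // foo::bar()
  std::cout << llvm::demangle("_D3app7noexistFZi") << "\n"; // app.noexist()
}
```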



I added the mapping from D type kind to type name for the rest of the built-in types.

I also found what was missing to make value dumping work. I needed to implement two pieces:

- A way to discover the bit size based on the D type wrapper's type kind.
- A way to get the type information for a given type kind using `lldb::TypeFlags`.

This way LLDB can tell whether a certain type kind is built-in, has a value, is signed, is an integer, is scalar, and so on.
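
Roughly, the two pieces fit together like this (a sketch only: `DTypeKind` and the helper names below are hypothetical stand-ins, while the `lldb::eType*` flags are the real LLDB `TypeFlags`):

```
// Illustrative sketch -- not the actual TypeSystemD code.
#include "lldb/lldb-enumerations.h"
#include <cstdint>

enum class DTypeKind { Bool, Char, Int, UInt, Float }; // hypothetical

// Report lldb::TypeFlags for a D built-in type kind.
static uint32_t GetTypeInfo(DTypeKind kind) {
  uint32_t flags = lldb::eTypeIsBuiltIn | lldb::eTypeHasValue;
  switch (kind) {
  case DTypeKind::Bool:
  case DTypeKind::Char:
  case DTypeKind::UInt:
    return flags | lldb::eTypeIsInteger | lldb::eTypeIsScalar;
  case DTypeKind::Int:
    return flags | lldb::eTypeIsInteger | lldb::eTypeIsScalar |
           lldb::eTypeIsSigned;
  case DTypeKind::Float:
    return flags | lldb::eTypeIsFloat | lldb::eTypeIsScalar |
           lldb::eTypeIsSigned;
  }
  return flags;
}

// Bit sizes for the same kinds; these are fixed by the D spec, unlike
// `real`, which is the platform-dependent case discussed below.
static uint64_t GetBitSize(DTypeKind kind) {
  switch (kind) {
  case DTypeKind::Bool:
  case DTypeKind::Char:  return 8;
  case DTypeKind::Int:
  case DTypeKind::UInt:
  case DTypeKind::Float: return 32;
  }
  return 0;
}
```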

So finally, I can print a simple runtime boolean value:

```
(lldb) ta v
Global variables for app.d in app:
(bool) app.falseval = false
(bool) app.trueval = true
```

You can consult the source code for those changes
[here](https://github.com/devtty63/llvm-project/tree/lldb-d/implement-typesystem-d).



With this implemented, I now need to check whether the DWARF bit size and encoding match a certain D type kind. The implementation of the other types is not yet pushed, since I hit a problem while adding logic for types with platform-specific sizes, such as `real`.



Since `real` is, according to the D specification, platform-specific, I need to accommodate the right bit size for a given target and discover the right floating-point encoding. This is quite a challenge, because DWARF doesn't specify the floating-point encoding. To understand why, I did a bit of research and found [this](https://gcc.gnu.org/legacy-ml/gcc/2015-10/msg00015.html) mailing list thread from 2015 about distinguishing different floating-point encodings in DWARF.

Right now there is no way, and it seems there is no intention, to distinguish target-specific floating-point formats in DWARF, because according to them this should be specified by the target ABI. But what if the ABI doesn't specify this behaviour? We should at least have a way to distinguish the IEEE interchange formats from non-interchange formats, like 128-bit x86 SSE floating point.
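
The producer side has the same limitation. As an illustration (a fragment assuming LLVM's `DIBuilder` API, not DMD/LDC/GDC code), all a compiler can record for a base type is a name, a bit size and a `DW_ATE_*` encoding:

```
// Illustrative only: the debug-info base type carries just a name, a bit
// size and an encoding. An 80-bit x87 extended real and a 128-bit IEEE
// quad both come out as DW_ATE_float, and padding can even make their
// storage sizes coincide, so a debugger cannot tell the formats apart.
#include "llvm/BinaryFormat/Dwarf.h"
#include "llvm/IR/DIBuilder.h"
#include "llvm/IR/Module.h"

void emitRealType(llvm::Module &M) {
  llvm::DIBuilder DIB(M);
  DIB.createBasicType("real", /*SizeInBits=*/128, llvm::dwarf::DW_ATE_float);
}
```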

Fortunately, we don't have to worry much about this, since we don't use 128-bit in any D implementation, although our spec says:

     real: largest floating point size available

     Implementation Defined: The real floating point type has at least the range and precision of the double type. On x86 CPUs it is often implemented as the 80 bit Extended Real type supported by the x86 FPU.

This is wrong because, AFAIK, on the x86-64 System V ABI, 128-bit floating point is the largest available, since AMD64 CPUs are required to have at least the SSE extensions, which support 128-bit XMM registers for floating-point operations.

So LDC and DMD generate binaries with System V as the target ABI but use the x87 FPU instead of SSE for `real`, which means they are out of spec?

Anyway, according to Mathias, and as I suggested, the simple way to do this is to hardcode it according to the target triple and the DWARF type name, but I think this can be problematic either when we support 128-bit floats or when the ABI doesn't specify the floating-point encoding format.
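
For the `real` type name, such hardcoding could look roughly like this (an illustrative sketch; the helper and the per-target choices are assumptions on my part, and this is exactly the part that breaks when the ABI is silent about the format):

```
// Sketch of the triple-based workaround; not the actual patch.
#include "llvm/ADT/APFloat.h"
#include "llvm/ADT/Triple.h"

static const llvm::fltSemantics &
RealSemanticsForTriple(const llvm::Triple &T) {
  switch (T.getArch()) {
  case llvm::Triple::x86:
  case llvm::Triple::x86_64:
    return llvm::APFloat::x87DoubleExtended(); // 80-bit extended precision
  case llvm::Triple::riscv32:
  case llvm::Triple::riscv64:
  case llvm::Triple::aarch64:
    return llvm::APFloat::IEEEquad();          // 128-bit IEEE binary128
  default:
    return llvm::APFloat::IEEEdouble();        // fall back to 64-bit
  }
}
```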

That said, I would like some thoughts on this, especially if someone knows whether there are any special cases for certain targets and how DMD/LDC/GDC interpret the D spec and the target ABI spec.



I plan to finish support for built-in type value dumping and hopefully start implementing DIDerivedType, which covers the DWARF tags for `const` type modifiers, `alias`/`typedef`s, and more.
Dec 30 2021
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Friday, 31 December 2021 at 03:55:40 UTC, Luís Ferreira wrote:
 Right now there is no way, and it seems there is no intention, to distinguish target-specific floating-point formats in DWARF, because according to them this should be specified by the target ABI. But what if the ABI doesn't specify this behaviour? We should at least have a way to distinguish the IEEE interchange formats from non-interchange formats, like 128-bit x86 SSE floating point.

 Fortunately, we don't have to worry much about this, since we don't use 128-bit in any D implementation, although our spec says:
We do support native 128-bit floats in D, unless you meant in the compiler implementation, in which case, all native floats (not just real) are banned throughout the compiler.
     real: largest floating point size available

     Implementation Defined: The real floating point type has at least the range and precision of the double type. On x86 CPUs it is often implemented as the 80 bit Extended Real type supported by the x86 FPU.

 This is wrong because, AFAIK, on the x86-64 System V ABI, 128-bit floating point is the largest available, since AMD64 CPUs are required to have at least the SSE extensions, which support 128-bit XMM registers for floating-point operations.

 So LDC and DMD generate binaries with System V as the target ABI but use the x87 FPU instead of SSE for `real`, which means they are out of spec?

 Anyway, according to Mathias, and as I suggested, the simple way to do this is to hardcode it according to the target triple and the DWARF type name, but I think this can be problematic either when we support 128-bit floats or when the ABI doesn't specify the floating-point encoding format.

 That said, I would like some thoughts on this, especially if someone knows whether there are any special cases for certain targets and how DMD/LDC/GDC interpret the D spec and the target ABI spec.
Just have that `real` map to C `long double` and be done with it, even if the hardware may support a bigger float. You don't want to be incompatible with the system you're running on, else you'll be locked out of using the C math library.
Dec 31 2021
parent reply Luís Ferreira <contact lsferreira.net> writes:
On Fri, 2021-12-31 at 17:03 +0000, Iain Buclaw via Digitalmars-d wrote:
 We do support native 128-bit floats in D, unless you meant in the compiler implementation, in which case, all native floats (not just real) are banned throughout the compiler.
Oh ok, I didn't know about that. For now, I hardcoded the 64-, 80- and 128-bit real type kinds. Later, if we find out that `real` is intended to map directly to `long double`, I may use clang::TargetInfo, which gives the `long double` bit size for a specified target triple.
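
Roughly, that idea would look like this (a sketch only, using clang's public `TargetInfo::CreateTargetInfo` entry point; not actual patch code):

```
// Sketch: ask clang what `long double` looks like for a given triple.
#include "clang/Basic/Diagnostic.h"
#include "clang/Basic/DiagnosticIDs.h"
#include "clang/Basic/DiagnosticOptions.h"
#include "clang/Basic/TargetInfo.h"
#include "clang/Basic/TargetOptions.h"
#include "llvm/Support/raw_ostream.h"
#include <memory>

int main() {
  clang::DiagnosticsEngine Diags(new clang::DiagnosticIDs(),
                                 new clang::DiagnosticOptions());
  auto Opts = std::make_shared<clang::TargetOptions>();
  Opts->Triple = "x86_64-unknown-linux-gnu";
  std::unique_ptr<clang::TargetInfo> TI(
      clang::TargetInfo::CreateTargetInfo(Diags, Opts));
  // Storage width in bits: 128 here, even though the x86-64 format is the
  // 80-bit x87 extended type padded out to 16 bytes.
  llvm::outs() << TI->getLongDoubleWidth() << "\n";
}
```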

 Just have that `real` map to C `long double` and be done with it, even if the hardware may support a bigger float. You don't want to be incompatible with the system you're running on, else you'll be locked out of using the C math library.
Well, I don't think that directly mapping it is correct, e.g. https://godbolt.org/z/66f6v17Tn . Is this intended?

Anyway, I still think we should discuss the specification wording about how `real` is implemented for each target. Maybe it's worth mentioning `long double` if direct mapping is intended? The System V ABI is specific about the `long double` size, and it is not the largest supported floating point, as I mentioned above.

--
Sincerely,
Luís Ferreira
lsferreira.net
Jan 04 2022
parent reply Iain Buclaw <ibuclaw gdcproject.org> writes:
On Wednesday, 5 January 2022 at 04:34:31 UTC, Luís Ferreira wrote:
 On Fri, 2021-12-31 at 17:03 +0000, Iain Buclaw via 
 Digitalmars-d wrote:
 We do support native 128-bit floats in D, unless you meant in 
 the compiler implementation, in which case, all native floats 
 (not just real) are banned throughout the compiler.
Oh ok, I didn't know about that. For now, I hardcoded the 64-, 80- and 128-bit real type kinds. Later, if we find out that `real` is intended to map directly to `long double`, I may use clang::TargetInfo, which gives the `long double` bit size for a specified target triple.
This is always the case with gdc, so that would be highly recommended.
 Just have that `real` map to C `long double` and be done with 
 it, even if the hardware may support a bigger float. You don't 
 want to be incompatible with the system you're running on, 
 else you'll be locked out of using the C math library.
Well, I don't think that directly mapping it is correct, e.g. https://godbolt.org/z/66f6v17Tn . Is this intended?
Looks like ldc is in the wrong there, real.sizeof should always be 113 on RISC-V. https://explore.dgnu.org/z/9MsGjG
Jan 05 2022
parent Luís Ferreira <contact lsferreira.net> writes:
On Wed, 2022-01-05 at 21:52 +0000, Iain Buclaw via Digitalmars-d wrote:
 This is always the case with gdc, so that would be highly recommended.

 Looks like ldc is in the wrong there, real.sizeof should always be 113 on RISC-V.

 https://explore.dgnu.org/z/9MsGjG
Right. I'm going to write a patch to fix that, then. I'm also going to create a patch for the specification to clarify the wording and discuss it there. For now I'm going to stick with the hardcoded version I created, just for testing purposes, then update it to use clang::TargetInfo to reflect the `long double` behaviour.

--
Sincerely,
Luís Ferreira
lsferreira.net
Jan 06 2022