
digitalmars.D - [Vote] So, which bit array slicing implementation should we go for?

Stewart Gordon <smjg_1998 yahoo.com> writes:
Three separate implementations of bit array slicing have been posted 
here and on digitalmars.D.bugs, as replacements for the current, 
completely non-functional 'implementation' that's worse than having no 
implementation at all.  And it definitely seems that Walter still needs 
help in deciding which one to hook up.

The three implementations tackle the problem in different ways.  They are:

1. http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D/3782
Restrict slicing to slices beginning on byte boundaries.

Pros:
- ABI consistency with other array types*
- Semantic consistency with other array types (slicing into arrays), in 
the cases that would be supported

Cons:
- Not a general solution
- Adding a restriction to the spec would be (arguably) a step backward

2. http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D.bugs/313
Copy slices that aren't byte-aligned.  (Actually, it copies all slices 
that don't start at zero, but changing that would be trivial.)

Pros:
- Can arbitrarily slice a bit array
- ABI consistency with other array types*

Con:
- Semantic inconsistency - unlike slices of every other array type, these 
slices would generally copy the data rather than slice into it 
(illustrated in the example further down)

3. http://www.digitalmars.com/drn-bin/wwwnews?digitalmars.D.bugs/495
Include a bit offset in each bit array/pointer reference.

Pros:
- Can arbitrarily slice into a bit array
- Semantic consistency with other array types, both by slicing into 
arrays and by being unrestricted
- Doubles as a means of enabling bit pointers to point to arbitrary 
bits, taking semantic consistency even further

Cons:
- Takes up slightly more memory
- ABI wouldn't match other array types*

*Debatable.  My opinion is that, considering that bits are naturally a 
special type, with characteristics not shared by any other type (atomic, 
derived or compound), we could get away with defining a special ABI for 
bit arrays and pointers.
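
For illustration only - the struct and field names below are invented, 
not taken from any of the three patches or from the D ABI spec - the two 
reference layouts being compared might look roughly like this:

    // Approaches 1 and 2: bit[] keeps the same two-word layout as every
    // other dynamic array.
    struct BitSlice
    {
        size_t length;   // length in bits
        void*  ptr;      // address of the byte holding the first bit
    }

    // Approach 3: an extra bit offset lets a slice (or a bit pointer)
    // start at any bit within a byte, at the cost of a slightly larger
    // reference that no longer matches the common array layout.
    struct OffsetBitSlice
    {
        size_t length;     // length in bits
        void*  ptr;        // address of the byte holding the first bit
        size_t bitOffset;  // 0 .. 7: index of the first bit within *ptr
    }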


Of course, these might not be all the pros and cons.
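
To make the "slice into" versus "copy" distinction concrete, here is a 
small sketch using int[] purely for illustration (present-day D, not 
taken from any of the patches):

    import std.stdio;

    void main()
    {
        int[] a = [0, 1, 2, 3, 4, 5, 6, 7];

        // For every other array type, a slice is a view into the
        // original data:
        int[] s = a[3 .. 6];
        s[0] = 99;
        writeln(a[3]);  // prints 99 - the write is visible through 'a'

        // Under approach 2, an unaligned bit[] slice would silently be a
        // copy instead, so the same kind of write would not show through
        // the original array.  Approach 3 keeps the view behaviour above.
    }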

Here are the votes so far:

Approach 1:
Arcane Jill

Approach 2:

Approach 3:
Stewart Gordon

Please cast your votes!

Stewart.

-- 
My e-mail is valid but not my primary mailbox, aside from its being the 
unfortunate victim of intensive mail-bombing at the moment.  Please keep 
replies on the 'group where everyone may benefit.
Jul 26 2004
Arcane Jill <Arcane_member pathlink.com> writes:
In article <ce2sm1$t3d$1 digitaldaemon.com>, Stewart Gordon says...

> Here are the votes so far:
>
> Approach 1:
> Arcane Jill

Did I vote for this?  Hmmm.  I don't remember doing that.  I remember 
posting a workaround which operated along the lines of Approach 1, but 
that doesn't mean it's my favorite.  Anyway, even if I did vote for (1) 
once, I'm allowed to change my mind.

Gotta be (3), obviously.  Complete consistency with all other types in 
all respects.  The ABI difference only means that bit* can't be cast to 
byte* and back again without losing information.  But so what?  Anyone 
who's used to writing typesafe C++ code will know that, in general, 
casting A* to B* and back to A* is rarely guaranteed to be safe anyway.

Arcane Jill
Jul 26 2004
Regan Heath <regan netwin.co.nz> writes:
On Mon, 26 Jul 2004 13:16:00 +0100, Stewart Gordon <smjg_1998 yahoo.com> 
wrote:

<snip - the three approaches and their pros and cons, quoted in full 
from the original post above>

> Here are the votes so far:
>
> Approach 1:
> Arcane Jill
>
> Approach 2:
>
> Approach 3:
> Stewart Gordon
Regan Heath

I believe semantic consistency is a primary concern.  IMO if you need 
this behaviour (arbitrary slicing without copying), you will not mind 
paying the memory cost.

As for the ABI difference, I agree with Arcane Jill.

Regan.

-- 
Using M2, Opera's revolutionary e-mail client: http://www.opera.com/m2/
Jul 26 2004