
digitalmars.D - Initial feedback for std.experimental.image

reply "Rikki Cattermole" <alphaglosined gmail.com> writes:
So as part of my testing of std.experimental.color, I began 
writing an image library in the style of ae.graphics.
It's now ready for very initial feedback.
There are not many image manipulation functions or storage types 
yet, let alone image loading/exporting. In fact, no image file 
format is actually working at this point.
Only the shell of a PNG one has been written so far.

I believe it is ready for initial feedback because I feel it is 
moving into "controversial" territory in its design, with the 
file format support.
I just need to know that I am going in the right direction as of 
now. There is no point in continuing the image reading/writing 
support as it is if nobody likes it.

Docs: http://cattermole.co.nz/phobosImage/docs/html/
Source: http://cattermole.co.nz/phobosImage/

If reviewing the code itself is too much of a hassle, please 
review the specification document. 
http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_specification.html

Like ae.graphics, it supports ranges for manipulating image 
storage types.
However, it also uses std.experimental.allocator.

Please destroy!


Some thoughts of mine:
- std.zlib will need to support allocators
- Can std.experimental.allocator be @nogc pleaseeeee?
Jul 06 2015
next sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 If reviewing the code itself is too much of a hassle, please 
 review the specification document. 
 http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_specification.html
 Use the Cartesian coordinate system, using two (X and Y) size_t 
 integers
Why not just int? There are precious few images addressable beyond 2^31. size_t has a variable size and is a source of build breakage. Also, please no unsigned integers! They create all manner of bugs because of how integer promotion works.
 A major concern for D is templated code. Template bloat, as it 
 is known, occurs when large numbers of templates are 
 instantiated with a vast array of compile-time arguments. For 
 this very reason, template usage must be constrained to the bare 
 minimum. In keeping with other highly reusable D code, templates 
 will be used, but constrained to colors (usage, not definition) 
 and constructors as much as possible.
I don't think there are that many problems with Phobos-level meta-programming. There are also ways to reduce template instantiations with type-punning. To me, an image library should be based on a static interface, with optionally a wrapper to auto-generate the dynamic interface, much like in std.allocator. Is that the case here? Your library seems to be based around the dynamic 'interface' instead.
Jul 06 2015
next sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 7/07/2015 3:34 a.m., ponce wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 If reviewing the code itself is too much of a hassel, please review
 the specification document.
 http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_specification.html
 Use the Cartesian coordinates system using two (X and Y) size_t integers
Why not just int? There are precious few images addressable beyond 2^31. size_t has a variable size and is a source of build breakage. Also, please no unsigned integers! They create all manner of bugs because of how integer promotion works.
I've been sold on the signed vs unsigned type issue for, and only for, the x and y coordinates. Although I'm a little torn about using a specific integral type instead of ptrdiff_t. For the time being, I'll alias it and use that instead; this will need to be decided later on. Offsets will still be size_t, as anything else would be too limiting.
 A major concern for D is templated code. Template bloat, as it is 
 known, occurs when large numbers of templates are instantiated 
 with a vast array of compile-time arguments. For this very 
 reason, template usage must be constrained to the bare minimum. 
 In keeping with other highly reusable D code, templates will be 
 used, but constrained to colors (usage, not definition) and 
 constructors as much as possible.
I don't think there are that many problems with Phobos-level meta-programming. There are also ways to reduce template instantiations with type-punning. To me, an image library should be based on a static interface, with optionally a wrapper to auto-generate the dynamic interface, much like in std.allocator. Is that the case here? Your library seems to be based around the dynamic 'interface' instead.
I chose structs primarily for my types because they can be optimized away and allocated on the stack instead of the heap. If people prefer classes, they are more than welcome to use them; there are interfaces for just this purpose, identical to ranges. But as with ranges, there is no point in relying solely on them. Instead I have the SwappableImage struct, which can wrap up any image storage type you wish via its constructor, with no allocation. I am very proud of this.
Jul 06 2015
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tuesday, 7 July 2015 at 03:39:00 UTC, Rikki Cattermole wrote:
 I've been sold on the unsigned vs signed type issue for and 
 only for the x and y coordinates.
The first version of ae.utils.graphics had unsigned coordinates. As I found out, this was a mistake. A rule of thumb I discovered is that for any numbers you may want to subtract, you should use signed types. Otherwise, operations such as drawing an arc with its center point off-canvas (with negative coordinates) become unreasonably complicated.
Jul 09 2015
next sibling parent reply "rikki cattermole" <alphaglosined gmail.com> writes:
On Thursday, 9 July 2015 at 23:35:02 UTC, Vladimir Panteleev 
wrote:
 On Tuesday, 7 July 2015 at 03:39:00 UTC, Rikki Cattermole wrote:
 I've been sold on the unsigned vs signed type issue for and 
 only for the x and y coordinates.
The first version of ae.utils.graphics had unsigned coordinates. As I found out, this was a mistake. A rule of thumb I discovered is that any numbers you may want to subtract, you should use signed types. Otherwise, operations such as drawing an arc with its center point off-canvas (with negative coordinates) becomes unreasonably complicated.
Canvas API != image library. I'm quite happy for the canvas API to use signed integers while the image library does not; after all, the canvas API would just wrap how it's being stored. You lose the ability to have images as large as the CPU architecture can support, but meh, not my problem. The canvas API being built on top should sanitise coordinates as need be; this is not unreasonable. A little-known fact is that Devisualization.Util features a type called LineGraph, which basically generates points from primitives such as lines and arcs, with support for deltas. https://github.com/Devisualization/util/blob/master/source/core/devisualization/util/core/linegraph.d Of course it was buggy, but I ran into the same issue you did. My view is that the canvas should definitely be signed; the underlying image storage type just doesn't need to use it.
Jul 09 2015
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Friday, 10 July 2015 at 04:56:53 UTC, rikki cattermole wrote:
 On Thursday, 9 July 2015 at 23:35:02 UTC, Vladimir Panteleev 
 wrote:
 On Tuesday, 7 July 2015 at 03:39:00 UTC, Rikki Cattermole 
 wrote:
 I've been sold on the unsigned vs signed type issue for and 
 only for the x and y coordinates.
The first version of ae.utils.graphics had unsigned coordinates. As I found out, this was a mistake. A rule of thumb I discovered is that any numbers you may want to subtract, you should use signed types. Otherwise, operations such as drawing an arc with its center point off-canvas (with negative coordinates) becomes unreasonably complicated.
Canvas API != image library. I'm quite happy for the canvas API to use signed integers. While the image library does not. After all the canvas API would just wrap how its being stored.
Why must there be a distinction between canvas and image? Seems like a pointless abstraction layer.
 You lose the ability to have such large images as the CPU 
 architecture can support but meh. Not my problem.
No, that's wrong. The limit (for 32 bits) is 2 billion pixels for signed coordinates ON ONE AXIS. So the only situation in which an unsigned width/height will be advantageous is a 1-pixel-wide/tall image with a 1-byte-per-pixel-or-less color type.
 The canvas API being built on top should sanitise coordinates 
 as need be. This is not unreasonable.
I think it is. If you want to draw something 10 pixels away from the right border, the expression img.width-10 will be unsigned.
Jul 09 2015
parent reply "rikki cattermole" <alphaglosined gmail.com> writes:
On Friday, 10 July 2015 at 05:01:57 UTC, Vladimir Panteleev wrote:
 On Friday, 10 July 2015 at 04:56:53 UTC, rikki cattermole wrote:
 On Thursday, 9 July 2015 at 23:35:02 UTC, Vladimir Panteleev 
 wrote:
 On Tuesday, 7 July 2015 at 03:39:00 UTC, Rikki Cattermole 
 wrote:
 I've been sold on the unsigned vs signed type issue for and 
 only for the x and y coordinates.
The first version of ae.utils.graphics had unsigned coordinates. As I found out, this was a mistake. A rule of thumb I discovered is that any numbers you may want to subtract, you should use signed types. Otherwise, operations such as drawing an arc with its center point off-canvas (with negative coordinates) becomes unreasonably complicated.
Canvas API != image library. I'm quite happy for the canvas API to use signed integers. While the image library does not. After all the canvas API would just wrap how its being stored.
Why must there be a distinction between canvas and image? Seems like a pointless abstraction layer.
A canvas API may exist in name only. In other words, instead of taking size_t it would take ptrdiff_t in, e.g., free functions.
 You lose the ability to have such large images as the CPU 
 architecture can support but meh. Not my problem.
No, that's wrong. The limit (for 32 bits) is 2 billion pixels for signed coordinates ON ONE AXIS. So the only situation in which an unsigned width/height will be advantageous is a 1-pixel-wide/tall image with a 1-byte-per-pixel-or-less color type.
Okay, okay. I'm getting picky, I know it's silly.
 The canvas API being built on top should sanitise coordinates 
 as need be. This is not unreasonable.
I think it is. If you want to draw something 10 pixels away from the right border, the expression img.width-10 will be unsigned.
Okay, you have a point here with img.width. Hmm, complicated problem. To the average developer, having to wrap an image type in e.g. CanvasImage before using it, even if it were a struct, would just seem useless, merely to make the index types signed. What we need is an unsigned integer type of word size that can hold an out-of-bounds value, a.k.a. a negative one, while also supporting both signed and unsigned calculations. So basically an unsigned integer behaving like a signed one. Now wouldn't that solve the problem we have?
Jul 09 2015
next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 10 July 2015 at 05:27:21 UTC, rikki cattermole wrote:
 What we need is a unsigned integer type of word size that can 
 have an out of bounds value aka negative. While also supporting 
 calculations that is both signed and unsigned.
 So basically a unsigned integer behaving like a signed one.
 Now wouldn't that solve the problem we have?
Yeah, we have those; they are called signed integers... But for real, you are way overcomplicating the problem. Just use an int.
Jul 10 2015
prev sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Friday, 10 July 2015 at 05:27:21 UTC, rikki cattermole wrote:
 What we need is a unsigned integer type **of word size**
Sorry for being picky there. At least on x86, machine-word-size integers, i.e. 64-bit integers in 64-bit mode, are generally not faster.

- First, all operations operate on 32-bit integers by default, except those with a REX prefix. The important part is that _instructions with a REX prefix take more bytes_. This has two effects: one on the size of the code itself (expect more icache usage), and one on the instruction decoding queue (the processor can't reorder overly long instructions too far; I believe the limit is a 16-byte lookahead or something).

- Secondly, to avoid false dependencies, the upper 32 bits of registers are cleared: mov ECX, 0; // will clear the whole RCX register

So while the code size may not be a big deal in the grand scheme of things, there are literally no downsides to using as many 32-bit operations as possible, even in 64-bit mode on x86. I don't know about ARM or others.
Jul 21 2015
parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 22/07/2015 4:01 a.m., ponce wrote:
 On Friday, 10 July 2015 at 05:27:21 UTC, rikki cattermole wrote:
 What we need is a unsigned integer type **of word size**
Sorry for being picky there. At least on x86, machine-word-size integers, i.e. 64-bit integers in 64-bit mode, are generally not faster.

- First, all operations operate on 32-bit integers by default, except those with a REX prefix. The important part is that _instructions with a REX prefix take more bytes_. This has two effects: one on the size of the code itself (expect more icache usage), and one on the instruction decoding queue (the processor can't reorder overly long instructions too far; I believe the limit is a 16-byte lookahead or something).

- Secondly, to avoid false dependencies, the upper 32 bits of registers are cleared: mov ECX, 0; // will clear the whole RCX register

So while the code size may not be a big deal in the grand scheme of things, there are literally no downsides to using as many 32-bit operations as possible, even in 64-bit mode on x86. I don't know about ARM or others.
I would rather the pickiness begin now, not later. Having these sorts of problems sorted out now means that when it's time for review, you can't say I haven't argued for the design. So in your view, I should ask the compiler devs of e.g. ldc/gdc what would be the better way to go for production code in this instance?
Jul 21 2015
parent "ponce" <contact gam3sfrommars.fr> writes:
On Tuesday, 21 July 2015 at 16:27:33 UTC, Rikki Cattermole wrote:
 So in your view, I should ask the compiler devs of e.g. ldc/gdc 
 of what would be the better way to go for production code in 
 this instance?
Why not? This wasn't about std.experimental.image specifically; I just wanted to point out something that is counter-intuitive (of course this needs to be cross-checked). We discussed signed vs unsigned, and I think Vladimir made compelling arguments for signed integers. 32-bit vs 64-bit vs machine-size words is another choice to make, and I'd rather complain now than at vote time :).
Jul 21 2015
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Thu, 09 Jul 2015 23:35:00 +0000, Vladimir Panteleev wrote:

 On Tuesday, 7 July 2015 at 03:39:00 UTC, Rikki Cattermole wrote:
 I've been sold on the unsigned vs signed type issue for and only for
 the x and y coordinates.
The first version of ae.utils.graphics had unsigned coordinates. As I found out, this was a mistake. A rule of thumb I discovered is that for any numbers you may want to subtract, you should use signed types. Otherwise, operations such as drawing an arc with its center point off-canvas (with negative coordinates) become unreasonably complicated.
given that `int` and `uint` are freely interchangeable... `uint` is better, as it allows only one bounds check in `if`, and without casts. ;-)
Jul 10 2015
parent reply "Vladimir Panteleev" <thecybershadow.lists gmail.com> writes:
On Friday, 10 July 2015 at 23:21:02 UTC, ketmar wrote:
 On Thu, 09 Jul 2015 23:35:00 +0000, Vladimir Panteleev wrote:

 On Tuesday, 7 July 2015 at 03:39:00 UTC, Rikki Cattermole 
 wrote:
 I've been sold on the unsigned vs signed type issue for and 
 only for the x and y coordinates.
The first version of ae.utils.graphics had unsigned coordinates. As I found out, this was a mistake. A rule of thumb I discovered is that any numbers you may want to subtract, you should use signed types. Otherwise, operations such as drawing an arc with its center point off-canvas (with negative coordinates) becomes unreasonably complicated.
giving that `int` and `uint` are freely interchangeable... `uint` is better, as it allows only one bound check in `if`, and without casts. ;-)
That's a poor reason. Optimizing x >= 0 && x < 100 into an unsigned check is such a basic optimization that even DMD can do it.
Jul 10 2015
parent ketmar <ketmar ketmar.no-ip.org> writes:
On Sat, 11 Jul 2015 01:51:41 +0000, Vladimir Panteleev wrote:

 giving that `int` and `uint` are freely interchangeable... `uint` is
 better, as it allows only one bound check in `if`, and without casts.
 ;-)
That's a poor reason. Optimizing x >= 0 && x < 100 to an unsigned check is such a basic optimization, even DMD can do it.
but `if (x < 100)` is less typing than `if (x >= 0 && x < 100)`!
Jul 10 2015
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 7 July 2015 at 01:34, ponce via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 If reviewing the code itself is too much of a hassel, please review the
 specification document.
 http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_specification.html
 Use the Cartesian coordinates system using two (X and Y) size_t integers
Why not just int? There is preciously little addressable images beyond 2^31. size_t has a variable size and is a source of build breaking. Also, please no unsigned integers! This create all manners of bugs because of how integer promotion work.
size_t doesn't quietly cast to int. Most iterators are already size_t. Indexes should be size_t for convention's sake, otherwise you cast everywhere. Indices should be unsigned, otherwise you need two tests for validity rather than just one: (x >= 0 && x < length) rather than just (x < length).
Jul 07 2015
next sibling parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 7/07/2015 8:50 p.m., Manu via Digitalmars-d wrote:
 On 7 July 2015 at 01:34, ponce via Digitalmars-d
 <digitalmars-d puremagic.com> wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 If reviewing the code itself is too much of a hassel, please review the
 specification document.
 http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_specification.html
 Use the Cartesian coordinates system using two (X and Y) size_t integers
Why not just int? There is preciously little addressable images beyond 2^31. size_t has a variable size and is a source of build breaking. Also, please no unsigned integers! This create all manners of bugs because of how integer promotion work.
size_t doesn't quietly cast to int. Most iterators are already size_t. Indexes should be size_t for convention's sake, otherwise you cast everywhere. Indices should be unsigned, otherwise you need two tests for validity rather than just one: (x >= 0 && x < length) rather than just (x < length).
Well, you both have very valid points. In this case, let's go with: it's already written, so let's not change it just because it might solve a few bugs while potentially adding others.
Jul 07 2015
prev sibling parent reply "ponce" <contact gam3sfrommars.fr> writes:
On Tuesday, 7 July 2015 at 08:50:45 UTC, Manu wrote:

 Most iterators are already size_t.
.length should have been int, but it's too late. It's not because something is a "size" that it should be size_t.
 Indexes shoulds be size_t for conventions sake, otherwise you 
 cast everywhere.
Disagree. I think it's the other way around. And image coordinates are not especially indices.
 Indices should be unsigned, otherwise you need to do 2 tests for
 validity, rather than just one; (x >= 0 && x < length) rather 
 than
 just (x < length).
You can force an unsigned comparison with (cast(uint)x < length). Signed indices optimize better in loops because signed overflow is undefined behaviour. http://stackoverflow.com/questions/18795453/why-prefer-signed-over-unsigned-in-c
Jul 07 2015
next sibling parent reply "David Nadlinger" <code klickverbot.at> writes:
On Tuesday, 7 July 2015 at 12:10:10 UTC, ponce wrote:
 Signed indices optimize better in loops because signed overflow 
 is undefined behaviour.

 http://stackoverflow.com/questions/18795453/why-prefer-signed-over-unsigned-in-c
Only in C/C++. In D, they are defined to overflow according to two's complement. — David
Jul 07 2015
parent "ponce" <contact gam3sfrommars.fr> writes:
On Tuesday, 7 July 2015 at 14:02:53 UTC, David Nadlinger wrote:
 Only in C/C++. In D, they are defined to overflow according to 
 two's complement.

  — David
Thanks for the correction.
Jul 07 2015
prev sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Tue, 07 Jul 2015 12:10:07 +0000, ponce wrote:

 Signed indices optimize better in loops because signed overflow is
 undefined behaviour.
one of the reasons i dropped C. luckily, in D it's defined.
Jul 07 2015
prev sibling next sibling parent reply "Meta" <jared771 gmail.com> writes:
On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 I believe it is ready for initial feedback because I feel it is 
 moving towards "controversial" territory in its design. With 
 the file format support.
 I just need to know that I am going in the right direction as 
 of now. There is no point in continuing the image 
 reading/writing support like it is, if nobody likes it.
For ImageStorage, why provide both pixelAt/pixelStoreAt (bad naming IMO) and opIndex/opIndexAssign?
Jul 06 2015
parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 7/07/2015 4:55 a.m., Meta wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 I believe it is ready for initial feedback because I feel it is moving
 towards "controversial" territory in its design. With the file format
 support.
 I just need to know that I am going in the right direction as of now.
 There is no point in continuing the image reading/writing support like
 it is, if nobody likes it.
For ImageStorage, why provide both pixelAt/pixelStoreAt (bad naming IMO) and opIndex/opIndexAssign?
So it can be used here: Reasons why this could have been a problem: vs In other words, overloads. Operator overloads here are just syntax candy, which is required for everyday use. But the named functions are required so they can be used as delegate pointers in other types. If you want other names, please suggest them! After all, I only came up with these at like 3am.
Jul 06 2015
next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tuesday, 7 July 2015 at 03:41:59 UTC, Rikki Cattermole wrote:
 On 7/07/2015 4:55 a.m., Meta wrote:
 For ImageStorage, why provide both pixelAt/pixelStoreAt (bad 
 naming IMO)
If you want another name, please suggest them! After all, I only came up with it at like 3am.
getPixel/putPixel or a variation of such? This is the most common name for such functions.
Jul 09 2015
parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 10/07/2015 11:41 a.m., Vladimir Panteleev wrote:
 On Tuesday, 7 July 2015 at 03:41:59 UTC, Rikki Cattermole wrote:
 On 7/07/2015 4:55 a.m., Meta wrote:
 For ImageStorage, why provide both pixelAt/pixelStoreAt (bad naming IMO)
If you want another name, please suggest them! After all, I only came up with it at like 3am.
getPixel/putPixel or a variation of such? This is the most common name for such functions.
Fine by me.
Jul 09 2015
parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Friday, 10 July 2015 at 03:00:57 UTC, Rikki Cattermole wrote:
 getPixel/putPixel or a variation of such? This is the most 
 common name
 for such functions.
Fine by me.
This is honestly just nitpicking, but I think I see setPixel more often than putPixel.
Jul 09 2015
next sibling parent "rikki cattermole" <alphaglosined gmail.com> writes:
On Friday, 10 July 2015 at 03:59:32 UTC, Tofu Ninja wrote:
 On Friday, 10 July 2015 at 03:00:57 UTC, Rikki Cattermole wrote:
 getPixel/putPixel or a variation of such? This is the most 
 common name
 for such functions.
Fine by me.
This is honestly just nitpicking, but I see setPixel more than putPixel I think.
I'm fine with it. I'd rather just get it over and done with now than let this come up later in the review period.
Jul 09 2015
prev sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Fri, 10 Jul 2015 03:59:29 +0000, Tofu Ninja wrote:

 On Friday, 10 July 2015 at 03:00:57 UTC, Rikki Cattermole wrote:
 getPixel/putPixel or a variation of such? This is the most common name
 for such functions.
Fine by me.
=20 This is honestly just nitpicking, but I see setPixel more than putPixel I think.
i'm using `setPixel` to change a pixel unconditionally (think of it as "set rgb"), and `putPixel` to blend a pixel (think of it as "do rgba color mix").
Jul 10 2015
parent reply "Meta" <jared771 gmail.com> writes:
On Friday, 10 July 2015 at 23:23:32 UTC, ketmar wrote:
 On Fri, 10 Jul 2015 03:59:29 +0000, Tofu Ninja wrote:

 On Friday, 10 July 2015 at 03:00:57 UTC, Rikki Cattermole 
 wrote:
 getPixel/putPixel or a variation of such? This is the most 
 common name for such functions.
Fine by me.
This is honestly just nitpicking, but I see setPixel more than putPixel I think.
i'm using `setPixel` to change pixel unconditionally (think of it as "set rgb"), and `putPixel` to blend pixel (think of it as "do rgba color mix").
Wouldn't that better be called blendPixel?
Jul 10 2015
next sibling parent "Tofu Ninja" <emmons0 purdue.edu> writes:
On Saturday, 11 July 2015 at 00:49:27 UTC, Meta wrote:
 Wouldn't that better be called blendPixel?
Also, wouldn't blendPixel need to know the blending scheme: alpha, premultiplied, additive, etc.?
Jul 10 2015
prev sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Sat, 11 Jul 2015 00:49:25 +0000, Meta wrote:

 i'm using `setPixel` to change pixel unconditionally (think of it as
 "set rgb"), and `putPixel` to blend pixel (think of it as "do rgba
 color mix").
=20 Wouldn't that better be called blendPixel?
as Tofu said, `blendPixel` should then be parameterized with the blending mode, whereas for `putPixel` i know the blending mode, as it is fixed. anyway, that was only a suggestion and (poor) reasoning for it; i'm not insisting on that scheme.
Jul 10 2015
prev sibling parent reply "Suliman" <evermind live.ru> writes:
Hi! Is there any progress?
Sep 03 2015
next sibling parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 03/09/15 10:28 PM, Suliman wrote:
 Hi! Is there any progress?
Indeed! Quite a bit really. https://github.com/rikkimax/alphaPhobos/tree/master/source/std/experimental/graphic/image PNG should now be import/exportable; I have not, ehh, tested this just yet, however. Building a test framework for it will be a rather major task and could take ~9 hours at least. I believe I'll be working on skewing today on stream, in about an hour. https://www.livecoding.tv/alphaglosined/ My focus isn't too much on the image library; it's more on all the other code in that repository. My timeline still has about a year on it till I'm feeling ready to start the PR process, after all :)
Sep 03 2015
prev sibling parent "Rikki Cattermole" <alphaglosined gmail.com> writes:
On Thursday, 3 September 2015 at 10:28:50 UTC, Suliman wrote:
 Hi! Is there any progress?
Update from during today's stream. Instead of adding skew manipulation, I'm adding clever range support functions (createImageFrom and assignTo).

ImageImpl image1;    // input
ImageImpl[2] image2; // buffers

write("myfile.png", image1.rangeOf
    .map!(p => { p.y += p.x * addY; p.height = addY * p.height; })
    .createImageFrom!ImageImpl(image2[0], theAllocator())
    .assignTo(image2[0], RGB8(255, 255, 255))
    .rangeOf
    .flipHorizontalRange
    .rotate90Range
    .flipVerticalRange
    .createImageFrom!ImageImpl(image2[1], theAllocator())
    .copyInto(image2[1])
    .asPNG.toBytes()
);

theAllocator().dispose(image2[0]);
theAllocator().dispose(image2[1]);
Sep 03 2015
prev sibling next sibling parent reply "Tofu Ninja" <emmons0 purdue.edu> writes:
On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 So as part of my testing of std.experimental.color I began 
 writing an image ala ae.graphics style.
 It's now ready for very initial feedback.
 There is not many image manipulation functions or storage 
 types. Let alone image loading/exporting. In fact there is no 
 image file format actually working at this point.
 Only the shell for a PNG one is so far written.

 I believe it is ready for initial feedback because I feel it is 
 moving towards "controversial" territory in its design. With 
 the file format support.
 I just need to know that I am going in the right direction as 
 of now. There is no point in continuing the image 
 reading/writing support like it is, if nobody likes it.

 Docs: http://cattermole.co.nz/phobosImage/docs/html/
 Source: http://cattermole.co.nz/phobosImage/

 If reviewing the code itself is too much of a hassel, please 
 review the specification document. 
 http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_specification.html

 Like ae.graphics it supports ranges for manipulating of image 
 storage types.
 However it does also use std.experimental.allocator.

 Please destroy!


 Some thoughts of mine:
 - std.zlib will need to support allocators
 - Can std.experimental.allocator be  nogc pleaseeeee?
For my uses, the only thing that I really care about is file saving and loading. Personally, I would prefer the lib to depend on something that already works well for image loading and storing, like FreeImage. It has way more file types that have been proven to work than I would reasonably expect to be implemented if they were done in pure D. If people reject the idea of the standard library depending on an outside lib, then I would at least ask that the design be such that an outside lib could easily be used in conjunction with std.image.
Jul 06 2015
next sibling parent reply "Suliman" <evermind live.ru> writes:
If people reject
 the idea of the std lib depending on an outside lib, then I 
 would at least ask that the design be such that an outside lib 
 could be easily used in conjunction with std.image.
Yes, it's not a good idea, because you need FreeImage, I need GDAL image support, but all of us need some standard graphics toolkit, at least for simple tasks. So if std.image helps us get there, a lot of people will be very thankful.
Jul 06 2015
parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 7/07/2015 6:04 a.m., Suliman wrote:
 If people reject
 the idea of the std lib depending on an outside lib, then I would at
 least ask that the design be such that an outside lib could be easily
 used in conjunction with std.image.
Yes, it's not a good idea, because you need FreeImage, I need GDAL image support, but all of us need some standard graphics toolkit, at least for simple tasks. So if std.image helps us get there, a lot of people will be very thankful.
My end goal, at least by the end of next year, is to have Devisualization.Window moved into Phobos. From there it is only a matter of time till somebody comes up with a GUI toolkit. After all, they'll have the ability to create windows + contexts on all three main platforms and can draw anything they like.
Jul 06 2015
prev sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 7/07/2015 5:47 a.m., Tofu Ninja wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 So as part of my testing of std.experimental.color I began writing an
 image ala ae.graphics style.
 It's now ready for very initial feedback.
 There is not many image manipulation functions or storage types. Let
 alone image loading/exporting. In fact there is no image file format
 actually working at this point.
 Only the shell for a PNG one is so far written.

 I believe it is ready for initial feedback because I feel it is moving
 towards "controversial" territory in its design. With the file format
 support.
 I just need to know that I am going in the right direction as of now.
 There is no point in continuing the image reading/writing support like
 it is, if nobody likes it.

 Docs: http://cattermole.co.nz/phobosImage/docs/html/
 Source: http://cattermole.co.nz/phobosImage/

 If reviewing the code itself is too much of a hassel, please review
 the specification document.
 http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_specification.html


 Like ae.graphics it supports ranges for manipulating of image storage
 types.
 However it does also use std.experimental.allocator.

 Please destroy!


 Some thoughts of mine:
 - std.zlib will need to support allocators
 - Can std.experimental.allocator be  nogc pleaseeeee?
For my uses, the only thing that I really care about is file saving and loading. Personally I would prefer if the lib depended on something that already works well for image loading and storing like freeImage. It has way more file types that have been proven to work than I would reasonably expect would be implemented if they were done in pure D. If people reject the idea of the std lib depending on an outside lib, then I would at least ask that the design be such that an outside lib could be easily used in conjunction with std.image.
With the current design, as long as you import the implementation and then call the function related to it, you are good to go. I.e.:

import std.experimental.freeimage;
auto myImage = ...;
write("filename", myImage.asFreeImage().toBytes("png")); // ubyte[]

Or at least that is the idea of it. Although it definitely would not be done in Phobos. The whole point is to give a common interface for the D community and to enable swapping these images around, manipulating them as needed.
Jul 06 2015
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Tuesday, 7 July 2015 at 03:46:55 UTC, Rikki Cattermole wrote:
 write("filename", myImage.asFreeImage().toBytes("png")); //
Um, does this line mean that code to support ALL image formats in the library is going to end up in the executable? Selecting which image formats are supported by the application should be an explicit choice done by the application code.
Jul 09 2015
parent "rikki cattermole" <alphaglosined gmail.com> writes:
On Thursday, 9 July 2015 at 23:44:06 UTC, Vladimir Panteleev 
wrote:
 On Tuesday, 7 July 2015 at 03:46:55 UTC, Rikki Cattermole wrote:
 write("filename", myImage.asFreeImage().toBytes("png")); //
Um, does this line mean that code to support ALL image formats in the library is going to end up in the executable? Selecting which image formats are supported by the application should be an explicit choice done by the application code.
I'm not using FreeImage for loading/exporting of images. It was an example of another library managing it, instead of a purely D one. You will be free to use any you wish as long as you can import it. Currently there is no big bad manager for registering image types, because of the M:N problem of registering templated functions while not knowing the template arguments at compile time.
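The M:N problem can be made concrete with a small D sketch (the color types and loader signature are hypothetical, not the actual std.experimental.image design): a runtime registry can only hold concrete function pointers, so every combination of file format and color type would need its own instantiation and registration.

```d
struct RGB8   { ubyte r, g, b; }
struct RGBA16 { ushort r, g, b, a; }

// A loader instantiated for one concrete color type.
alias Loader(Color) = Color[] function(const(ubyte)[] data);

// A runtime registry can only store concrete function pointers,
// so each color type needs its own table, multiplied by every
// file format key ("png", "bmp", ...): M formats x N color types.
Loader!RGB8[string]   rgb8Loaders;
Loader!RGBA16[string] rgba16Loaders;
```

Since the set of color types accepted by a templated loader is open-ended, no finite registry can cover every instantiation at compile time; importing the format module and instantiating directly sidesteps this.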
Jul 09 2015
prev sibling next sibling parent reply "Baz" <bb.temp gmx.com> writes:
On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 So as part of my testing of std.experimental.color I began 
 writing an image ala ae.graphics style.
 It's now ready for very initial feedback.
I don't see some things that immediately come to my mind as "useful" when I think of the image class as it's been done elsewhere.

- Resampling API?

e.g. the old trick to get AA: draw on a double-sized image canvas and resample to half using an interpolator.

image.loadFromFile(...);
image.resize(horzRatio, vertRatio, Resampler.linear);
image.saveToFile(...);

- Canvas API?

myImage.canvas.pen.color = RGBA!ubyte(...);
myImage.canvas.fillRect(...);
etc.

needed to programmatically generate an image.
Jul 06 2015
parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Mon, 06 Jul 2015 20:16:30 +0000, Baz wrote:

 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 So as part of my testing of std.experimental.color I began writing an
 image ala ae.graphics style.
 It's now ready for very initial feedback.
I don't see some things that immediately come to my mind as "useful" when I think of the image class as it's been done elsewhere.

- Resampling API?

e.g. the old trick to get AA: draw on a double-sized image canvas and resample to half using an interpolator.

image.loadFromFile(...);
image.resize(horzRatio, vertRatio, Resampler.linear);
image.saveToFile(...);

- Canvas API?

myImage.canvas.pen.color = RGBA!ubyte(...);
myImage.canvas.fillRect(...);
etc.

needed to programmatically generate an image.
i believe this is out of scope of the library. that is exactly the kind of thing that should be built later with the image library as foundation.
Jul 06 2015
parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 7/07/2015 2:28 p.m., ketmar wrote:
 On Mon, 06 Jul 2015 20:16:30 +0000, Baz wrote:

 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 So as part of my testing of std.experimental.color I began writing an
 image ala ae.graphics style.
 It's now ready for very initial feedback.
I don't see some things that immediatly come to my mind as "usefull" when i think to the image class as it's been done elsewhere. - Resampling API ? e.g the old trick to get AA: draw on double sized image canvas and resample to halh using an interpolator. image.loadFromFile(...); image.resize(horzRatio, vertRatio, Resampler.linear); image.saveToFile(...); - Canvas API ? myImage.canvas.pen.color = RGBA!ubyte(...); myImage.canvas.fillRect(...); etc. needed to programmatically generate an image.
i believe this is out of scope of the library. that is exactly the things that should be build latter with image library as foundation.
Resizing and the like suggested are manipulation functions. Not out of scope, but I most likely will not be implementing them. My goal is only to do the simpler image manipulation functions, as I would have to implement each of them twice: one to work with ranges, the other to wholesale modify the image, which may allocate in the process. Canvas is definitely and 100% out of scope however. It's a great idea and something we need long term. Just not something I can do right now. Also need to add that to the specification document as follow-on work and out of scope.
Jul 06 2015
parent ketmar <ketmar ketmar.no-ip.org> writes:
On Tue, 07 Jul 2015 15:51:13 +1200, Rikki Cattermole wrote:

 Canvas is definitely and 100% out of scope however. It's a great idea
 and something we need long term. Just not something I can do right now.
 Also need to add that to specification document as a follow on work and
 out of scope.
so what primitives you want to include? will there be line joints of various types? wide lines (more than 1px wide)? will it allow antialiasing, and if yes, will it be at least of the quality of anti-grain geometry? etc, etc. do you already see why i believe that canvas is out of scope? it's simply too big a task that should be done separately, using the image lib as foundation.
Jul 07 2015
prev sibling next sibling parent reply ketmar <ketmar ketmar.no-ip.org> writes:
On Mon, 06 Jul 2015 13:48:51 +0000, Rikki Cattermole wrote:

 Please destroy!
have no real opinion, but for me everything is looking good. i don't really care about "ideal design", only about "usable design". and that seems usable.
 Some thoughts of mine:
 - std.zlib will need to support allocators
std.zlib need to stop being crap. i.e. it should be thrown away and completely rewritten. i did that several times, including pure D inflate and compress/decompress that using byte streams. never used std.zlib as is.
Jul 06 2015
next sibling parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 7/07/2015 2:33 p.m., ketmar wrote:
 On Mon, 06 Jul 2015 13:48:51 +0000, Rikki Cattermole wrote:

 Please destroy!
have no real opinion, but for me everything is looking good. i don't really care about "ideal design", only about "usable design". and that seems usable.
 Some thoughts of mine:
 - std.zlib will need to support allocators
std.zlib need to stop being crap. i.e. it should be thrown away and completely rewritten. i did that several times, including pure D inflate and compress/decompress that using byte streams. never used std.zlib as is.
In that case, ketmar is now in charge of writing a new std.compression module for Phobos! Since he already knows these algos well. We can sort out who does the PR itself into Phobos. Most likely I'll have to. We can do reviews on the N.G. instead of on Github just so that he can be responsible for it ;) And yes, I'll happily add it as a dependency.
Jul 06 2015
parent ketmar <ketmar ketmar.no-ip.org> writes:
On Tue, 07 Jul 2015 15:54:15 +1200, Rikki Cattermole wrote:

 In that case, ketmar is now in charge of writing a new std.compression
 module for Phobos! Since he already knows these algos well.
i don't think that i'm ready to code for phobos. coding style can be fixed with dfmt (yet it will require to backport patches from other people, grrrr), but no tool can fix lack of documentation and tests. and i really HAET writing those. the same for writing API parts that should be there "for completeness", but has no use for me. so while i can build a draft, some other dedicated person must do all the other work. and if there is such dedicated person, that person can do that without my draft shitcode, for sure. p.s. writing xpath and css selectors engine for Adam's "dom.d" is much funnier, as i can stop caring about "api completeness", "comprehensive documentation" and so on. if my engine will be able to do more that current "dom.d" engine do, Adam will be happy to merge it, i believe. ;-)
Jul 07 2015
prev sibling parent reply Manu via Digitalmars-d <digitalmars-d puremagic.com> writes:
On 7 July 2015 at 12:33, ketmar via Digitalmars-d
<digitalmars-d puremagic.com> wrote:
 On Mon, 06 Jul 2015 13:48:51 +0000, Rikki Cattermole wrote:

 Please destroy!
have no real opinion, but for me everything is looking good. i don't really care about "ideal design", only about "usable design". and that seems usable.
 Some thoughts of mine:
 - std.zlib will need to support allocators
std.zlib need to stop being crap. i.e. it should be thrown away and completely rewritten. i did that several times, including pure D inflate and compress/decompress that using byte streams. never used std.zlib as is.
What's wrong with zlib? Surely it's possible to produce a d interface for zlib that's nice? Thing is, zlib is possibly the single most popular library in the world, and it gets way more testing, and improvements + security fixes from time to time. Why wouldn't you want to link to the real thing?
Jul 07 2015
next sibling parent "rsw0x" <anonymous anonymous.com> writes:
On Tuesday, 7 July 2015 at 08:54:57 UTC, Manu wrote:
 On 7 July 2015 at 12:33, ketmar via Digitalmars-d 
 <digitalmars-d puremagic.com> wrote:
 [...]
What's wrong with zlib? Surely it's possible to produce a d interface for zlib that's nice? Thing is, zlib is possibly the single most popular library in the world, and it gets way more testing, and improvements + security fixes from time to time. Why wouldn't you want to link to the real thing?
D community suffers from NIH syndrome
Jul 07 2015
prev sibling parent ketmar <ketmar ketmar.no-ip.org> writes:
On Tue, 07 Jul 2015 18:54:45 +1000, Manu via Digitalmars-d wrote:

 std.zlib need to stop being crap. i.e. it should be thrown away and
 completely rewritten. i did that several times, including pure D
 inflate and compress/decompress that using byte streams. never used
 std.zlib as is.
What's wrong with zlib? Surely it's possible to produce a d interface for zlib that's nice? Thing is, zlib is possibly the single most popular library in the world, and it gets way more testing, and improvements + security fixes from time to time. Why wouldn't you want to link to the real thing?
nothing wrong with *zlib*, it's *std.zlib* sux. but not the `etc.c.zlib` interface, which i used in some of my stream codecs.
Jul 07 2015
prev sibling next sibling parent reply "Gregor =?UTF-8?B?TcO8Y2tsIg==?= <gregormueckl gmx.de> writes:
On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 Please destroy!
You asked for it! :)

As a reference to a library that is used to handle images on a professional level (VFX industry), I'd encourage you to look at the feature set and interfaces of OpenImageIO. Sure, it's a big library and some of it is definitely out of scope for what you try to accomplish (image tile caching and texture sampling, obviously). Yet, there are some features I specifically want to mention here to challenge the scope of your design:

- arbitrary channel layouts in images: this is a big one. You mention 3D engines as a targeted use case in the specification. 3D rendering is one of the worst offenders when it comes to crazy channel layouts in textures (which are obviously stored as image files). If you have a data texture that requires 2 channels (e.g. uv offsets for texture lookups in shaders or some crazy data tables), its memory layout should also only ever have two channels. Don't expand it to RGB transparently or anything else braindead. Don't change the data type of the pixel values wildly without being asked to do so. The developer most likely has chosen a 16 bit signed integer per channel (or whatever else) for a good reason. Some high end file formats like OpenEXR even allow users to store completely arbitrary channels as well, often with a different per-channel data format (leading to layouts like RGBAZ with an additional mask channel on top). But support for that really bloats image library interfaces. I'd stick with a sane variant of the uncompressed texture formats that the OpenGL specification lists as the target set of supported in-memory image formats. That mostly matches current GPU hardware support and probably will for some time to come.

- padding and memory alignment: depending on the platform, image format and task at hand you may want the in-memory layout of your image to be padded in various ways. For example, you would want your scanlines and pixel values aligned to certain offsets to make use of SIMD instructions, which often carry alignment restrictions with them. This is one of the reasons why RGB images are sometimes expanded to have a dummy channel between the triplets. Also, aligning the start of each scanline may be important, which introduces a "pitch" between them that is greater than just the storage size of each scanline by itself. Again, this may help speed up image processing.

- subimages: this one may seem obscure, but it happens in a number of common file formats (gif, mng, DDS, probably TIFF and others). Subimages can be - for instance - individual animation frames or precomputed mipmaps. This means that they may have metadata attached to them (e.g. framerate or delay to next frame) or they may come in totally different dimensions (mipmap levels).

- window regions: now this is not quite your average image format feature, but relevant for some use cases. The gist of it is that the image file may define a coordinate system for a whole image frame but only contain actual data within certain regions that do not cover the whole frame. These regions may even extend beyond the defined image frame (used e.g. for VFX image postprocessing to have properly defined pixel values to filter into the visible part of the final frame). Again, the OpenEXR documentation explains this feature nicely. Again, I think this likely is out of scope for this library.

My first point also leads me to this criticism:

- I do not see a way to discover the actual data format of a PNG file through your loader. Is it 8 bit palette-based, 8 bit per pixel or 16 bits per pixel? Especially the latter should not be transparently converted to 8 bits per pixel if encountered, because it is a lossy transformation. As I see it right now you have to know the pixel format up front to instantiate the loader. I consider that bad design. You can only have true knowledge of the file contents after the image header was parsed. The same is generally true of most actually useful image formats out there.

- Could support for image data alignment be added by defining a new ImageStorage subclass? The actual in-memory data is not exposed to direct access, is it? Access to the raw image data would be preferable for those cases where you know exactly what you are doing. Going through per-pixel access functions for large image regions is going to be dreadfully slow in comparison to what can be achieved with proper processing/filtering code.

- Also, uploading textures to the GPU requires passing raw memory blocks and a format description of sorts to the 3D API. Being required to slowly copy the image data in question into a temporary buffer for this process is not an adequate solution.

Let me know what you think!
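As a concrete illustration of the scanline pitch point above, here is a hedged sketch (the function name is made up, not part of any proposed API): each row is rounded up to an alignment boundary, so the pitch is greater than width * bytesPerPixel.

```d
// Round a scanline up to the next multiple of `alignment` bytes,
// so each row starts on e.g. a 16-byte boundary for SIMD loads.
size_t alignedPitch(size_t width, size_t bytesPerPixel, size_t alignment)
{
    immutable raw = width * bytesPerPixel;
    return (raw + alignment - 1) / alignment * alignment;
}

unittest
{
    // 100px-wide RGB8: 300 raw bytes, padded to 304 for 16-byte alignment.
    assert(alignedPitch(100, 3, 16) == 304);
    // Pixel (x, y) is then addressed at y * pitch + x * bytesPerPixel.
}
```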
Jul 08 2015
parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 9/07/2015 6:07 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 Please destroy!
You asked for it! :) As a reference to a library that is used to handle images on a professional level (VFX industry), I'd encourage you to look at the feature set and interfaces of OpenImageIO. Sure, it's a big library and some of it is definitely out of scope for what you try to accomplish (image tile caching and texture sampling, obviously). Yet, there are some features I specifically want to mention here to challenge the scope of your design: - arbitrary channel layouts in images: this is a big one. You mention 3D engines as a targeted use case in the specification. 3D rendering is one of the worst offenders when it comes to crazy channel layouts in textures (which are obviously stored as image files). If you have a data texture that requires 2 channels (e.g. uv offsets for texture lookups in shaders or some crazy data tables), its memory layout should also only ever have two channels. Don't expand it to RGB transparently or anything else braindead. Don't change the data type of the pixel values wildly without being asked to do so. The developer most likely has chosen a 16 bit signed integer per channel (or whatever else) for a good reason. Some high end file formats like OpenEXR even allow users to store completely arbitrary channels as well, often with a different per-channel data format (leading to layouts like RGBAZ with an additional mask channel on top). But support for that really bloats image library interfaces. I'd stick with a sane variant of the uncompressed texture formats that the OpenGL specification lists as the target set of supported in-memory image formats. That mostly matches current GPU hardware support and probably will for some time to come.
As long as the color implementation matches isColor from std.experimental.color, then it's a color. I do not handle that :) The rest of how it maps in memory is defined by the image storage types. Any image loader/exporter can use any, as long as you specify it via a template argument *currently*.
 - padding and memory alignment: depending on the platform, image format
 and task at hand you may want the in-memory layout of your image to be
 padded in various ways. For example, you would want your scanlines and
 pixel values aligned to certain offsets to make use of SIMD instructions
 which often carry alignment restrictions with them. This is one of the
 reasons why RGB images are sometimes expanded to have a dummy channel
 between the triplets. Also, aligning the start of each scanline may be
 important, which introduces a "pitch" between them that is greater than
 just the storage size of each scanline by itself. Again, this may help
 speeding up image processing.
Image storage type implementation, not my problem. They can be added later to support padding etc.
 - subimages: this one may seem obscure, but it happens in a number
 common of file formats (gif, mng, DDS, probably TIFF and others).
 Subimages can be - for instance - individual animation frames or
 precomputed mipmaps. This means that they may have metadata attached to
 them (e.g. framerate or delay to next frame) or they may come in totally
 different dimensions (mipmap levels).
Ahhh, this. I can add it fairly easily. A struct that is a sub image of another. Any extra data would have to be alias this'd.
 - window regions: now this not quite your average image format feature,
 but relevant for some use cases. The gist of it is that the image file
 may define a coordinate system for a whole image frame but only contain
 actual data within certain regions that do not cover the whole frame.
 These regions may even extend beyond the defined image frame (used e.g.
 for VFX image postprocessing to have properly defined pixel values to
 filter into the visible part of the final frame). Again, the OpenEXR
 documentation explains this feature nicely. Again, I think this likely
 is out of scope for this library.
Ugh, based upon what you said, that is out of scope of the image loaders/exporters that I'm writing. Also, right now it is only using unsigned integers for coordinates. I'm guessing it could go negative if it is outside the bounds, then. Slightly too specialized for what we need in the general case.
 My first point also leads me to this criticism:

 - I do not see a way to discover the actual data format of a PNG file
 through your loader. Is it 8 bit palette-based, 8 bit per pixel or 16
 bits per pixel? Especially the latter should not be transparently
 converted to 8 bits per pixel if encountered because it is a lossy
 transformation. As I see it right now you have to know the pixel format
 up front to instantiate the loader. I consider that bad design. You can
 only have true knowledge of the file contents after the image header
 were parsed. The same is generally true of most actually useful image
 formats out there.
The reasoning is that this is what I know I can work with. You specify what you want to use; it'll auto convert after that. It makes user code a bit simpler. However, if you can convince Manu Evans to add some sort of union color type that can hold many different sizes for e.g. RGB, then I'm all for it. Although I would consider this to be a _bad_ feature.
 - Could support for image data alignment be added by defining a new
 ImageStorage subclass? The actual in-memory data is not exposed to
 direct access, is it? Access to the raw image data would be preferable
 for those cases where you know exactly what you are doing. Going through
 per-pixel access functions for large image regions is going to be
 dreadfully slow in comparison to what can be achieved with proper
 processing/filtering code.
I ugh... had this feature once. I removed it because, if you already know the implementation, why not just directly access it? But if there is genuine need to get access to it as e.g. void* then I can do it again.
 - Also, uploading textures to the GPU requires passing raw memory blocks
 and a format description of sorts to the 3D API. Being required to
 slowly copy the image data in question into a temporary buffer for this
 process is not an adequate solution.
Again, as per the previous answer, it was possible no matter what the image storage type was. But it was hairy and could and would cause bugs in the future. You're probably better off knowing the type and getting access directly to it that way. Some very good points that I believe definitely needed to be touched upon were had. I've had a read of the OpenImageIO documentation and all I can say is irkkk. Most of what is in there, e.g. tiles and reflection-style methods, is outright out of scope, as it is a bit specialized for my liking. If somebody wants to add it, take a look at the offset support. It was written as an 'extension', like ForwardRange is for ranges. The purpose of this library is to work more for GUIs and games than anything else. It is meant more for the average programmer than specialized imagery ones. It's kinda why I wrote the specification document. Just so in the future, if somebody who does know and use these kinds of libraries comes in saying it's awful, they will have to understand that it was out of scope and was done on purpose.
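For readers unfamiliar with the 'extension' pattern referred to above: it works like isForwardRange refining isInputRange in std.range, i.e. a compile-time predicate tests a type for extra capabilities. A rough sketch with hypothetical names (offsetX/offsetY are illustrative only, not the actual std.experimental.image API):

```d
// Compile-time test for an optional capability, in the style of
// std.range's isForwardRange: the type qualifies if the code compiles.
enum isOffsetImage(T) = is(typeof((T t) {
    size_t x = t.offsetX; // hypothetical extension members
    size_t y = t.offsetY;
}));

void process(Image)(ref Image img)
{
    static if (isOffsetImage!Image)
    {
        // fast path: use the offset extension directly
    }
    else
    {
        // generic fallback via per-pixel access
    }
}
```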
Jul 08 2015
parent reply "Gregor Mückl" <gregormueckl gmx.de> writes:
On Thursday, 9 July 2015 at 04:09:11 UTC, Rikki Cattermole wrote:
On 9/07/2015 6:07 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 Please destroy!
You asked for it! :) As a reference to a library that is used to handle images on a professional level (VFX industry), I'd encourage you to look at the feature set and interfaces of OpenImageIO. Sure, it's a big library and some of it is definitely out of scope for what you try to accomplish (image tile caching and texture sampling, obviously). Yet, there are some features I specifically want to mention here to challenge the scope of your design: - arbitrary channel layouts in images: this is a big one. You mention 3D engines as a targeted use case in the specification. 3D rendering is one of the worst offenders when it comes to crazy channel layouts in textures (which are obviously stored as image files). If you have a data texture that requires 2 channels (e.g. uv offsets for texture lookups in shaders or some crazy data tables), its memory layout should also only ever have two channels. Don't expand it to RGB transparently or anything else braindead. Don't change the data type of the pixel values wildly without being asked to do so. The developer most likely has chosen a 16 bit signed integer per channel (or whatever else) for a good reason. Some high end file formats like OpenEXR even allow users to store completely arbitrary channels as well, often with a different per-channel data format (leading to layouts like RGBAZ with an additional mask channel on top). But support for that really bloats image library interfaces. I'd stick with a sane variant of the uncompressed texture formats that the OpenGL specification lists as the target set of supported in-memory image formats. That mostly matches current GPU hardware support and probably will for some time to come.
As long as the color implementation matches isColor from std.experimental.color. Then it's a color. I do not handle that :) The rest of how it maps in memory is defined by the image storage types. Any image loader/exporter can use any as long as you specify it via a template argument *currently*.
Hm... in that case you introduce transparent mappings between the user-facing types and the internal representation, which may be lossy in various ways. This works, but the internal type should be discoverable somehow. This leads down a similar road to OpenGL texture formats: they have internal storage formats and there are the host formats to/from which the data is converted when passing back and forth. This adds a lot of complexity and potential for surprises, unfortunately. I'm not entirely sure what to think here.
 - window regions: now this not quite your average image format 
 feature,
 but relevant for some use cases. The gist of it is that the 
 image file
 may define a coordinate system for a whole image frame but 
 only contain
 actual data within certain regions that do not cover the whole 
 frame.
 These regions may even extend beyond the defined image frame 
 (used e.g.
 for VFX image postprocessing to have properly defined pixel 
 values to
 filter into the visible part of the final frame). Again, the 
 OpenEXR
 documentation explains this feature nicely. Again, I think 
 this likely
 is out of scope for this library.
Ugh based upon what you said, that is out of scope of the image loader/exporters that I'm writing. Also right now it is only using unsigned integers for coordinates. I'm guessing if it is outside the bounds it can go negative then. Slightly too specialized for what we need in the general case.
Yes, this is a slightly special use case. I can think of quite a lot of cases where you would want border regions of some kind for what you are doing, but they are all related to rendering and image processing.
 My first point also leads me to this criticism:

 - I do not see a way to discover the actual data format of a 
 PNG file
 through your loader. Is it 8 bit palette-based, 8 bit per 
 pixel or 16
 bits per pixel? Especially the latter should not be 
 transparently
 converted to 8 bits per pixel if encountered because it is a 
 lossy
 transformation. As I see it right now you have to know the 
 pixel format
 up front to instantiate the loader. I consider that bad 
 design. You can
 only have true knowledge of the file contents after the image 
 header
 were parsed. The same is generally true of most actually 
 useful image
 formats out there.
The reasoning is because this is what I know I can work with. You specify what you want to use, it'll auto convert after that. It makes user code a bit simpler.
I can understand your reasoning and this is why libraries like FreeImage make it very simple to get the image data converted to the format you want from an arbitrary input. What I'd like to see is more of an extension of the current mechanism: make it possible to query the data format of the image file. That way, the application can make a wiser decision on the format in which it wants to receive the data, but it always is able to get the data in a format it understands. The format description for the file format would have to be quite complex to cover all possibilities, though. The best that I can come up with is a list of tuples of channel names (as strings) and data type (as enums). Processing those isn't fun, though.
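The (channel name, data type) tuple list suggested above might look like this in D (all names here are illustrative, not an actual proposal from either library):

```d
enum ChannelType { u8, u16, s16, f16, f32 }

struct ChannelDesc
{
    string name;      // "R", "G", "B", "A", "Z", "mask", ...
    ChannelType type; // per-channel storage format
}

// An OpenEXR-style RGBAZ layout, half floats for color plus a
// 32-bit float depth channel, as a loader might report it:
immutable ChannelDesc[] layout = [
    {"R", ChannelType.f16}, {"G", ChannelType.f16},
    {"B", ChannelType.f16}, {"A", ChannelType.f16},
    {"Z", ChannelType.f32},
];
```

The application would inspect such a list after the header is parsed and only then decide which pixel format to request.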
 - Could support for image data alignment be added by defining 
 a new
 ImageStorage subclass? The actual in-memory data is not 
 exposed to
 direct access, is it? Access to the raw image data would be 
 preferable
 for those cases where you know exactly what you are doing. 
 Going through
 per-pixel access functions for large image regions is going to 
 be
 dreadfully slow in comparison to what can be achieved with 
 proper
 processing/filtering code.
I ugh... had this feature once. I removed it as if you already know the implementation why not just directly access it? But, if there is genuine need to get access to it as e.g. void* then I can do it again.
 - Also, uploading textures to the GPU requires passing raw 
 memory blocks
 and a format description of sorts to the 3D API. Being 
 required to
 slowly copy the image data in question into a temporary buffer 
 for this
 process is not an adequate solution.
Again for previous answer, was possible. No matter what the image storage type was. But it was hairy and could and would cause bugs in the future. Your probably better off knowing the type and getting access directly to it that way.
This is where the abstraction of ImageStorage with several possible implementations becomes iffy. The user is at the loader's mercy to hopefully hand over the right implementation type. I'm not sure I like that idea. This seems inconsistent with making the pixel format the user's choice. Why should the user have choice over one thing and not the other?
 Some very good points that I believe definitely needed to be
 touched upon were had.

 I've had a read of OpenImageIO documentation and all I can say 
 is irkkk.
 Most of what is in there with e.g. tiles and reflection styles 
 methods are out of scope out right as they are a bit 
 specialized for my liking. If somebody wants to add it, take a 
 look at the offset support. It was written as an 'extension' 
 like ForwardRange is for ranges.
I mentioned OpenImageIO because that library is full-featured and very complete in a lot of areas. It shows what it takes to be as flexible as possible regarding the image data that is processed. Take it as a catalog of things to consider, but not as a template.
 The purpose of this library is to work more for GUIs and games 
 than anything else. It is meant more for the average programmer 
 than for ones specialized in imagery. It's kinda why I wrote the 
 specification document: so that in the future, if somebody who 
 does know and use these kinds of libraries comes in saying it's 
 awful, they will have to understand that it was out of scope and 
 done on purpose.
Having a specification is a good thing and this is why I entered the discussion. Although your specification is still a bit vague in my opinion, the general direction is good. The limitation of the scope looks fine to me and I'm not arguing against that. My point is rather that your design can still be improved to match that scope better.
Jul 09 2015
parent reply Rikki Cattermole <alphaglosined gmail.com> writes:
On 10/07/2015 2:07 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 On Thursday, 9 July 2015 at 04:09:11 UTC, Rikki Cattermole wrote:
 On 9/07/2015 6:07 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 Please destroy!
You asked for it! :) As a reference to a library that is used to handle images on a professional level (VFX industry), I'd encourage you to look at the feature set and interfaces of OpenImageIO. Sure, it's a big library and some of it is definitely out of scope for what you try to accomplish (image tile caching and texture sampling, obviously). Yet, there are some features I specifically want to mention here to challenge the scope of your design: - arbitrary channel layouts in images: this is a big one. You mention 3D engines as a targeted use case in the specification. 3D rendering is one of the worst offenders when it comes to crazy channel layouts in textures (which are obviously stored as image files). If you have a data texture that requires 2 channels (e.g. uv offsets for texture lookups in shaders or some crazy data tables), its memory layout should also only ever have two channels. Don't expand it to RGB transparently or anything else braindead. Don't change the data type of the pixel values wildly without being asked to do so. The developer most likely has chosen a 16 bit signed integer per channel (or whatever else) for a good reason. Some high end file formats like OpenEXR even allow users to store completely arbitrary channels as well, often with a different per-channel data format (leading to layouts like RGBAZ with an additional mask channel on top). But support for that really bloats image library interfaces. I'd stick with a sane variant of the uncompressed texture formats that the OpenGL specification lists as the target set of supported in-memory image formats. That mostly matches current GPU hardware support and probably will for some time to come.
As long as the color implementation matches isColor from std.experimental.color, then it's a color. I do not handle that :) The rest of how it maps in memory is defined by the image storage types. Any image loader/exporter can use any of them, as long as you specify it via a template argument *currently*.
Hm... in that case you introduce transparent mappings between user-facing types and the internal mapping which may be lossy in various ways. This works, but the internal type should be discoverable somehow. This leads down a similar road to OpenGL texture formats: they have internal storage formats and there's the host formats to/from which the data is converted when passing back and forth. This adds a lot of complexity and potential for surprises, unfortunately. I'm not entirely sure what to think here.
The internal color of an image storage type is well known at compile time. Now, SwappableImage, which wraps another image type, definitely muddies the water a lot, since it auto-converts from the original format. Which could be, you know, messy. It's actually the main reason I asked Manu for gain/loss precision functions: for detecting if precision is being changed, mostly for logging purposes.
 - window regions: now this not quite your average image format feature,
 but relevant for some use cases. The gist of it is that the image file
 may define a coordinate system for a whole image frame but only contain
 actual data within certain regions that do not cover the whole frame.
 These regions may even extend beyond the defined image frame (used e.g.
 for VFX image postprocessing to have properly defined pixel values to
 filter into the visible part of the final frame). Again, the OpenEXR
 documentation explains this feature nicely. Again, I think this likely
 is out of scope for this library.
Ugh, based upon what you said, that is out of scope for the image loaders/exporters that I'm writing. Also, right now it only uses unsigned integers for coordinates; I'm guessing that outside the bounds they would have to go negative, then. Slightly too specialized for what we need in the general case.
Yes, this is a slightly special use case. I can think of quite a lot of cases where you would want border regions of some kind for what you are doing, but they are all related to rendering and image processing.
You have convinced me that I need to add a subimage struct which is basically SwappableImage, just with offset/size different from the original.
 My first point also leads me to this criticism:

 - I do not see a way to discover the actual data format of a PNG file
 through your loader. Is it 8 bit palette-based, 8 bit per pixel or 16
 bits per pixel? Especially the latter should not be transparently
 converted to 8 bits per pixel if encountered because it is a lossy
 transformation. As I see it right now you have to know the pixel format
 up front to instantiate the loader. I consider that bad design. You can
 only have true knowledge of the file contents after the image header
 were parsed. The same is generally true of most actually useful image
 formats out there.
The reasoning is that this is what I know I can work with. You specify what you want to use, and it'll auto-convert after that. It makes user code a bit simpler.
I can understand your reasoning and this is why libraries like FreeImage make it very simple to get the image data converted to the format you want from an arbitrary input. What I'd like to see is more of an extension of the current mechanism: make it possible to query the data format of the image file. That way, the application can make a wiser decision on the format in which it wants to receive the data, but it always is able to get the data in a format it understands. The format description for the file format would have to be quite complex to cover all possibilities, though. The best that I can come up with is a list of tuples of channel names (as strings) and data type (as enums). Processing those isn't fun, though.
The problem here is simple. You must know what color type you are going to be working with; there is no guessing. If you want to change it to match the file better, you'll have to load the file twice, and then you have to understand the file format's internals a bit more. This is kinda where it gets messy. But would it be better if you could just parse the headers, so it doesn't initialize the image data? I doubt it would be all that hard; it's just disabling a series of features.
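For what it's worth, header-only parsing is cheap for a format like PNG. A minimal sketch of pulling the data format out of a file without touching any pixel data (the struct and function names are made up; the byte offsets come from the PNG specification, which requires IHDR to be the first chunk):

```d
import std.bitmanip : bigEndianToNative;

struct PngInfo { uint width, height; ubyte bitDepth, colorType; }

// Reads only the IHDR chunk, so the pixel data never needs to be
// decompressed or the image storage initialized.
PngInfo parsePngHeader(const(ubyte)[] data)
{
    // 8-byte signature, 4-byte chunk length, then the "IHDR" tag.
    assert(data.length >= 26, "too short to contain an IHDR chunk");
    assert(data[12 .. 16] == cast(const(ubyte)[])"IHDR");

    ubyte[4] w = data[16 .. 20]; // width, big-endian
    ubyte[4] h = data[20 .. 24]; // height, big-endian
    return PngInfo(bigEndianToNative!uint(w),
                   bigEndianToNative!uint(h),
                   data[24], data[25]); // bit depth, colour type
}
```

With something like this, the caller can decide on a color type before asking the loader to decode anything.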
 - Could support for image data alignment be added by defining a new
 ImageStorage subclass? The actual in-memory data is not exposed to
 direct access, is it? Access to the raw image data would be preferable
 for those cases where you know exactly what you are doing. Going through
 per-pixel access functions for large image regions is going to be
 dreadfully slow in comparison to what can be achieved with proper
 processing/filtering code.
I, ugh... had this feature once. I removed it because, if you already know the implementation, why not just access it directly? But if there is genuine need to get access to it as e.g. void*, then I can do it again.
 - Also, uploading textures to the GPU requires passing raw memory blocks
 and a format description of sorts to the 3D API. Being required to
 slowly copy the image data in question into a temporary buffer for this
 process is not an adequate solution.
Again, as per the previous answer, it was possible, no matter what the image storage type was. But it was hairy, and could and would cause bugs in the future. You're probably better off knowing the type and getting access directly to it that way.
This is where the abstraction of ImageStorage with several possible implementations becomes iffy. The user is at the loader's mercy to hopefully hand over the right implementation type. I'm not sure I like that idea. This seems inconsistent with making the pixel format the user's choice. Why should the user have choice over one thing and not the other?
If the image loader uses another image storage type then it is misbehaving. There is no excuse for it. Anyway, the main thing to understand about this is that if the image loader does not initialize the storage, then it would have to resize it, and not all image storage types have to support resizing...
 Some very good points that I believe definitely needed to be
 touched upon were had.

 I've had a read of the OpenImageIO documentation and all I can say
 is irkkk.
 Most of what is in there, e.g. tiles and reflection-style methods,
 is outright out of scope, as it is a bit specialized for my
 liking. If somebody wants to add it, take a look at the offset
 support. It was written as an 'extension', like ForwardRange is for
 ranges.
I mentioned OpenImageIO because that library is full-featured and very complete in a lot of areas. It shows what it takes to be as flexible as possible regarding the image data that is processed. Take it as a catalog of things to consider, but not as a template.
 The purpose of this library is to work more for GUIs and games than
 anything else. It is meant more for the average programmer than for
 ones specialized in imagery. It's kinda why I wrote the
 specification document: so that in the future, if somebody who does
 know and use these kinds of libraries comes in saying it's awful,
 they will have to understand that it was out of scope and done on
 purpose.
Having a specification is a good thing and this is why I entered the discussion. Although your specification is still a bit vague in my opinion, the general direction is good. The limitation of the scope looks fine to me and I'm not arguing against that. My point is rather that your design can still be improved to match that scope better.
Yeah indeed. Any tips for specification document improvement? I would love to make it standard for Phobos additions like this.
Jul 09 2015
next sibling parent reply "Márcio Martins" <marcioapm gmail.com> writes:
On Thursday, 9 July 2015 at 15:05:12 UTC, Rikki Cattermole wrote:
 On 10/07/2015 2:07 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 [...]
Like Gregor, I think it's unreasonable to do any automatic conversions at all without being asked to. This will greatly reduce the usability of this library. We need to solve the problem of getting from a file format on disk into a color format in memory. Say I have an image already stored and preprocessed in a format I like, and I want to get it as quickly as possible into a GPU buffer. Similarly, there are many use cases for an image library that do not touch individual pixels at all, so doing any sort of conversion at load time is basically telling those people to look elsewhere, if they care about efficiency.

The most efficient way is a low-level 2-step interface:
1. Open the file and read the headers (from disk, from a memory buffer, or from a byte range). At this point, users know the color format and the image dimensions, so they can allocate their buffers, check what formats the GPU supports, or just otherwise assess whether conversion is needed.
2. Decode into a user-supplied buffer, potentially with color format conversion, if requested. This is important.

At this point, we have a buffer with known dimensions and color format. Some very useful manipulations can be achieved without knowing anything about the color format except for the bit size. Examples are flipping, rotations by a multiple of PI/2, cropping, etc...

On top of this, one can create all sorts of easy-to-use functions for all the remaining use cases, but this sort of low-level access is really important for any globally useful library. Some users just cannot afford any sort of extra unnecessary copying and/or conversions.

I also think we should be able to load all the meta information on demand. This is extremely valuable, but the use cases are so diverse that it doesn't make sense to implement more than just discovering all this metadata and letting users do with it what they will.

The most important thing is to get the interface right and lightweight.
If we can get away with no dependencies then it's even better.
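The two-step interface described above could be sketched in D roughly as follows (every name is invented for illustration; the bodies are omitted since this is only the shape of the API, not an implementation, and none of it exists in std.experimental.image):

```d
// Step 1: open the file and read only the headers, so the caller can
// allocate its own buffer and decide whether conversion is needed at all.
struct ImageHeader {
    size_t width, height;
    uint channelCount;    // on-disk channel layout...
    uint bitsPerChannel;  // ...and per-channel bit depth
}

// Declaration only: parses headers, never touches pixel data.
ImageHeader readHeader(const(ubyte)[] file);

// Step 2: decode into a caller-supplied buffer. Conversion to Color
// happens only because the caller explicitly chose Color.
void decodeInto(Color)(const(ubyte)[] file, Color[] destination);
```

A GPU upload path would then call readHeader, allocate or map the GPU buffer, and decodeInto it directly, with no intermediate copy.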
Jul 09 2015
parent "rikki cattermole" <alphaglosined gmail.com> writes:
I drafted a reply to this, then had time to think about it. So 
new answer here.

On Thursday, 9 July 2015 at 16:10:28 UTC, Márcio Martins wrote:
 [...]
Like Gregor, I think it's unreasonable to do any automatic conversions at all without being asked to. This will greatly reduce the usability of this library.
We have very different views about being asked to convert. SwappableImage does it if the color type is not the same. File exporters only do it if the fields in the header ask them to (or will). File readers are the only use case right now that doesn't.

What I've been sold on is optionally separating out the reading of headers from the reading of the data. In other words, in the general use case it'll just read and give data as normal, along with auto-convert capabilities. But for the use case you are describing, it will read only the headers (not templated, as it does not need to know what color type you are using). Then you can read the file again (with data) using the appropriate color type. It'll be interesting how I solve this, since they are intertwined a little too much. But hey, void is a good color type internally, right? Although a struct would probably be a better idea, as it gives a nice name to the purpose. Or even an interface *shrugs*.
 [...]
Jul 09 2015
prev sibling parent reply "Gregor Mückl" <gregormueckl gmx.de> writes:
On Thursday, 9 July 2015 at 15:05:12 UTC, Rikki Cattermole wrote:
 On 10/07/2015 2:07 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 On Thursday, 9 July 2015 at 04:09:11 UTC, Rikki Cattermole wrote:
 On 9/07/2015 6:07 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole 
 wrote:
 Please destroy!
[...]
As long as the color implementation matches isColor from std.experimental.color. Then it's a color. I do not handle that :) The rest of how it maps in memory is defined by the image storage types. Any image loader/exporter can use any as long as you specify it via a template argument *currently*.
Hm... in that case you introduce transparent mappings between user-facing types and the internal mapping which may be lossy in various ways. This works, but the internal type should be discoverable somehow. This leads down a similar road to OpenGL texture formats: they have internal storage formats and there's the host formats to/from which the data is converted when passing back and forth. This adds a lot of complexity and potential for surprises, unfortunately. I'm not entirely sure what to think here.
The internal color of an image storage type is well known at compile time. Now, SwappableImage wraps another image type, and that definitely muddies the water a lot, since it auto-converts from the original format. Which could be, you know, messy. It's actually the main reason I asked Manu for gain/loss precision functions: for detecting if precision is being changed, mostly for logging purposes.
I haven't encountered SwappableImage anywhere. Did I miss anything?
 - window regions: now this not quite your average image 
 format feature,
 but relevant for some use cases. The gist of it is that the 
 image file
 may define a coordinate system for a whole image frame but 
 only contain
 actual data within certain regions that do not cover the 
 whole frame.
 These regions may even extend beyond the defined image frame 
 (used e.g.
 for VFX image postprocessing to have properly defined pixel 
 values to
 filter into the visible part of the final frame). Again, the 
 OpenEXR
 documentation explains this feature nicely. Again, I think 
 this likely
 is out of scope for this library.
Ugh, based upon what you said, that is out of scope for the image loaders/exporters that I'm writing. Also, right now it is only using unsigned integers for coordinates; I'm guessing coordinates would have to be able to go negative then, to extend beyond the bounds. Slightly too specialized for what we need in the general case.
Yes, this is a slightly special use case. I can think of quite a lot of cases where you would want border regions of some kind for what you are doing, but they are all related to rendering and image processing.
You have convinced me that I need to add a subimage struct which is basically SwappableImage, just with an offset/size different from the original.
Again, I unfortunately fail to follow. Sorry. I wanted to argue that window regions are out of scope, unlike subimages (which actually are completely separate images within the same file - they do not share a single coordinate system).
 My first point also leads me to this criticism:

 - I do not see a way to discover the actual data format of a 
 PNG file
 through your loader. Is it 8 bit palette-based, 8 bit per 
 pixel or 16
 bits per pixel? Especially the latter should not be 
 transparently
 converted to 8 bits per pixel if encountered because it is a 
 lossy
 transformation. As I see it right now you have to know the 
 pixel format
 up front to instantiate the loader. I consider that bad 
 design. You can
 only have true knowledge of the file contents after the 
 image header
 were parsed. The same is generally true of most actually 
 useful image
 formats out there.
The reasoning is that this is what I know I can work with. You specify what you want to use, and it'll auto-convert after that. It makes user code a bit simpler.
I can understand your reasoning and this is why libraries like FreeImage make it very simple to get the image data converted to the format you want from an arbitrary input. What I'd like to see is more of an extension of the current mechanism: make it possible to query the data format of the image file. That way, the application can make a wiser decision on the format in which it wants to receive the data, but it always is able to get the data in a format it understands. The format description for the file format would have to be quite complex to cover all possibilities, though. The best that I can come up with is a list of tuples of channel names (as strings) and data type (as enums). Processing those isn't fun, though.
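A sketch of what such a format description might look like in D (all names here are hypothetical, not part of any existing API):

```d
// Hypothetical: a queryable description of the data actually stored in a file.
enum ChannelType { uint8, uint16, int16, float16, float32 }

struct ChannelDescription {
    string name;      // e.g. "R", "G", "B", "A", "Z", "mask"
    ChannelType type; // per-channel storage type
}

struct ImageFormatInfo {
    size_t width, height;
    ChannelDescription[] channels;
}

// User code could inspect the channels before deciding on a color type, e.g.:
// auto info = queryFormat(input);
// bool is16BitRGBA = info.channels.length == 4 &&
//                    info.channels[0].type == ChannelType.uint16;
```

Processing a fully general list like this is indeed unpleasant, but most applications would only check a handful of common layouts and fall back to a conversion otherwise.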
The problem here is simple: you must know what color type you are going to be working with. There is no guessing. If you want to change it to match the file loader better, you'll have to load it twice, and then you have to understand the file format internals a bit more. This is kinda where it gets messy.
Messy in what way? My conceptual view of the user code is that it could do either of two things:

1. ask for an image in a format the user specifies - the loader must meet that request exactly and do all required conversions
2. ask the loader for the image data format contained in the file, choose a representation that best matches it in the user's view and only then ask the loader as in 1.

Note that the user gets exactly what he asked for in each case, but makes an informed decision in 2. only because he chose to.
 But, would it be better if you could just parse the headers? So 
 it doesn't initialize the image data. I doubt it would be all 
 that hard. It's just disabling a series of features.
That's what this would boil down to, I believe: a separate entry point for the loader that just parses the headers and returns file format information to the user, but not the image data itself. The user-facing interface would just be an extra interface method. Internally, it would share most of the code with the actual image loading code path.
 - Could support for image data alignment be added by 
 defining a new
 ImageStorage subclass? The actual in-memory data is not 
 exposed to
 direct access, is it? Access to the raw image data would be 
 preferable
 for those cases where you know exactly what you are doing. 
 Going through
 per-pixel access functions for large image regions is going 
 to be
 dreadfully slow in comparison to what can be achieved with 
 proper
 processing/filtering code.
I ugh... had this feature once. I removed it, reasoning that if you already know the implementation, why not just access it directly? But if there is a genuine need to get access to it as e.g. void*, then I can add it again.
 - Also, uploading textures to the GPU requires passing raw 
 memory blocks
 and a format description of sorts to the 3D API. Being 
 required to
 slowly copy the image data in question into a temporary 
 buffer for this
 process is not an adequate solution.
Again, as in the previous answer, it was possible, no matter what the image storage type was. But it was hairy and could and would cause bugs in the future. You're probably better off knowing the type and getting access directly to it that way.
This is where the abstraction of ImageStorage with several possible implementations becomes iffy. The user is at the loader's mercy to hopefully hand over the right implementation type. I'm not sure I like that idea. This seems inconsistent with making the pixel format the user's choice. Why should the user have choice over one thing and not the other?
If the image loader uses another image storage type then it is misbehaving. There is no excuse for it.
Am I looking at an old version? The PNG loader I see documented returns a specific ImageStorage implementation.
 Anyway the main thing about this to understand is that if the 
 image loader does not initialize, then it would have to resize 
 and since not all image storage types have to support 
 resizing...
OK, why do you abstract image storage at all? What's the rationale there? Storage formats other than the trivial row-by-row storage are only used in very few cases in my experience, mostly when it involves compression (compressed textures for games - effectively read-only after compression) or certain kinds of out-of-core image processing. This seems to be mostly out of scope for this library IMO.
 Some very good points that I believe definitely needs to be 
 touched
 upon where had.

 I've had a read of OpenImageIO documentation and all I can 
 say is irkkk.
 Most of what is in there with e.g. tiles and reflection 
 styles methods
 are out of scope out right as they are a bit specialized for 
 my
 liking. If somebody wants to add it, take a look at the offset
 support. It was written as an 'extension' like ForwardRange 
 is for
 ranges.
I mentioned OpenImageIO as this library is full-featured and very complete in a lot of areas. It shows what it takes to be as flexible as possible regarding the image data that is processed. Take it as a catalog of things to consider, but not as a template.
 The purpose of this library is to work more for GUI's and 
 games then
 anything else. It is meant more for the average programmer 
 then
 specialized in imagery ones. It's kinda why I wrote the 
 specification
 document. Just so in the future if somebody comes in saying 
 its awful
 who does know and use these kinds of libraries. Will have to
 understand that it was out of scope and was done on purpose.
Having a specification is a good thing and this is why I entered the discussion. Although your specification is still a bit vague in my opinion, the general direction is good. The limitation of the scope looks fine to me and I'm not arguing against that. My point is rather that your design can still be improved to match that scope better.
Yeah indeed. Any tips for specification document improvement? I would love to make it standard for Phobos additions like this.
This is a really big can of worms, especially if you want to set standards for the future. You can do anything from informal and lax to extreme level of detail. For example, we require students to write specifications that include numbered lists of *all* functional and non-functional requirements for their (rather small) projects, which fill endless pages in the end. Additionally they have to describe the development tools and environment, the target audience, manual and automatic tests, and so on. It's the classic waterfall model with all the problems it has (don't ask - decision from the top...).

In your case I'd start by defining a target audience and describing target use cases in a high-level manner (e.g. "identify the format and required loader for an image file", "load the image so that the user sees a representation which he understands", ...). Stating non-goals or non-features is a good thing to limit scope (your spec already mentions some). The list of use cases can be checked against relevance for the target audience. Also, the use cases can be translated into test cases of various kinds:

- example snippets demonstrating ease of use of the design (or lack thereof)
- automatic tests (e.g. unit tests)
- manual tests of complex scenarios requiring user interaction (not relevant in this case, but e.g. for UI libraries)

A very formal approach could number the use cases and tests and keep cross-references between them to make it easier to check for completeness. But for the most part, this would be tedious and boring for little gain.
Jul 12 2015
parent Rikki Cattermole <alphaglosined gmail.com> writes:
On 13/07/2015 1:43 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 On Thursday, 9 July 2015 at 15:05:12 UTC, Rikki Cattermole wrote:
 On 10/07/2015 2:07 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 On Thursday, 9 July 2015 at 04:09:11 UTC, Rikki Cattermole wrote:
 On 9/07/2015 6:07 a.m., "Gregor Mückl" <gregormueckl gmx.de> wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 Please destroy!
You asked for it! :)

As a reference to a library that is used to handle images on a professional level (VFX industry), I'd encourage you to look at the feature set and interfaces of OpenImageIO. Sure, it's a big library and some of it is definitely out of scope for what you try to accomplish (image tile caching and texture sampling, obviously). Yet, there are some features I specifically want to mention here to challenge the scope of your design:

- arbitrary channel layouts in images: this is a big one. You mention 3D engines as a targeted use case in the specification. 3D rendering is one of the worst offenders when it comes to crazy channel layouts in textures (which are obviously stored as image files). If you have a data texture that requires 2 channels (e.g. uv offsets for texture lookups in shaders or some crazy data tables), its memory layout should also only ever have two channels. Don't expand it to RGB transparently or anything else braindead. Don't change the data type of the pixel values wildly without being asked to do so. The developer most likely has chosen a 16 bit signed integer per channel (or whatever else) for a good reason.

Some high end file formats like OpenEXR even allow users to store completely arbitrary channels as well, often with a different per-channel data format (leading to layouts like RGBAZ with an additional mask channel on top). But support for that really bloats image library interfaces. I'd stick with a sane variant of the uncompressed texture formats that the OpenGL specification lists as the target set of supported in-memory image formats. That mostly matches current GPU hardware support and probably will for some time to come.
As long as the color implementation matches isColor from std.experimental.color, it's a color. I do not handle that :) The rest of how it maps in memory is defined by the image storage types. Any image loader/exporter can use any of them, as long as you specify it via a template argument *currently*.
Hm... in that case you introduce transparent mappings between user-facing types and the internal mapping which may be lossy in various ways. This works, but the internal type should be discoverable somehow. This leads down a similar road to OpenGL texture formats: they have internal storage formats and there's the host formats to/from which the data is converted when passing back and forth. This adds a lot of complexity and potential for surprises, unfortunately. I'm not entirely sure what to think here.
The internal color of an image storage type is well known at compile time. Now, SwappableImage wraps another image type, and that definitely muddies the water a lot, since it auto-converts from the original format. Which could be, you know, messy. It's actually the main reason I asked Manu for gain/loss precision functions: for detecting if precision is being changed, mostly for logging purposes.
I haven't encountered SwappableImage anywhere. Did I miss anything?
It's only used for image formats right now directly.
 - window regions: now this not quite your average image format
 feature,
 but relevant for some use cases. The gist of it is that the image file
 may define a coordinate system for a whole image frame but only
 contain
 actual data within certain regions that do not cover the whole frame.
 These regions may even extend beyond the defined image frame (used
 e.g.
 for VFX image postprocessing to have properly defined pixel values to
 filter into the visible part of the final frame). Again, the OpenEXR
 documentation explains this feature nicely. Again, I think this likely
 is out of scope for this library.
Ugh, based upon what you said, that is out of scope for the image loaders/exporters that I'm writing. Also, right now it is only using unsigned integers for coordinates; I'm guessing coordinates would have to be able to go negative then, to extend beyond the bounds. Slightly too specialized for what we need in the general case.
Yes, this is a slightly special use case. I can think of quite a lot of cases where you would want border regions of some kind for what you are doing, but they are all related to rendering and image processing.
You have convinced me that I need to add a subimage struct which is basically SwappableImage, just with an offset/size different from the original.
Again, I unfortunately fail to follow. Sorry. I wanted to argue that window regions are out of scope, unlike subimages (which actually are completely separate images within the same file - they do not share a single coordinate system).
What I mean by subimage is basically a region of another image. Like a sprite in a sprite sheet. Same idea.
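A minimal sketch of such a subimage view (hypothetical names; it assumes the wrapped image type exposes getPixel/setPixel):

```d
// Hypothetical subimage view: a rectangular region of another image,
// like one sprite within a sprite sheet. It shares storage with the
// parent image and only translates coordinates.
struct SubImage(Image) {
    Image* parent;
    size_t offsetX, offsetY;
    size_t width, height;

    auto getPixel(size_t x, size_t y) {
        assert(x < width && y < height);
        return parent.getPixel(x + offsetX, y + offsetY);
    }

    void setPixel(Color)(size_t x, size_t y, Color value) {
        assert(x < width && y < height);
        parent.setPixel(x + offsetX, y + offsetY, value);
    }
}
```

Since it forwards to the parent, writes through the view modify the sprite sheet in place, which is usually what you want for this use case.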
 My first point also leads me to this criticism:

 - I do not see a way to discover the actual data format of a PNG file
 through your loader. Is it 8 bit palette-based, 8 bit per pixel or 16
 bits per pixel? Especially the latter should not be transparently
 converted to 8 bits per pixel if encountered because it is a lossy
 transformation. As I see it right now you have to know the pixel
 format
 up front to instantiate the loader. I consider that bad design. You
 can
 only have true knowledge of the file contents after the image header
 were parsed. The same is generally true of most actually useful image
 formats out there.
The reasoning is that this is what I know I can work with. You specify what you want to use, and it'll auto-convert after that. It makes user code a bit simpler.
I can understand your reasoning and this is why libraries like FreeImage make it very simple to get the image data converted to the format you want from an arbitrary input. What I'd like to see is more of an extension of the current mechanism: make it possible to query the data format of the image file. That way, the application can make a wiser decision on the format in which it wants to receive the data, but it always is able to get the data in a format it understands. The format description for the file format would have to be quite complex to cover all possibilities, though. The best that I can come up with is a list of tuples of channel names (as strings) and data type (as enums). Processing those isn't fun, though.
The problem here is simple: you must know what color type you are going to be working with. There is no guessing. If you want to change it to match the file loader better, you'll have to load it twice, and then you have to understand the file format internals a bit more. This is kinda where it gets messy.
Messy in what way? My conceptual view of the user code is that it could do either of two things:

1. ask for an image in a format the user specifies - the loader must meet that request exactly and do all required conversions
2. ask the loader for the image data format contained in the file, choose a representation that best matches it in the user's view and only then ask the loader as in 1.

Note that the user gets exactly what he asked for in each case, but makes an informed decision in 2. only because he chose to.
Think of it like this:

void myfunc(string name) {
    import std.experimental.image.fileformats.png;

    ForwardRange!ubyte input = ...;
    auto headers = loadPNGHeaders(input.save());

    if (headers.pixelType == PNGPixelType.RGB && headers.samples == 16) {
        write("output.png", loadPNG!RGB16(input.save())
            .flipHorizontalRange
            .flipVerticalRange
            . ...
            copyTemporary!MyImage(allocator)
            .asPNG
            .toBytes);
    } else if (headers.pixelType == PNGPixelType.RGBA && headers.samples == 16) {
        write("output.png", loadPNG!RGBA16(input.save())
            .flipHorizontalRange
            .flipVerticalRange
            . ...
            copyTemporary!MyImage(allocator)
            .asPNG
            .toBytes);
    }
}

Of course that was off the top of my head, in more pseudo code than anything else. It's also late and I'm tired, so excuse any wrong terminology.
 But, would it be better if you could just parse the headers? So it
 doesn't initialize the image data. I doubt it would be all that hard.
 It's just disabling a series of features.
That's what this would boil down to, I believe: a separate entry point for the loader that just parses the headers and returns file format information to the user, but not the image data itself. The user-facing interface would just be an extra interface method. Internally, it would share most of the code with the actual image loading code path.
Okay, I'm long since convinced its needed. So no worries here.
 - Could support for image data alignment be added by defining a new
 ImageStorage subclass? The actual in-memory data is not exposed to
 direct access, is it? Access to the raw image data would be preferable
 for those cases where you know exactly what you are doing. Going
 through
 per-pixel access functions for large image regions is going to be
 dreadfully slow in comparison to what can be achieved with proper
 processing/filtering code.
I ugh... had this feature once. I removed it, reasoning that if you already know the implementation, why not just access it directly? But if there is a genuine need to get access to it as e.g. void*, then I can add it again.
 - Also, uploading textures to the GPU requires passing raw memory
 blocks
 and a format description of sorts to the 3D API. Being required to
 slowly copy the image data in question into a temporary buffer for
 this
 process is not an adequate solution.
Again, as in the previous answer, it was possible, no matter what the image storage type was. But it was hairy and could and would cause bugs in the future. You're probably better off knowing the type and getting access directly to it that way.
This is where the abstraction of ImageStorage with several possible implementations becomes iffy. The user is at the loader's mercy to hopefully hand over the right implementation type. I'm not sure I like that idea. This seems inconsistent with making the pixel format the user's choice. Why should the user have choice over one thing and not the other?
If the image loader uses another image storage type then it is misbehaving. There is no excuse for it.
Am I looking at an old version? The PNG loader I see documented returns a specific ImageStorage implementation.
It doesn't? SwappableImage is used in its place. Also, it is wrapped in a PNGFileFormat, because file format != image storage type. So while it behaves as an image, it is not primarily an image, as it includes the headers.
 Anyway the main thing about this to understand is that if the image
 loader does not initialize, then it would have to resize and since not
 all image storage types have to support resizing...
OK, why do you abstract image storage at all? What's the rationale there? Storage formats other than the trivial row-by-row storage are only used in very few cases in my experience, mostly when it involves compression (compressed textures for games - effectively read-only after compression) or certain kinds of out-of-core image processing. This seems to be mostly out of scope for this library IMO.
You're right. There are only three that really matter to most users. One of the primary uses of the library is game development; that is what I personally care about. There is also a lot of overlap between efficient GUI toolkits and game development when it comes to images. While implementations beyond the big three (horizontal/vertical scan lines and a one-dimensional array) are out of scope, the extension mechanism isn't.
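For illustration, a horizontal scan-line storage type, the simplest of the three, could look roughly like this (a hedged sketch, not the actual implementation):

```d
// Hypothetical horizontal scan-line storage: one array per row.
struct HorizontalStorage(Color) {
    private Color[][] rows; // indexed as rows[y][x]
    size_t width, height;

    this(size_t w, size_t h) {
        width = w;
        height = h;
        rows = new Color[][](h);
        foreach (ref row; rows)
            row = new Color[](w);
    }

    Color getPixel(size_t x, size_t y) { return rows[y][x]; }
    void setPixel(size_t x, size_t y, Color v) { rows[y][x] = v; }
}
```

A vertical scan-line variant would index as columns[x][y], and the one-dimensional variant would use a single Color[] of length width * height.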
 Some very good points that I believe definitely needs to be touched
 upon where had.

 I've had a read of OpenImageIO documentation and all I can say is
 irkkk.
 Most of what is in there with e.g. tiles and reflection styles methods
 are out of scope out right as they are a bit specialized for my
 liking. If somebody wants to add it, take a look at the offset
 support. It was written as an 'extension' like ForwardRange is for
 ranges.
I mentioned OpenImageIO as this library is full-featured and very complete in a lot of areas. It shows what it takes to be as flexible as possible regarding the image data that is processed. Take it as a catalog of things to consider, but not as a template.
 The purpose of this library is to work more for GUI's and games then
 anything else. It is meant more for the average programmer then
 specialized in imagery ones. It's kinda why I wrote the specification
 document. Just so in the future if somebody comes in saying its awful
 who does know and use these kinds of libraries. Will have to
 understand that it was out of scope and was done on purpose.
Having a specification is a good thing and this is why I entered the discussion. Although your specification is still a bit vague in my opinion, the general direction is good. The limitation of the scope looks fine to me and I'm not arguing against that. My point is rather that your design can still be improved to match that scope better.
Yeah indeed. Any tips for specification document improvement? I would love to make it standard for Phobos additions like this.
This is a really big can of worms, especially if you want to set standards for the future. You can do anything from informal and lax to extreme level of detail. For example, we require students to write specifications that include numbered lists of *all* functional and non-functional requirements for their (rather small) projects, which fill endless pages in the end. Additionally they have to describe the development tools and environment, the target audience, manual and automatic tests, and so on. It's the classic waterfall model with all the problems it has (don't ask - decision from the top...).

In your case I'd start by defining a target audience and describing target use cases in a high-level manner (e.g. "identify the format and required loader for an image file", "load the image so that the user sees a representation which he understands", ...). Stating non-goals or non-features is a good thing to limit scope (your spec already mentions some). The list of use cases can be checked against relevance for the target audience. Also, the use cases can be translated into test cases of various kinds:

- example snippets demonstrating ease of use of the design (or lack thereof)
- automatic tests (e.g. unit tests)
- manual tests of complex scenarios requiring user interaction (not relevant in this case, but e.g. for UI libraries)

A very formal approach could number the use cases and tests and keep cross-references between them to make it easier to check for completeness. But for the most part, this would be tedious and boring for little gain.
Thank you. I'm definitely starring this to read later tomorrow when working on it on stream. It'll also help with dismissing the arguments regarding signed vs unsigned and fixed-size vs word-size for the coordinate type.
Jul 12 2015
prev sibling next sibling parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 Docs: http://cattermole.co.nz/phobosImage/docs/html/
 Source: http://cattermole.co.nz/phobosImage/

 If reviewing the code itself is too much of a hassle, please 
 review the specification document. 
 http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_specification.html
Some example code using this library would be much more concise than the above. Can you provide some?
Jul 09 2015
parent reply "rikki cattermole" <alphaglosined gmail.com> writes:
On Friday, 10 July 2015 at 04:26:49 UTC, Vladimir Panteleev wrote:
 On Monday, 6 July 2015 at 13:48:53 UTC, Rikki Cattermole wrote:
 Docs: http://cattermole.co.nz/phobosImage/docs/html/
 Source: http://cattermole.co.nz/phobosImage/

 If reviewing the code itself is too much of a hassle, please 
 review the specification document. 
 http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_specification.html
Some example code using this library would be much more concise than the above. Can you provide some?
I haven't really gotten to that point. But currently what I want it to support is:

write("output.png", image
    .flipHorizontalRange
    .flipVerticalRange
    .rotate90Range
    .copyInto(new TheImage!RGB8(2, 2))
    .asPNG().toBytes);

I have an idea similar to copyInto except:

.copyIntoTemporary!TheImage(allocator)
// returns SwappableImage!(ImageColor!TheImage)(allocator.make!TheImage, allocator); // auto ref return type

Basically SwappableImage can already deallocate the original storage instance when its destructor gets called, if provided with the allocator. Same with RangeOf.
Jul 09 2015
parent reply "Vladimir Panteleev" <vladimir thecybershadow.net> writes:
On Friday, 10 July 2015 at 04:35:51 UTC, rikki cattermole wrote:
 write("output.png", image
 .flipHorizontalRange
 .flipVerticalRange
 .rotate90Range
 .copyInto(new TheImage!RGB8(2, 2))

 .asPNG().toBytes);
Why the "Range" suffix for functions? Aren't most functions going to be lazy, and thus have a "Range" suffix? Does copyInto do resizing / colorspace conversion? Such operations should be explicit. If it doesn't, then most of that sub-expression is redundant... Have you looked at ae.utils.graphics.image.copy? If a second argument is not provided, an appropriate image is allocated automatically (on the heap though).
 I have an idea similar to copyInto except:
 .copyIntoTemporary!TheImage(allocator) // return 
 SwappableImage!(ImageColor!TheImage)(allocator.make!TheImage, 
 allocator); // auto ref return type
It took me a while to understand what SwappableImage does. The documentation could be improved. I think you should not be adding runtime polymorphism implicitly anywhere. There are virtual range interfaces in Phobos, yes, but wrapping is only done explicitly, as it should be here.
 Basically SwappableImage can already deallocate the original 
 storage instance when it's destructor gets called. If provided 
 with the allocator. Same with RangeOf.
I suggest avoiding conflating Phobos range terminology with 2D image views. The current naming scheme makes me think that it returns a range of rows, or some other range which is directly pluggable into std.algorithm. What's wrong with "View"? I'd also like to ask why you decided to write a library from scratch instead of reusing existing ones. I've spent days solving certain design issues of ae.utils.graphics; it seems wasteful to me to discard that and duplicate the effort. If the license is the issue, I'd be happy to relicense it.
Jul 09 2015
parent "rikki cattermole" <alphaglosined gmail.com> writes:
On Friday, 10 July 2015 at 04:52:54 UTC, Vladimir Panteleev wrote:
 On Friday, 10 July 2015 at 04:35:51 UTC, rikki cattermole wrote:
 write("output.png", image
 .flipHorizontalRange
 .flipVerticalRange
 .rotate90Range
 .copyInto(new TheImage!RGB8(2, 2))

 .asPNG().toBytes);
Why the "Range" suffix for functions? Aren't most functions going to be lazy, and thus have a "Range" suffix?
There are three kinds of mutation functions per functionality:

1) Modifies an image in place and returns it
2) image.rangeOf.function
3) Takes in a PixelPoint input range

The Range suffix seemed like a good way to separate out which functions return a range and which don't.
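Roughly, the three forms could be sketched like this (signatures are illustrative only, not the library's actual ones):

```d
// 1) Eager, in place: mutates the image storage and returns it for chaining.
Image flipHorizontal(Image)(Image image) if (isImage!Image) {
    /* swap pixels in place */
    return image;
}

// 2) Convenience overload on an image: forwards to the lazy range version.
auto flipHorizontalRange(Image)(Image image) if (isImage!Image) {
    return image.rangeOf.flipHorizontalRange;
}

// 3) Lazy: wraps any input range of PixelPoint elements.
auto flipHorizontalRange(Range)(Range pixels) if (isInputRange!Range) {
    /* return a lazy wrapper range */
}
```

The isImage constraint here is hypothetical; its role is just to disambiguate the image overload from the range overload.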
 Does copyInto do resizing / colorspace conversion? Such 
 operations should be explicit. If it doesn't, then most of that 
 sub-expression is redundant...
Nope. It just copies a PixelPoint input range into an image storage type.
 Have you looked at ae.utils.graphics.image.copy? If a second 
 argument is not provided, an appropriate image is allocated 
 automatically (on the heap though).

 I have an idea similar to copyInto except:
 .copyIntoTemporary!TheImage(allocator) // return 
 SwappableImage!(ImageColor!TheImage)(allocator.make!TheImage, 
 allocator); // auto ref return type
It took me a while to understand what SwappableImage does. The documentation could be improved. I think you should not be adding runtime polymorphism implicitly anywhere. There are virtual range interfaces in Phobos, yes, but wrapping is only done explicitly, as it should be here.
In this case the only reason SwappableImage would be used is to use its lifetime management facilities. But hey, you can manage it yourself if you want to...
 Basically SwappableImage can already deallocate the original 
 storage instance when it's destructor gets called. If provided 
 with the allocator. Same with RangeOf.
I suggest avoiding conflating Phobos range terminology with 2D image views. The current naming scheme makes me think that it returns a range of rows, or some other range which is directly pluggable into std.algorithm. What's wrong with "View"?
As far as I am aware the range capabilities could be used with std.algorithm. There is nothing stopping you. It just uses PixelPoint!Color to represent a single pixel instead of an instance of the color.
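For example, counting pixels of a given color with std.algorithm might then look like this (hypothetical: it assumes rangeOf yields PixelPoint!Color elements carrying coordinates plus a color value field):

```d
import std.algorithm : count;

// PixelPoint carries coordinates alongside the color, so plain
// range algorithms still apply; here we count pure-red pixels.
auto redCount = image.rangeOf
    .count!(p => p.value == RGB8(255, 0, 0));
```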
 I'd also like to ask why you decided to write a library from 
 scratch instead of reusing existing ones. I've spent days 
 solving certain design issues of ae.utils.graphics, it seems 
 wasteful to me to discard that and duplicate the effort. If the 
 license is the issue, I'd be happy to relicence it.
Licensing isn't an issue. I chose to write a new one to take advantage of std.experimental.color. It originally was meant just to test it. In fact I've been taking a lot of cues from ae.utils.graphics for this very reason. It is a very good piece of code and it has already solved many of the issues. Although now it is also pretty much testing e.g. std.experimental.allocator too. I chose against reusing ae.utils.graphics as a base (same as e.g. dlib's and Devisualization.Image) because it would take a lot of work to get it working with the newer Phobos-to-be modules. Almost as much as a new library would take. You were the only person whose opinion I was really interested in. Manu's is great, but you have already done this. You know what works and doesn't.
Jul 09 2015
prev sibling parent "Rikki Cattermole" <alphaglosined gmail.com> writes:
Bumpity bump.
Updated.

Includes most of the PNG implementation for loading of images.
http://cattermole.co.nz/phobosImage/docs/html/std_experimental_image_fileformats_png.html
I've also added a little bit more to the specification document.

Next stream I'll be tackling IDAT chunk loading (not something I 
exactly enjoy).
Jul 20 2015