
digitalmars.D - Exploring the philosophy of objects

reply forkit <forkit gmail.com> writes:
This is certainly not another thread about private to the class 
;-)

Please, don't try to make it into one.

But I'd encourage everyone to at least consider exploring the 
nature of 'objects' from a purely 'philosophical' approach (i.e. 
outside of your programming interests).

If it's not your thing, don't do it ;-)

If you come to the conclusion that it's complete nonsense, that 
is your right.

For me, I have a particular understanding of objects (as many 
will now know).

i.e. I have a firm belief in the 'autonomous existence' of objects.

It's the basis of my view of the real world (nothing to do with 
programming).

That understanding, while completely separate from programming, 
is well integrated into my approach towards programming, and I 
personally believe my programming is better for it, not worse.

Here is something to get you started on your journey:

'undermining, and overmining, objects'

https://youtu.be/P6yWc7ccb7g

There's also a great discussion about this, if you can get your 
hands on it ;-)

The Object Strikes Back: An Interview with Graham Harman.

https://www.tandfonline.com/doi/abs/10.2752/175470813X13491105785703

If you don't know who he is:

https://en.wikipedia.org/wiki/Graham_Harman
Jun 23 2022
next sibling parent reply monkyyy <crazymonkyyy gmail.com> writes:
On Friday, 24 June 2022 at 00:45:11 UTC, forkit wrote:
 But I'd encourage everyone to at least consider exploring the 
 nature of 'objects' from a purely 'philosophical' approach 
 (i.e. outside of your programming interests).
https://en.wikipedia.org/wiki/Object-oriented_ontology

After careful consideration, I believe you've taken OO too far if you think it should affect anyone's metaphysics.
Jun 23 2022
next sibling parent forkit <forkit gmail.com> writes:
On Friday, 24 June 2022 at 01:00:06 UTC, monkyyy wrote:
 On Friday, 24 June 2022 at 00:45:11 UTC, forkit wrote:
 But I'd encourage everyone to at least consider exploring the 
 nature of 'objects' from a purely 'philosophical' approach 
 (i.e. outside of your programming interests).

 https://en.wikipedia.org/wiki/Object-oriented_ontology

 After careful consideration, I believe you've taken OO too far 
 if you think it should affect anyone's metaphysics.
That is your right. A right, btw, that comes about because of your autonomous existence as an object ;-)

Also, what you read or hear in your exploration into the philosophy of objects is not necessarily reflective of what I believe, or don't believe. Like you, as an object that exists autonomously, I have my own views ;-)
Jun 23 2022
prev sibling parent reply forkit <forkit gmail.com> writes:
On Friday, 24 June 2022 at 01:00:06 UTC, monkyyy wrote:
 https://en.wikipedia.org/wiki/Object-oriented_ontology

 After careful consideration, I believe you've taken oo too far 
 if you think it should affect anyone's meta physics.
Actually, I think in reverse. That is, it's metaphysics that ultimately drives the nature of software development, and programming languages.

But the ending of Moore's Law is pushing us in the opposite direction at the moment. And understandably. That's why newer languages are having a much greater focus on performance, perhaps at the expense of the abstract type??

But as Arnold once said: "I'll be back" (says the autonomous existing object).

The movie 'The Terminator' is a movie about 'a possible future where mankind has been oppressed by artificially intelligent machines, led by the rebellious computer system Skynet.' Nonetheless, we continue on that very path towards creating these very same AI 'objects'.

I'd argue that, more than ever, software engineering focus is being directed towards the creation of 'autonomous existing objects'. Some of you already have them on your phones, in your TV, on your tablet, on your wrist....in your car......

Why? Well, as Timothy Morton once said: "We are caught in object-ive existence whether we like it or not."

i.e. It's natural to think about our reality in terms of autonomous existing objects, so it's natural this will find its way into other areas of our thinking, including programming.

If you don't know who he is:

https://en.wikipedia.org/wiki/Timothy_Morton

Once this necessary refocus on extracting greater performance becomes less relevant in programming (some technological breakthrough perhaps), we'll all be back focusing on creating autonomous existing objects again. It's how we think. Well, not all of us.
Jun 23 2022
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 June 2022 at 01:59:07 UTC, forkit wrote:
 i.e. It's natural to think about our reality in terms of 
 autonomous existing objects, so it's natural this will find 
 its way into other areas of our thinking, including 
 programming.
I once took a course that had the book «Computers in Context: The Philosophy and Practice of System Design» in the curriculum, which looks at systems development from both a philosopher's angle and a systems development angle. Sadly, I don't remember what it said about objects.

But yes, both objects and class hierarchies are reflections of human thought processes. Functions too, and math. Mathematicians talk of «mathematical objects».
Jun 23 2022
prev sibling parent reply Zoadian <no no.no> writes:
On Friday, 24 June 2022 at 00:45:11 UTC, forkit wrote:
 This is certainly not another thread about private to the class 
 ;-)

 Please, don't try to make it into one.

 But I'd encourage everyone to at least consider exploring the 
 nature of 'objects' from a purely 'philosophical' approach 
 (i.e. outside of your programming interests).

 [...]
No disrespect, but this is a programming language forum; I'd really appreciate it if it stayed that way.

And to add something to the topic: objects are a great way to think about things. They are, as it turned out, not that great in programming.

OOP leads to slow, hard-to-reason-about code. It encourages mutable state, spread across the whole program. It's essentially global variables all over again, just hidden behind a layer of obfuscation. It also makes testing a lot harder to actually implement.

That being said, there are cases where it's ok to use, but imho they are incredibly rare.

- Someone who works on hard realtime systems
Jun 24 2022
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 June 2022 at 13:48:35 UTC, Zoadian wrote:
 And to add something to the topic: objects are a great way to 
 think about things. They are, as it turned out, not that great 
 in programming.
 OOP leads to slow, hard-to-reason-about code. It encourages 
 mutable state, spread across the whole program. It's 
 essentially global variables all over again, just hidden 
 behind a layer of obfuscation.
 It also makes testing a lot harder to actually implement.
 That being said, there are cases where it's ok to use, but 
 imho they are incredibly rare.

 - Someone who works on hard realtime systems
What you are describing is not the nature of OO-modelling, but the consequences of people not knowing what they are doing, using the wrong implementation strategy, tooling etc.

Objects are appropriate also in real time systems. Actors are objects on steroids and represent one valid approach to creating reliable real time systems. Coroutines have a strong relationship to objects and greatly reduce the bug-inducing tedium of creating state machines, again an improvement for real time systems if used by a critical thinker. Objects also make it easier to create transactional systems that allow you to roll back, again a reliability advantage.

No matter what domain you are in, if you deal with uncertainty then you also need a proven methodology and tools that suit the application domain you are working with. With no proven methodology, especially one geared towards what you are doing, quality will become less predictable.

If you have no need to maintain state modelling the environment, then of course, you would also not need objects and can choose dedicated tools with 100% verification of the implementation. Is that better? Yes clearly, when you can get away with it.

The OP is relevant to D, because how people think about objects and their representation affects language design and usage. You confirmed this yourself by stating that you dislike OOP because people are idiots and mess things up (which is the only reasonable interpretation of what you stated). But that is not a consequence of OO-modelling as a concept. It is a result of poor modelling or poor implementation or poor tooling.
Jun 24 2022
next sibling parent reply ryuukk_ <ryuukk.dev gmail.com> writes:
What's your use case? Without that answer, 'object' has no meaning.

Computers have no concept of "objects"; it is pure programmers' 
religion, something sold by scholars and forgotten companies to 
produce cheap engineers.

Unity is back to data oriented design because "objects" were 
found to be inefficient.

D doesn't need them; metaprogramming, structs, and functions are 
all we need. It's our strength, and we should capitalize on that, 
and show the world that if they need an alternative to "objects", 
they can find the right tools with D!


They even built a new compiler, "Burst". Unity should have picked 
D a long time ago. I still try to lobby for them to add support 
for proper C FFI, so we can properly use D there, but it's hard..
Jun 24 2022
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 June 2022 at 14:30:05 UTC, ryuukk_ wrote:
 What's your use case? Without that answer, 'object' has no meaning.
Sure, although I guess forkit means that we cannot avoid modelling in terms of objects (or classes), as the human brain is essentially a rather slow abstraction machine that to a large extent has a tendency to create broad stereotypes of what is needed to understand the real world (or in the case of a simulation or game, a fictional world). Stereotypes ≈ classes.
 Computers have no concept of "objects"; it is pure programmers' 
 religion, something sold by scholars and forgotten companies to 
 produce cheap engineers.
I would rather say: whenever you encapsulate something you have a de facto object. Even in a relational database you have entities, which correspond to objects (you can also group them as classes). Those table entries for entities (keys) are de facto object representations.

I don't see a contradiction between object-oriented modelling, component-based modelling or entity-relationship modelling; I view them as facets of the same. You can mix. The more modelling knowledge you have, the more cognitive aids you have, and the more likely you are to arrive at a good model. Then you can choose an implementation strategy that suits your use case (one important decision factor is whether you believe that the model has to change later or not).

What is certain is that you need a model of the world you want to represent, be it the real world or a gaming world. And a strategy to represent it in your chosen language/database.
 Unity is back to data oriented design because "objects" were 
 found to be inefficient.
Ok, so when you design a framework for others to use, and don't want them to know the inner workings of the framework, then you are willing to pay a rather high efficiency price to begin with. OO-models do not require you to put everything in the same record, though. You can split off physics in a game from avatars and so forth.

Also, component-based design, which decouples entities by giving them identities (e.g. a number) instead of using the address, has some advantages over using pointers; you can more easily split the system and distribute it over many computers, for instance. And if you create many similar systems, then it can make some reuse cases easier, as you only deal with an integer id. But it has a price. In terms of efficiency it can become worse or it can become better… it all depends on whether the decoupling makes sense or not.

But the key point is that you don't have to choose between a classic OO implementation or components. You can mix.
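For illustration, a minimal D sketch of that id-based decoupling (all names are hypothetical): components live in separate stores keyed by a plain integer, so a system that only needs physics never touches render data, and the id can cross process boundaries in a way a raw pointer cannot.

alias EntityId = uint;

struct Physics { float x = 0, y = 0, vx = 0, vy = 0; }
struct Render  { uint sprite; }

struct World
{
    Physics[EntityId] physics;  // separate component stores,
    Render[EntityId]  render;   // joined only by the integer id
}

void integrate(ref World w, float dt)
{
    // Touches only the physics store; render data stays cold.
    foreach (id, ref p; w.physics)
    {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}

void main()
{
    World w;
    w.physics[42] = Physics(0, 0, 1, 0);
    w.render[42]  = Render(7);
    integrate(w, 0.016f);
}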
 D doesn't need them; metaprogramming, structs, and functions 
 are all we need. It's our strength, and we should capitalize on 
 that, and show the world that if they need an alternative to 
 "objects", they can find the right tools with D!
Well, but surely structs would be more intuitive to use if they could inherit rather than using alias this? Even if you never go further than one level of inheritance, it is still beneficial. E.g. have one abstract Node struct that takes care of book-keeping and a bunch of structs that inherit from the Node that do the real work that your game cares about. Many ADT implementations use this strategy, and it is OOP.

You can use other strategies, but they tend to become clunky and/or verbose. (I would also argue that template composition can be viewed as a form of object-model implementation, btw.)
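For what it's worth, a minimal sketch of the Node pattern described above, using the alias this workaround D offers for structs today (the fields are made up):

struct Node
{
    Node* next;      // book-keeping shared by every node kind
    uint  refCount;
}

struct LeafNode
{
    Node base;       // embed the would-be superclass...
    alias base this; // ...and forward its members implicitly
    int payload;
}

void main()
{
    LeafNode leaf;
    leaf.refCount = 1; // reaches Node.refCount through alias this
    leaf.payload  = 42;
}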

 They even built a new compiler, "Burst". Unity should have 
 picked D a long time ago. I still try to lobby for them to add 
 support for proper C FFI, so we can properly use D there, but 
 it's hard..
Unity sounds like fun; I don't know much about it, unfortunately.
Jun 24 2022
prev sibling parent reply Zoadian <no no.no> writes:
On Friday, 24 June 2022 at 14:11:51 UTC, Ola Fosheim Grøstad 
wrote:
 On Friday, 24 June 2022 at 13:48:35 UTC, Zoadian wrote:
 [...]

 What you are describing is not the nature of OO-modelling, but 
 the consequences of people not knowing what they are doing, 
 using the wrong implementation strategy, tooling etc.

 [...]
I'm not claiming objects are generally bad. But OOP in general is a bad design strategy if you want efficient programs.

Objects bundle too many different variables together, so cache access is most certainly suboptimal. Multithreading is another thing that's much harder to do when you start with an OOP model in mind. All these problems can be solved, at least to some extent, but IMHO it's not a good strategy for implementation. It's fine to model a system as OOP at first, then restructure it into something more performant.

But I don't think I have to reiterate the points against OOP here; there are numerous articles on the internet talking about it. Just to give you one example of a hilariously bad OOP implementation where you'd expect them to know better: chromium.
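To make the cache point concrete, a deliberately tiny D sketch (hypothetical names): instead of one "fat" object, the fields the per-frame loop actually touches are split from the ones it rarely reads.

struct EnemyHot  { float x, y, health; }       // touched every frame
struct EnemyCold { string name, description; } // touched rarely

struct Enemies
{
    EnemyHot[]  hot;   // dense and contiguous, streamed by the loop
    EnemyCold[] cold;  // looked up by the same index when needed
}

void update(ref Enemies e, float dt)
{
    foreach (ref h; e.hot)  // cache-friendly linear scan
        h.x += dt;          // placeholder for real per-frame work
}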
Jun 24 2022
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 June 2022 at 14:30:54 UTC, Zoadian wrote:
 I'm not claiming objects are generally bad. But OOP in general 
 is a bad design strategy if you want efficient programs.
The Simula class concept is to a large extent meant to support changing the model later. So it is sensible for simulation, where you experiment (programming costs being higher than hardware costs). This is basically where inheritance comes in.

You have a working model, then somebody tells you: we actually need to model not only Doctors, but also Nurses, AmbulanceDrivers etc. How quickly can you accommodate that wish? This is also a strength of relational databases: just add more tables.
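As a minimal D sketch of that kind of extension (the class names come from the example above, the method is invented), each new staff kind is a purely additive, local change:

abstract class Staff
{
    string name;
    abstract void respond(); // the protocol the simulation drives
}

class Doctor : Staff
{
    override void respond() { /* diagnose a patient */ }
}

// The new requirement is accommodated without touching Doctor:
class Nurse : Staff
{
    override void respond() { /* triage */ }
}

class AmbulanceDriver : Staff
{
    override void respond() { /* drive to the scene */ }
}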
 All these problems can be solved, at least to some extent, but 
 IMHO it's not a good strategy for implementation.
I don't think existing languages have good support for hardware. C++/C requires the compiler to use the same layout for fields in a record as the source. That is a bummer for inheritance in terms of cache lines, but that is not an OO weakness, it is a language weakness.

If we talk high-level OO, then there should be no hard relationship between the code and the layout (e.g. it could be broken down into structs-of-arrays or arrays-of-structs or components etc. by the compiler, based on hints or analysis). Mainstream languages are very primitive still… Outside of Rust, many languages seem to be satisfied by adding a thin layer over LLVM…
 It's fine to model a system as OOP at first, then restructure 
 it into something more performant.
Sure, you can implement an OO-model in C or assembly. Most games of the 80s probably had some ad hoc OO-model implemented in clever ways in assembly to pack it into 32/64K RAM :-)
Jun 24 2022
next sibling parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 June 2022 at 15:14:44 UTC, Ola Fosheim Grøstad 
wrote:
 The Simula class concept is to a large extent meant to support 
 changing the model later. So it is sensible for simulation, 
 where you experiment (programming costs being higher than 
 hardware costs). This is basically where inheritance comes in. 
 You have a working model, then somebody tells you: we actually 
 need to model not only Doctors, but also Nurses, 
 AmbulanceDrivers etc. How quickly can you accommodate that 
 wish? This is also a strength of relational databases: just 
 add more tables.
On a personal note: I also have a default preference for OO when that makes it possible to avoid documentation, because the implementation has a 1-1 correspondence with the model. Then you don't have to deal with out-of-date documentation and the boring task of writing documentation.

This is also desirable when you evolve software rather than plan it (experimental prototyping or simulation). If you expect a lot of successive changes, then documentation becomes a real burden.
Jun 24 2022
prev sibling parent reply user1234 <user1234 12.de> writes:
On Friday, 24 June 2022 at 15:14:44 UTC, Ola Fosheim Grøstad 
wrote:
 I don't think existing languages have good support for 
 hardware. C++/C requires the compiler to use the same layout 
 for fields in a record as the source.
Please show me an alternative that allows one to respect the substitution principle.
Jun 24 2022
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Friday, 24 June 2022 at 18:08:15 UTC, user1234 wrote:
 On Friday, 24 June 2022 at 15:14:44 UTC, Ola Fosheim Grøstad 
 wrote:
 I don't think existing languages have good support for 
 hardware. C++/C requires the compiler to use the same layout 
 for fields in a record as the source.
 Please show me an alternative that allows one to respect the substitution principle.
The simple approach is to reserve space for the fields you want to be on the same cache line. The root cause for the C++ model is separate compilation. If the compiler knows the full class hierarchy then it can choose a different layout. That is obvious.
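A minimal D sketch of that "reserve space" idea, assuming 64-byte cache lines (the type and fields are hypothetical):

// Align each record to an (assumed) 64-byte cache line, so the hot
// fields below are always fetched together; the alignment padding
// effectively reserves the rest of the line.
align(64) struct Body
{
    float x, y, z;
    float vx, vy, vz;
    uint  flags;
}

void main()
{
    import std.stdio : writeln;
    writeln(Body.sizeof, " ", Body.alignof); // expect "64 64" on typical targets
}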
Jun 24 2022
prev sibling next sibling parent reply forkit <forkit gmail.com> writes:
On Friday, 24 June 2022 at 13:48:35 UTC, Zoadian wrote:
 No disrespect, but this is a programming language forum, i'd 
 really appreciate if it stayed that way.
Actually, I was under the impression a moderator had locked this thread, as they decided it was off topic. But before they do that again, let me say this:

(1) Philosophy is not a religion. It is a tool.

(2) Philosophy and Programming/Programming Languages are deeply 'entangled', whether you have the capacity to see to that level of detail, or not ;-)

(3) The statement in item (2) above is demonstrated throughout the academic literature on computing. It's not something I just made up ;-)

I was watching this fascinating talk by Matt Godbolt last night:

'CppCon 2018: Matt Godbolt "The Bits Between the Bits: How We Get to main()"'

https://www.youtube.com/watch?v=dOfucXtyEsU

It's like he's deep within the quantum world of computing ;-) Of course, it gets even more quantum than that too... and seemingly, just keeps going... to what end, I do not know.
Jun 24 2022
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 June 2022 at 00:24:08 UTC, forkit wrote:
 (2) Philosophy and Programming/Programming Languages are 
 deeply 'entangled', whether you have the capacity to see to 
 that level of detail, or not ;-)

 (3) The statement in item (2) above is demonstrated throughout 
 the academic literature on computing. It's not something I just 
 made up ;-)
There is a lot of truth to that. The reason being, of course, that the users of the software are more important than the programmers… so you need to extract what the users' needs are, communicate about it, and turn it into something that is implementable (by whatever means). And you invariably end up focusing on objects, processes and communication.

When I went to uni we were trained to do relational modelling with NIAM, also known as… [Object-Role Modelling](https://en.wikipedia.org/wiki/Object-role_modeling)! Yes, you read that right: we deal with *objects* when modelling tables. To quote that wiki page: «An object-role model can be automatically mapped to relational and deductive databases (such as datalog).»

It can of course also be translated into an OOA model. One key difference is that OOA "clusters attributes", but Object-Role modelling is "free" of attributes and focuses on relations.
Jun 25 2022
parent reply forkit <forkit gmail.com> writes:
On Saturday, 25 June 2022 at 20:06:45 UTC, Ola Fosheim Grøstad 
wrote:
 ....
 There is a lot of truth to that. The reason being, of course, 
 that the users of the software are more important than the 
 programmers… so you need to extract what the users' needs are, 
 communicate about it, and turn it into something that is 
 implementable (by whatever means). And you invariably end up 
 focusing on objects, processes and communication.
Actually, the concept of programming is really, really simple:

(1) You have objects (big and quantum size, they're all objects!).

(2) You have interactions between objects (no object is an island).

(3) You have processes whereby those interactions come about.

(4) You have emergent behaviour (side effects if you will) - the program itself.

The bigger the object, the more difficult it becomes to model it using quantum physics. The easier it becomes to understand the interactions, because they're more visible (encapsulated if you will). The easier it becomes to identify the processes by which those interactions come about, because they're more visible. And the easier it becomes to model what the emergent behaviour looks like, because it too is more visible.

On the other hand, the smaller the object, the harder it becomes to model it using the same object decomposition used with larger objects, the harder it becomes to understand the interactions, the harder it becomes to identify the processes by which those interactions come about, and the harder it becomes to model what the emergent behaviour looks like.

The smaller the object gets, the less chance you have of understanding item (1), let alone items (2), (3), and (4).

In the end, you end up with something like the linux kernel!

It just works. But nobody really knows why.
Jun 25 2022
parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 June 2022 at 23:11:18 UTC, forkit wrote:
 The bigger the object, the more difficult it becomes to model 
 it using quantum physics. The easier it becomes to understand 
 the interactions, because they're more visible (encapsulated 
 if you will). The easier it becomes to identify the processes 
 by which those interactions come about, because they're more 
 visible. And the easier it becomes to model what the emergent 
 behaviour looks like, because it too is more visible.
D is trying to position itself as a language where you start with a prototype and evolve it into a product. So you should ideally be able to start by composing generic (flexible and inefficient) components that give you an approximation of the end product, yet representative enough to give the end user/developer enough of an idea of the final product to make judgments. Is D there yet? Probably not. It has ranges in its library and not a lot more.

What is the core strategy for problem solving? Divide and conquer. Start with the big object (what you want the user to see) and then divide (break down) until you end up with something you have ready-mades to build an approximation of. The D ecosystem lacks those ready-mades; that is ok for now, but there are two questions:

1. Can the D abstraction mechanisms provide ready-mades that are easily configurable?

2. Is it easy to replace those ready-mades with more performant structures?

Can we say yes? Can we say no? Or maybe we just don't know.

What is needed is a methodology with patterns. Only when we collect the experience of developers using the methodology and trying to apply those patterns can we judge what D is lacking with absolute clarity. What we have now is people with different unspoken ideas about development methodology, based on personal experience and what they read on blogs. There is no "philosophy" for systems development that can drive the language evolution to something coherent. As such, language evolution is driven by a process of replicating other languages, taking bits and pieces with no unifying thought behind it.

Ideally a language would be designed to support a methodology geared to a specific set of use cases. Then you can be innovative and evaluate the innovations objectively. With no such methodology to back up the language design, you end up randomly accruing features that are modified/distorted replications of features from other languages, and it is difficult to evaluate if new features are supporting better development processes or if they create «noise» and «issues». It is also difficult to evaluate when your feature set is complete.
 On the other hand, the smaller the object, the harder it 
 becomes to model it using the same object decomposition used 
 with larger objects, the harder it becomes to understand the 
 interactions, the harder it becomes to identify the processes 
 by which those interactions come about, and the harder it 
 becomes to model what the emergent behaviour looks like.

 The smaller the object gets, the less chance you have of 
 understanding item (1), let alone items (2), (3), and (4).

 In the end, you end up with something like the linux kernel!

 It just works. But nobody really knows why.
Software development is basically an iterative process where you go back and forth between top-down and bottom-up analysis/development/programming. You have to go top-down to find out what you need and can deliver, then you need to go bottom-up to meet those needs. Then you have to go back to the top-down and so on… iteration after iteration. So you need to both work on the big «objects» and the small «objects» at the same time (or rather in an interleaving pattern).

Linux is kinda different. There was an existing well documented role model (Unix) with lots of educational material, so you could easily anticipate what the big and the small objects would be. That is not typical. There is usually not a need for a replica of something else (software development is too expensive for that). The only reason for there being a market for Linux was that there were no easily available free open source operating systems (Minix was open source, but not free).

Interestingly, Unix is a prime example of reducing complexity by dividing the infrastructure into objects with «coherent» interfaces (not really, but they tried). They didn't model the real world, but they grabbed a conceptualisation that is easily understood by programmers: file objects. So they basically went with: let's make everything a file object (screen, keyboard, mouse, everything).

Of course, the ideal for operating system design is the microkernel approach. What is that about? Break up everything into small encapsulated objects with limited responsibilities that can be independently rebooted. Basically OO/actors. (Linux has also moved towards encapsulation in smaller less privileged units as the project grew.)
Jun 26 2022
parent reply forkit <forkit gmail.com> writes:
On Sunday, 26 June 2022 at 07:37:01 UTC, Ola Fosheim Grøstad 
wrote:
 D is trying to position itself as a language where you start 
 with a prototype and evolve it into a product.
D, by design, defaults to 'flexibility' (as Andrei Alexandrescu says in his book, The D Programming Language).

I don't think it's unreasonable for me to assert that flexibility does not exactly encourage 'structured design'. But for a language where you want to 'just write code' (which seems to be what most D users just want to do), then D's default makes complete sense.

2 examples:

If @safe were the default, it would make it harder for people to 'just write code'.

If private were private to the class, instead of private to the module, it would make it harder for people to 'just write code'.

I'm not against the defaults, necessarily. Valid arguments can be made from different perspectives. I much prefer to focus on advocating for choice, rather than focusing on advocating for defaults.

But in the year 2022, these defaults don't make sense any more - unless, as stated, your aim is 'to just write code'. I think this is what D is grappling with at the moment. To do structured design in D, you have to make the conscious 'effort' to not accept the defaults.

btw. Here's a great talk on 'A philosophy of software design', by John Ousterhout, Professor of Computer Science at Stanford University.

The talk is more or less based on the question he asks the audience at the start of this talk.

Having not studied computer science (I did psychology), I was surprised when he mentioned 'we just don't teach this' :-(

https://www.youtube.com/watch?v=bmSAYlu0NcY
Jun 26 2022
next sibling parent bauss <jj_1337 live.dk> writes:
On Monday, 27 June 2022 at 01:35:59 UTC, forkit wrote:
 If @safe were the default, it would make it harder for people 
 to 'just write code'.
I don't think it would necessarily be harder; you'd just have to get used to a different starting point, but it should be fairly smooth. Just like today: if you start the first module in your project with @safe: then you essentially won't face many issues initially. Most issues will stem from things that aren't finished in D, but not from the general gist of SafeD.
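A minimal sketch of that starting point (module name hypothetical):

module app;

// Everything after this label is checked by the compiler for memory
// safety, making it the de facto default for the whole module.
@safe:

void main()
{
    import std.stdio : writeln;
    writeln("hello from SafeD");
}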
Jun 27 2022
prev sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 June 2022 at 01:35:59 UTC, forkit wrote:
 I don't think it's unreasonable for me to assert that 
 flexibility does not exactly encourage 'structured design'.
Well, max implementation flexibility is essentially machine language. The 68000 instruction set is surprisingly pleasant. You can invent your own mechanisms that can make some tasks easier… but a nightmare for others to read.

The more flexibility, the more chaos programmers will produce. You see this in JavaScript; TypeScript code tends to be much more structured. You see this in D code bases that allow string mixins too.
 To do structured design in D, you have to make the conscious 
 'effort' to not accept the defaults.
I don't think the defaults matter much.
 btw. Here's a great talk on 'A philosophy of software design', 
 by John Ousterhout, Professor of Computer Science at Stanford 
 University.

 The talk is more or less based on the question he asks the 
 audience at the start of this talk.

 Having not studied computer science (I did psychology), I was 
 surprised when he mentioned 'we just don't teach this' :-(
Too long… Are you suggesting that he said that they don't teach OO?

OO is more tied to modelling/systems development than strict Computer Science though. Computer Science is a «messy» branch of discrete mathematics that is more theoretical than practical, but still aims to enable useful theory, e.g. algorithms. In Europe the broader umbrella term is «Informatics» which covers more applied fields as well as «Computer Science».
Jun 27 2022
parent reply forkit <forkit gmail.com> writes:
On Monday, 27 June 2022 at 12:18:10 UTC, Ola Fosheim Grøstad 
wrote:
 ...
 Too long… Are you suggesting that he said that they don't teach 
 OO? OO is more tied to modelling/systems development than 
 strict Computer Science though. Computer Science is a «messy» 
 branch of discrete mathematics that is more theoretical than 
 practical, but still aims to enable useful theory, e.g. 
 algorithms. In Europe the broader umbrella term is 
 «Informatics» which covers more applied fields as well as 
 «Computer Science».
No. He specifically is against promoting any 'particular' methodology.

His 'philosophy' of programming (like mine) is 'try them all' (or at least try more than one, and ideally one that is radically different to the other), so you can better understand the weaknesses and strengths of each. Only by doing this can you put yourself in a position to make better design choices.

The talk (and his book) is about getting programmers to not just focus on code that works, because that strategy is very hard to stop once it starts ('tactical tornadoes' he calls those programmers, as they leave behind a wake of destruction that others must clean up). This (he argues) is how systems become complicated. And sooner or later, these complexities *will* start causing you problems.

I think what he is saying is that most programmers are tactical tornadoes. He wants to change this. The long-term structure of the system is more important, he argues.

He is saying that CS courses just don't teach this mindset, which I found to be surprising. That's what he's trying to change.

I like this comment from his book:

"Most modules have more users than developers, so it is better for the developers to suffer than the users.".
Jun 27 2022
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Monday, 27 June 2022 at 22:48:35 UTC, forkit wrote:
 No. He specifically is against promoting any 'particular' 
 methodology.
Yes, you choose methodology based on the scenario. A method uses many different techniques. The point of a method is to increase the success rate when faced with a similar situation, not to be generally useful.
 He is saying that CS courses just don't teach this mindset, 
 which I found to be surprising. That's what he's trying to 
 change.
Well, they do say it. You cannot teach beginners everything at once. Most students are beginners. So you need many different angles, spread over many courses. Given the amount of theory, the time for practicing skills is very limited. You can teach students techniques, but you cannot teach them intuition, which takes decades.
 I like this comment from his book:

 "Most modules have more users than developers, so it is better 
 for the developers to suffer than the users.".
Yes, the common phrase is that code is read more frequently than written. Students, however, feel they are done when the code runs. Only a small percentage are mature enough as programmers to refine their skills. And that top 10% doesn't need the teacher... only the book and the assignment. Or rather, there are not enough resources. It takes time to mature (measured in decades) and in that time people will develop patterns. Only 5% of students are at a high level in programming IMHO.

Anyway, in the real world projects are delayed and code is written under time pressure. To get a module «perfect» you need to do it more than once. Very few projects can afford that kind of perfection, nor do they want programmers to rewrite modules over and over. Perfection is not a goal for applications. Only libraries and frameworks can try to achieve perfection.
Jun 27 2022
prev sibling parent reply forkit <forkit gmail.com> writes:
On Friday, 24 June 2022 at 13:48:35 UTC, Zoadian wrote:
 ...
 And to add something to the topic: objects are a great way to 
 think about things. They are, as it turned out, not that great 
 in programming.
 OOP leads to slow, hard-to-reason-about code. It encourages 
 mutable state, spread across the whole program. It's 
 essentially global variables all over again, just hidden behind 
 a layer of obfuscation.
 It also makes testing a lot harder to actually implement.
 That being said, there are cases where it's ok to use, but imho 
 they are incredibly rare.

 - Someone who works on hard realtime systems
I think if we put these claims to the test, they would be found wanting.

I'd also love to see what a non-OOP program would look like, if one were completing this assignment without the use of objects.

Certainly possible, no doubt. You could do it in C. Hell, you could do it in Assembly. But would you?

https://solidsoftware.com.au/Tool/Software/AI/Agents/TrafficAgents.html

That assignment is minuscule compared to software projects being undertaken every day, all around the world.
Jun 25 2022
prev sibling parent reply Paul Backus <snarwin gmail.com> writes:
On Saturday, 25 June 2022 at 09:29:09 UTC, forkit wrote:
 I think if we put these claims to the test, they would be 
 found wanting.

 I'd also love to see what a non-OOP program would look like, 
 if one were completing this assignment without the use of 
 objects.
Probably the most common "non-OOP" way of organizing data is to use tables. You see tables most commonly in relational databases, but also in data science (where they go by the name "data frame" [1]) and in low-level code using so-called "data-oriented design" (where they go by the name "structure of arrays" [2]).

Whether tables or objects are a better way of organizing data is a decades-old debate that I have no intention of wading into here. Regardless of which you prefer, you must admit that both tables and objects have a long history of successful use in real-world software.

[1] https://www.oilshell.org/blog/2018/11/30.html
[2] https://en.wikipedia.org/wiki/AoS_and_SoA
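For the curious, a minimal structure-of-arrays sketch in D (names are hypothetical), showing the "one column per field" layout that [2] describes:

// Structure of arrays: one contiguous column per field, rather than
// one record per object (array of structures).
struct Particles
{
    float[] x, vx;
    float[] y, vy;
}

void step(ref Particles p, float dt)
{
    // Integrating x touches only the x and vx columns; the y data
    // never has to enter the cache.
    foreach (i; 0 .. p.x.length)
        p.x[i] += p.vx[i] * dt;
}

void main()
{
    auto p = Particles([0f, 1f], [1f, 1f], [0f, 0f], [0f, 0f]);
    step(p, 0.016f);
}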
Jun 25 2022
next sibling parent reply Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 June 2022 at 18:05:31 UTC, Paul Backus wrote:
 Whether tables or objects are a better way of organizing data 
 is a decades-old debate that I have no intention of wading into 
 here.
What debate? You can easily represent objects in a relational database. One table for the superclass, another one for the subclass extension.

The main discussion has really been about whether you want relational or hierarchical structure. The latter is better for performance. It is fairly easy to distribute objects or hierarchical data, but relational is harder, yet very flexible. Keep in mind that XML databases often are implemented as relational tables, and you can represent objects as XML with ease…

The real clash is really with lower-level languages adding OOP without really improving on Simula's runtime organization. Which is sad. Still primitive, in other words.
Jun 25 2022
parent Ola Fosheim =?UTF-8?B?R3LDuHN0YWQ=?= <ola.fosheim.grostad gmail.com> writes:
On Saturday, 25 June 2022 at 18:23:20 UTC, Ola Fosheim Grøstad 
wrote:
 The real clash is really with lower-level languages adding OOP 
 without really improving on Simula's runtime organization. 
 Which is sad. Still primitive, in other words.
In other words, C++ messed up OOP by exposing the implementation and thereby depriving the compiler of optimization opportunities. OOP does *not* dictate a particular organization in memory.
Jun 25 2022
prev sibling parent forkit <forkit gmail.com> writes:
On Saturday, 25 June 2022 at 18:05:31 UTC, Paul Backus wrote:
 ...
 Whether tables or objects are a better way of organizing data 
 is a decades-old debate that I have no intention of wading into 
 here. Regardless of which you prefer, you must admit that both 
 tables and objects have a long history of successful use in 
 real-world software.
When people see something as challenging their beliefs, they do tend to dig in, and turn it into a lonnnnggg debate ;-)

But really, OO decomposition is just a tool. It's not an ideology (although many throughout computing history have pushed it as such). It's just a tool. That is all it is. Nothing more. It's a tool you should have the option of using, when you think it's needed.

A screwdriver makes for a lousy hammer. Just pick the right tool for the job.

If you're trying to model a virtual city, you'll almost have to use object decomposition. I mean, it makes complete sense that you would. Of course, you could model it using logic chips - but why would you? On the other hand, if you're writing a linker, it doesn't seem like OO decomposition would have any value whatsoever.

People need to be more pragmatic about this. Programming paradigms are just tools. They should not be used as the basis for conducting ideological warfare against each other ;-)
Jun 25 2022