How related are Naked Objects by Pawson and Concept Oriented Design?

This is the continuation of a conversation we had by email:

Hey professor

I was just rewatching this talk, and it reminded me a lot of Concept Oriented Design:

The researcher Richard Pawson is showing off his idea of Expressive Systems,
which he tries to summarize in a single sentence:
“It means designing systems, using behaviorally complete objects,
that show directly to the user and not allowing any other form of interaction.”

I very highly recommend you watch the whole talk, but the first 20 minutes are crucial to see what that definition looks like.

And if you watch the whole talk, you will see he agrees with you on many points.

Then, 10 years after that recording, he gave a second talk
reflecting on the system he had been describing,
and the success he has had with it.

Please check both talks. I think you will find a kindred spirit…
and we can continue talking about Concept Oriented Design and
patterns that could implement it.

Now I’ve found Pawson’s PhD thesis here:
http://downloads.nakedobjects.net/resources/Pawson%20thesis.pdf

And I think you might find the prologue (yes, that thesis has a prologue)
by Trygve Reenskaug (the original creator of MVC) interesting.
Professor Reenskaug makes a compelling argument for the concept of naked objects.

I have copy-pasted the prologue here in its entirety so that you can
take a look at it,
and decide if it’s aligned with Concept Oriented Design.
I think it is…
I think Concept Oriented Design > Domain Driven Design > Naked Objects > MVC
are part of a continuum of the same idea:
create better systems for the users,
by reflecting the mental model of the user in the system
development itself.

Here is the foreword.

Foreword by Trygve Reenskaug

(Author’s note: Prof. Reenskaug was the external examiner for this
thesis. One of the
pioneers of object-oriented programming, he is best known as the
inventor of the Model-View-Controller pattern. After the thesis was
accepted, he generously agreed to write a foreword to the
electronically-published version.)

The world’s shipbuilding industry went through a heavy modernization
program through the nineteen fifties and sixties. My colleague Kjell
Kveim invented a control unit for the numerical control of machine
tools such as flame cutters. The first unit was installed at the Stord
shipyard in 1960 and opened the field for integrated computer aided
design and manufacturing. The matching design system, Autokon, was
first deployed at the Stord Yard in 1963. Autokon was subsequently
adopted by most major shipyards around the world and was still in use
by the late nineties.

The purpose of Autokon was to empower the ship’s designers. It had a
central database holding important information about the shape of the
hull, the position of frames and decks, and the shapes of the parts.
There was a language that permitted the designers to specify parts in
shipbuilding terminology. There was a designer programming facility so
that they could specify families of similar parts.

Autokon was developed in close collaboration between shipbuilders and
scientists. We considered ourselves as tool builders; the success
criterion was that the tools should be handy, practicable, serviceable
and useful. The success and longevity of Autokon was no doubt because
it was human-centric, reflecting the nature of shipbuilding and the
everyday life of the shipbuilder.

In another part of the world, Douglas Engelbart worked for his vision
of using computers to augment the human intellect. This quote is from
1962:

By “augmenting human intellect” we mean increasing the capability
of a man to approach a complex problem situation, to gain
comprehension to suit his particular needs, and to derive solutions to
problems. . . Increased capability in this respect is taken to mean a
mixture of the following: more-rapid comprehension, better
comprehension, the possibility of gaining a useful degree of
comprehension in a situation that previously was too complex, speedier
solutions, better solutions, and the possibility of finding solutions
to problems that before seemed insoluble.
(Douglas C. Engelbart: Augmenting Human Intellect: A Conceptual Framework.
Stanford Research Institute, Menlo Park, Ca., October 1962)

Much later, in the seventies, I worked with a system for the
distributed planning and
control of shipbuilding operations. The goal was to create a system
that could be mastered by the users so that they could tailor it to
suit their individual needs without compromising the goals of the
enterprise as a whole. The key was that the users’ mental models
should correspond to the models built into the computer. One result of
this endeavour was the Model-View-Controller architecture that I
developed as a visiting scientist at Xerox PARC in 1978/79. Its
purpose was to bridge the gap between the user’s mind and the computer
held data. The centre of this solution was the Model that was a
representation of the user’s domain information. The View was an
editor that enabled the user to inspect and modify this information.
The Controller coordinated the capabilities of several Views making it
a comprehensive tool that the user applied in the performance of
specific tasks.

The original MVC was later modified in Smalltalk-80 to become a
technical solution that separated input, output and information. The
most important participant in the original MVC architecture, the
user’s mind, was somehow forgotten.

The original version of MVC was never published. In my naïveté, I believed that
everybody wanted to empower their users so that MVC was merely an
obvious solution to a common problem. I was wrong. There are two
traditions in the applications of computers; one is to employ the
computer to empower its users, and the other is to apply the computer
to control its users. I am sorry to say that the latter seems to be
prevalent in mainstream computing today. I have been told that in many
implementations of the “well known MVC paradigm”, the “C” is
implemented as a script controlling the user’s actions.

I can only speculate why our industry fails to give users what they
clearly need and want. There could be reasons related to
organizational culture, or they could be related to certain software
business models. A widespread myth is that current software is
inherently complex; so complex that ordinary people cannot possibly
understand it and that it is only reasonable to expect flaws.

Consider a forest with birds singing in the trees and flowers covering
its floor. You can
easily walk along its paths or you can be adventurous and make your
own paths. You can select any aspect of its complex ecosystem and study
it for your doctoral thesis. There is unlimited complexity, yet any
human can master it to suit his or her purposes. There is no reason
why a computer system should be more complex than a forest. I believe
that the current complexity is man-made, and that we can resolve it by
changing our approach to software development. We merely need to get
our priorities right and create the appropriate tools. If we decide to
build systems for people, then we will get systems that can be
mastered by people.

In the quarter century since the inception of MVC, there has been
little progress in
empowering the users. This is where Pawson’s work comes as a fresh
contribution in an otherwise drab market. If the original MVC had been
published at the time, Naked Objects would now appear as an important
extension and implementation of its ideas. As it is, the original MVC
was not published at the time and Richard Pawson’s Naked Objects
appear as an important and independent contribution.

The Naked Objects method and software give two important contributions
to the evolution of information system technology:

  • The first and foremost is that it augments the human mind in a way
    that conforms with Douglas Engelbart’s vision of 1962. In the
    seventies, Alan Kay and his group extended the augment idea with
    Smalltalk. Smalltalk was (and still is) a personal information
    environment entirely consisting of objects. The main idea was that the
    objects should be meaningful to the user and appear concrete by
    presenting themselves in an appropriate way on a screen or in a
    loudspeaker so that the user could observe them and manipulate them.
    Pawson brings this idea a significant step forward. Where Smalltalk
    focuses on individual objects, Pawson concentrates on the domain model
    as a structure of interrelated, behaviourally complete objects. User
    and programmer work together to ensure that the manifest model in the
    computer faithfully models the user’s mental model of the domain. The
    first field study was a project with the DSFA. One of the premises was
    that “The DSFA is committed to moving away from conventional
    assembly-line approach to claims processing, where each person
    performs a small step in the process, towards a model where more of
    its officers can handle a complete claim and appropriately-trained
    officers might in future handle all benefits for one customer.” Naked
    Objects strongly support this augmentation goal.

  • The second important contribution is that Pawson lets the objects
    present themselves to the user in a standardized way. One advantage is
    that the user interface software can be generated automatically.
    Another is that the user gets into direct contact with the model since
    the objects are shown without being cluttered or camouflaged by fancy
    graphics. The result is systems that the users can feel at home in and
    master since they reflect the users’ professional knowledge directly
    without any unnecessary frills.

Neither Pawson nor I believe that the current Naked Objects represent
the end of the road. On the contrary, Naked Objects represent a new
beginning pointing towards a novel generation of human-centred
information systems.

Oslo, June 2004
Trygve Reenskaug

Then Professor Jackson answered with this message:

Thank you for sending me this. I’d come across Naked Objects a long time ago, but the idea is certainly one that I will revisit. I’m actually working on implementation paradigms for concept design, and as part of that, looking at the whole history of attempts to get better modularity.

One thing I’m a bit skeptical about—but admittedly I haven’t studied Pawson’s approach properly yet—is that it seems to be very object-oriented, and thus vulnerable to the criticisms that led to many different approaches that sought to decouple the different threads of functionality (what I would call concepts) that are often inherent in objects—this is what aspects, subject-oriented, role-oriented approaches etc tried to do.

Pawson seems to be opposed to the idea of separating out services, but I’m not sure how he’s able to avoid having an object hold all the concepts it might participate in. Perhaps in many cases that problem can be side-stepped by introducing new objects that actually represent concepts: for example, if I wanted to represent an Authentication concept I could introduce a class of authentication objects.

I’m curious to look into this, but curious to know your take…

Best,

Daniel

I agree with you that Naked Objects mostly tries to ignore “threads of functionality” (if I understand that concept correctly).
Could we consider a “thread of functionality” to be like a “process”, commonly represented by an FSM (finite state machine)?
Please correct me if I’m wrong, and I’ll go back and re-read “The Essence of Software”.

Anyway, Naked Objects tries to ignore “threads of functionality”.
Tries, but doesn’t ignore them completely.
Even in the first video that I sent you,
he mentions in passing an icon called “Cases”, which he explains would normally be a workflow in a traditional script-based process,
like the example he mentions of the taxi company.

I think there is a third way. It isn’t the complete liberty of trying to connect every object, like Naked Objects,
nor is it the idea of having a workflow system based only on a script that doesn’t allow for the creative connection of objects.

That third way is presented in this blog post by Bertrand Meyer:

The whole post is interesting, but the most interesting part is section 3, where he presents an example of an INSURANCE_CLAIM class.

The really interesting part is how he encodes the process
as the invariant of the class:

invariant       — “⇒” is logical implication.

is_evaluated ⇒ is_investigated
is_reserved  ⇒ is_evaluated
is_resolved ⇒ is_agreed or is_imposed
is_agreed ⇒ is_evaluated
is_imposed ⇒ is_evaluated
is_imposed ⇒ not is_agreed
               — Hence, by laws of logic, is_agreed ⇒ not is_imposed

end

So the invariant guarantees that, no matter what happens, is_resolved can only hold if either is_agreed or is_imposed has been established as part of the process.
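
In fact the invariant is executable. Here is a minimal sketch of how it could be enforced in Python (the class, the action methods, and the _check_invariant helper are my own illustration of Meyer’s idea, not his code; “p ⇒ q” is encoded as “not p or q”):

class InsuranceClaim:
    def __init__(self):
        self.is_investigated = False
        self.is_evaluated = False
        self.is_reserved = False
        self.is_agreed = False
        self.is_imposed = False
        self.is_resolved = False

    def _check_invariant(self):
        # Each assertion mirrors one implication of the class invariant.
        assert (not self.is_evaluated) or self.is_investigated
        assert (not self.is_reserved) or self.is_evaluated
        assert (not self.is_resolved) or (self.is_agreed or self.is_imposed)
        assert (not self.is_agreed) or self.is_evaluated
        assert (not self.is_imposed) or self.is_evaluated
        assert not (self.is_imposed and self.is_agreed)

    def investigate(self):
        self.is_investigated = True
        self._check_invariant()

    def evaluate(self):
        self.is_evaluated = True
        self._check_invariant()

    def agree(self):
        self.is_agreed = True
        self._check_invariant()

    def resolve(self):
        self.is_resolved = True
        self._check_invariant()  # fails unless is_agreed or is_imposed holds

No matter which actions are added later, any trace that violates the process blows up at the first offending step.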

Probably we could model the AUTHENTICATION example that you mention the same way, don’t you think?

PS:
Maybe it is worth modelling the OAuth authentication process
using Concept Oriented Design,
but with Meyer’s invariants as the notation (as in the INSURANCE_CLAIM example).
I’ll think about it and write a blog post about it…
Who knows, maybe it could become a paper.
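
A first rough cut of such an invariant (the state flags below are my own guesses at the standard OAuth 2.0 authorization-code flow, not taken from RFC 6749 or any framework), written as Python assertions since “p ⇒ q” is just “not p or q”:

# Sketch: an OAuth authorization-code flow invariant, INSURANCE_CLAIM style.
# All flag names are hypothetical.

def check_oauth_invariant(s):
    # s is any object carrying boolean flow flags.
    assert (not s.is_code_issued) or s.is_user_authenticated    # code ⇒ login
    assert (not s.is_token_issued) or s.is_code_issued          # token ⇒ code
    assert (not s.is_resource_served) or s.is_token_issued      # access ⇒ token
    assert (not s.is_token_revoked) or s.is_token_issued        # revoke ⇒ token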

Hi @alejandro.garcia! Thanks for starting this very interesting thread.

Bertrand’s blog makes a lot of good points that I agree with, but I think he’s also mistaken in a few essential respects.

Object oriented? I see why Bertrand uses the term OO here, but it’s really not about objects. His argument is about whether you specify behavior using scenarios or actions on states. Theoretically, these are equivalent, because any state machine can be defined either as a set of traces or as a transition relation (which can be partitioned amongst the actions). The approach that he favors, in which a state machine is specified by actions with pre- and post-conditions, has been the standard approach to specification since the 1970s for most specification languages (VDM, Z, B, Alloy, etc), and none of those languages are object-oriented. The scenario-based approach has its intellectual precedents in CSP and in some early work that Parnas did on trace-based specs. I understand why Bertrand favors the term OO though, because it’s more accessible, and he has pioneered the use of pre- and post-conditions in an OO setting with his work on “design by contract.”

The core claim. Bertrand summarizes like this:

We will limit ourselves here to the core observation: logical constraints subsume sequential specifications; you can deduce the latter from the former, but not the other way around; and focusing on abstract logical specifications leads to a better understanding of the requirements.

This, in my view, is not quite right. As I discuss in my book (see Note 43), scenarios provide a much more understandable overview of a dynamic behavior. Just look at Bertrand’s example and see if you can figure out what an insurance claim process involves, and compare that to the simple scenario. It’s just not that easy to infer traces in your head from pre- and post-conditions.

More fundamentally, I argue in my book that there’s a respect in which scenarios cannot be derived from actions. It’s true that the set of all possible scenarios can be derived; those are just the traces of the state machine that the actions define, and they can be automatically enumerated (Alloy will do this for you). But the point of scenario specification is to identify some key scenarios as being the essential ones for the design. These scenarios typically do not use all the actions, and they are chosen to show how the purpose is fulfilled.

For example, in my operational principle for the Style concept, the scenario involves creating a style, assigning it to two elements (paragraphs, say) and then updating the style. If you don’t assign it to more than one element, you’re not demonstrating how the purpose is fulfilled (since the purpose is precisely to let you update the styling of multiple elements at once). On the other hand, the scenario does not involve deleting a style, because it’s not a motivating part of the design. I would wager that without this scenario someone who had not seen Style before would not be able to figure out what it’s for. (This argument, btw, is inspired by Michael Polanyi’s argument for the operational principle, and his explanation of how knowing all the scientific properties of a device, eg a clock, is not enough to understand how it works—you need a scenario that puts it all together and demonstrates the purpose.)

In case you might think that a delete action is always likely to be unimportant, note that in the AuthToken concept (in which a user generates tokens for other users to access a resource), the action that revokes the token (a kind of deletion) is essential to the operational principle, since it’s revocation that motivates the whole design!

Separating concerns. It’s interesting that Bertrand’s main example of over-specification in the scenario is that the actions for setting a reserve and evaluating a claim might occur in a different order. I suspect this is because the management of the reserve, and the settling of claims, might actually be two different aspects of functionality. Note that insurers set reserves even for events that do not yet have claims associated with them. From a concept design point of view, I would start from the assumption that Claim and Reserve are distinct concepts, and connected only by syncs. Then the lack of ordering becomes very natural and is no longer a subtle property of the specification.

Agreeing with Bertrand. I do agree with Bertrand in some key respects though. For specifying the details of behavior, actions are much more succinct, and having to enumerate all use cases is very tedious. That’s why concept behavior is specified with actions and states (see the tutorials about state machines and concept states). On the other hand, highlighting one or two essential scenarios (the operational principle) lets you explain very compellingly how the action design meets the purpose.

Are objects enough? Back to the question you started with, which Pawson raises: can you describe intended functionality purely in terms of objects? In some trivial sense you can, because as we’ve noted, you can turn any process into an object. But in this case, objects seem to be an unnatural fit. In Bertrand’s example, he ignores the part of the scenario in which the claim is assigned to an adjuster. Where does that go? Perhaps Bertrand and Pawson would say that you can put that in a Clerk object, but that seems likely to end in trouble, since clerks presumably do all kinds of other things unrelated to this.

Here’s a question I have about objects and their ability to represent higher level design notions.

In most systems, you have objects that correspond to domain entities. But you often need functionality that locates particular objects, and it’s not clear to me where this functionality goes in the extreme OO approach.

For example, in an authentication scheme, we might have User objects and Session objects. But when a user enters a username and password, the system has to check that those credentials are correct, and then create a session object. Where does this functionality go?

Of course you could create a singleton Authentication class whose object does this, but that object doesn’t correspond to anything in the domain. It’s just a container for some functionality. The same issue arises in most concepts. Take Label for example, in which items are assigned labels and then retrieved by filtering on labels. Where does the code go that maintains the item-label mapping and performs the filtering?

In short, my view is that objects are a very good implementation structure, but an OO structure is not generally a good model of a domain, or a good way to structure functionality at the design level.

Thank you to Alejandro for pointing me at this discussion. I’d just like to make a few points:

  1. (Minor) the Prologue by Trygve Reenskaug was not a part of the thesis. Trygve was the external examiner. He wrote the Prologue, at my invitation, after the PhD had been awarded - just for the published version.

  2. The ‘expressive systems’ video (2001) was from the very early days of the idea - it evolved and matured a great deal since then.

  3. There definitely are ‘service objects’ in a Naked Objects implementation. The most common use of this is as a Repository or Factory (per Domain Driven Design use of that term). Other uses of a service object are to provide access to system services such as sending an email, or communication with an external system. All such services that a domain object needs are injected into the domain object using a dependency injection framework. What Naked Objects eschews are the concepts of services as application building blocks (i.e. any form of SOA), or services/threads that sit on top of the domain objects. Sure, there are persistent domain objects that have an intended progression of state - even a customer’s Order has that - but these are still persistent domain entities. (Early on we made much use of Peter Coad’s Object Modelling in Color, where domain objects are classified into four - differently coloured - archetypes. A good but little-known idea.)

  4. You might be interested to read this article that I published two years ago: From Naked Objects to Naked Functions. Systems I have written since that time use Naked Functions - because I became a convert to the functional programming paradigm. The structure of the resulting systems is really very similar - it is just that all the domain entities are immutable, and all the functions that apply to them are 100% pure functions (side effect free). The article explains how this works in practice on enterprise systems. The other big surprise is that the Naked Functions framework is ~92% the same as Naked Objects - in fact we factored out all the common elements into a more abstract ‘Naked Framework’ - it is only the Reflector that is different. It follows that the Naked Framework could be applied to other programming models, too, such as your ‘Naked Concepts’. (Sorry, but please don’t ask me to get involved - I am involved in another exciting education-related project now - not related to Naked* and this demands all my available time!).

In summary, the really important idea from my work on Naked Objects was that writing an application should consist solely of writing the domain layer, and that 100% of the UI, and the persistence layer, should be generated automatically from the domain model representation, by reflection. This is not just for efficiency of development and maintenance, but to ensure that your domain representation is following the language of users. As soon as you allow any form of customisation of the UI, the UI starts to part company from the domain representation, and the rot sets in.

Best wishes for your concept-oriented approach.

Richard

Thank you for the thoughtful response.
Let me work through it:

Object oriented?… The approach that he favors, in which a state machine is specified by actions with pre- and post-conditions, has been the standard approach to specification since the 1970s for most specification languages (VDM, Z, B, Alloy, etc), and none of those languages are object-oriented.

Although in the example the FSM is specified through the interaction between pre-/post-conditions and the invariants,
we could codify the whole FSM just in the invariants.

… scenarios provide a much more understandable overview of a dynamic behavior. Just look at Bertrand’s example and see if you can figure out what an insurance claim process involves, and compare that to the simple scenario. It’s just not that easy to infer traces in your head from pre- and post-conditions.

I agree that it’s difficult to visualize the simple scenario from the invariants.

But I love that:

  1. the whole FSM could be in a single place.
  2. since it’s in the invariant, the FSM is guaranteed to be respected during the execution of the program.

And although it’s difficult to create a trace of the process in my head,
it isn’t as bad as the current practice, which is to spread the FSM all over the code.

For example, in the original post the invariants look like this:

invariant — “⇒” is logical implication.

is_evaluated ⇒ is_investigated
is_reserved ⇒ is_evaluated
is_resolved ⇒ is_agreed or is_imposed
is_agreed ⇒ is_evaluated
is_imposed ⇒ is_evaluated
is_imposed ⇒ not is_agreed

And just by flipping the arrows we can generate a state diagram like this:

@startuml
[*] --> is_investigated : investigate
is_investigated --> is_evaluated : evaluate
is_evaluated --> is_reserved : set_reserve
is_agreed --> is_resolved : resolve
is_imposed --> is_resolved : resolve
is_evaluated --> is_agreed : negotiate
is_evaluated --> is_imposed : impose
@enduml
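
The flipping itself is mechanical. A small script (my own sketch, with the implication list hard-coded from the invariant above) can emit those PlantUML lines; a disjunction like “is_resolved ⇒ is_agreed or is_imposed” expands to one edge per disjunct, a pure exclusion like “is_imposed ⇒ not is_agreed” is a constraint rather than an edge, and the transition labels (“: evaluate” etc.) still have to be supplied by hand:

# Sketch: derive PlantUML transitions by flipping the invariant’s arrows.
# “p ⇒ q” means q must already hold before p, so the edge runs q --> p.

implications = [
    ("is_evaluated", "is_investigated"),
    ("is_reserved", "is_evaluated"),
    ("is_resolved", "is_agreed"),    # one edge per disjunct
    ("is_resolved", "is_imposed"),
    ("is_agreed", "is_evaluated"),
    ("is_imposed", "is_evaluated"),
]

print("@startuml")
print("[*] --> is_investigated")
for target, source in implications:
    print(f"{source} --> {target}")
print("@enduml")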

Not too difficult,
and we can already see there is something shady with that dangling is_reserved.
So I went to see the preconditions of is_agreed and is_imposed.

And discovered the FSM was incomplete in the invariant.
But with a couple of modifications it was complete,
and this is the new diagram.

More fundamentally, I argue in my book that there’s a respect in which scenarios cannot be derived from actions. It’s true that the set of all possible scenarios can be derived; those are just the traces of the state machine that the actions define, and they can be automatically enumerated (Alloy will do this for you). But the point of scenario specification is to identify some key scenarios as being the essential ones for the design. These scenarios typically do not use all the actions, and they are chosen to show how the purpose is fulfilled.

Yeap… and this conversation we are having
has convinced me that scenarios must be documented as part of the code.
In a language like Pyret (by Brown University), they can be embedded in the where: or check: blocks. [pyret]
In some others they can be documented as doctests. [doctest]

In other languages they can be part of examples (Smalltalk’s Glamorous Toolkit [gtoolkit]),
and in all the others as part of the unit test framework (xUnit).

But the code must still be able to fulfill all the valid traces,
not just the “main success scenarios”.
That’s why I think their proper place is in a unit test.
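
For example, the operational principle of your Style concept could live right next to the code as a doctest (a minimal sketch; the Style and Element classes and their method names are my own invention):

# Sketch: an operational principle documented as an executable doctest.

class Style:
    """A named style shared by many elements.

    Operational principle: assign one style to two paragraphs,
    then update the style once and both paragraphs change.

    >>> s = Style(font="Times")
    >>> p1, p2 = Element(), Element()
    >>> s.assign(p1); s.assign(p2)
    >>> s.update(font="Helvetica")
    >>> p1.font == p2.font == "Helvetica"
    True
    """
    def __init__(self, font):
        self.font = font
        self.elements = []

    def assign(self, element):
        self.elements.append(element)
        element.font = self.font

    def update(self, font):
        self.font = font
        for e in self.elements:
            e.font = font

class Element:
    def __init__(self):
        self.font = None

if __name__ == "__main__":
    import doctest
    doctest.testmod()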

In both of his talks Pawson makes the point [dcfa] that
most management systems are “script” based,
and therefore they deprive the users (clerks)
of the agency to solve the customers’ problems.

By using Naked Objects,
i.e. allowing all valid threads of execution,
users/clerks get software that enables them to be

“problem solvers, not just process followers”.

(This argument, btw, is inspired by Michael Polanyi’s argument for the operational principle, and his explanation of how knowing all the scientific properties of a device, eg a clock, is not enough to understand how it works—you need a scenario that puts it all together and demonstrates the purpose.)

Yeap, completely agree.
While I dislike most of UML notation,
I think the sequence diagram is the most useful UML diagram,
because it shows the dynamic view (i.e. how the clock’s parts interact with each other, in time),
versus all the other diagrams, which show only the static view (i.e. the parts of the clock).

In case you might think that a delete action is always likely to be unimportant, note that in the AuthToken concept (in which a user generates tokens for other users to access a resource), the action that revokes the token (a kind of deletion) is essential to the operational principle, since it’s revocation that motivates the whole design!

Agreed!
And we can document, in the invariant of the AUTHENTICATION class,
that in order to revoke a token,
the token must have been issued first.

There is no contradiction there.

And we can document the operational principle
as a TestCase of the AUTHENTICATION class.
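
A minimal sketch of both halves (the AuthToken class, its method names, and the grants_access helper are my own illustration; the invariant line is just “is_revoked ⇒ is_issued”):

import unittest

class AuthToken:
    def __init__(self):
        self.is_issued = False
        self.is_revoked = False

    def _check_invariant(self):
        assert (not self.is_revoked) or self.is_issued  # revoked ⇒ issued

    def issue(self):
        self.is_issued = True
        self._check_invariant()

    def revoke(self):
        self.is_revoked = True
        self._check_invariant()  # fails if the token was never issued

    def grants_access(self):
        return self.is_issued and not self.is_revoked

# The operational principle, documented as an executable test case.
class AuthTokenOperationalPrinciple(unittest.TestCase):
    def test_revocation_motivates_the_design(self):
        token = AuthToken()
        token.issue()
        self.assertTrue(token.grants_access())
        token.revoke()  # the action that motivates the whole design
        self.assertFalse(token.grants_access())

if __name__ == "__main__":
    unittest.main()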

Separating concerns. It’s interesting that Bertrand’s main example of over-specification in the scenario is that the actions for setting a reserve and evaluating a claim might occur in a different order. I suspect this is because the management of the reserve, and the settling of claims, might actually be two different aspects of functionality. Note that insurers set reserves even for events that do not yet have claims associated with them. From a concept design point of view, I would start from the assumption that Claim and Reserve are distinct concepts, and connected only by syncs. Then the lack of ordering becomes very natural and is no longer a subtle property of the specification.

Pawson claims in the second video
that since the clerks are directly exposed to the domain model,
they quickly disambiguate the situation:
are reserves set before a claim, after, or are they independent?

Clerks will let you know as soon as they try to perform the task
and the model doesn’t let them.
Which I find a fascinating proposition
for quickly converging on a correct model.

Agreeing with Bertrand. I do agree with Bertrand in some key respects though. For specifying the details of behavior, actions are much more succinct, and having to enumerate all use cases is very tedious. That’s why concept behavior is specified with actions and states (see the tutorials about state machines and concept states). On the other hand, highlighting one or two essential scenarios (the operational principle) lets you explain very compellingly how the action design meets the purpose.

So I’m converging on something like this:
all the behaviours must be modeled as part of an FSM,
so that all the valid threads of execution
can be expressed in the system.

However, the happy path / main success scenario / operational principle
must be documented in an executable form (a test case),
so that we know for sure that the most important threads of execution
are covered by the system.

Are objects enough? Back to the question you started with, which Pawson raises: can you describe intended functionality purely in terms of objects? In some trivial sense you can, because as we’ve noted, you can turn any process into an object. But in this case, objects seem to be an unnatural fit. In Bertrand’s example, he ignores the part of the scenario in which the claim is assigned to an adjuster. Where does that go? Perhaps Bertrand and Pawson would say that you can put that in a Clerk object, but that seems likely to end in trouble, since clerks presumably do all kinds of other things unrelated to this.

Yes, it would go in the Clerk object.

-- A Clerk starts a Claim and assigns it to an agent.

new_claim := AClerk.start_claim(with: AgentBob)

I mean, why wouldn’t it?
If a Clerk has many responsibilities in the real world,
then the Model should also reflect that reality.

After reading @RichardPawson’s article “From Naked Objects to Naked Functions”,
I was even more confused, because Naked Functions also looked a lot, to me,
like Concepts.
How could it be both? Then it hit me:

Concepts aren’t nouns, like objects,
nor are they verbs, like functions.
They are gerunds!
(A verb that acts like a noun.)

Or, to say it another way:

If OOP is noun-verb:
post.upvote()
and functional is verb-noun:
upvote(post)
then Concept Oriented Programming is composing gerunds:
upvoting a posting, or
posting *with* upvoting.

In English the -ing form has the advantage that we can think of it as a noun in the static view (i.e. upvoting, tagging),
but as a present participle in the dynamic view (tagging a post, or tagging a posting).
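
A toy sketch of what composing gerunds might look like in code (the Posting and Upvoting classes and their synchronization over post ids are purely my own illustration, not an existing framework):

class Posting:
    def __init__(self):
        self.posts = []

    def post(self, content):
        self.posts.append(content)
        return len(self.posts) - 1  # the new post’s id

class Upvoting:
    def __init__(self):
        self.votes = {}  # item id -> vote count

    def upvote(self, item):
        self.votes[item] = self.votes.get(item, 0) + 1

# “posting with upvoting”: two independent concepts synchronized over
# post ids, rather than an upvote method baked into a Post class.
posting, upvoting = Posting(), Upvoting()
pid = posting.post("hello world")
upvoting.upvote(pid)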

That’s why, for me, both Naked Objects and Naked Functions…
seemed very close to, but not quite, what Concepts are.

So, @RichardPawson is correct:
we would need to develop a NakedConcepts framework
to reach the appropriate level of composability and representability for Concept Oriented Programming.