Objects in Axum

When we talk about Axum as a programming language, we make the point that it is not an object-oriented language, but that it is still object-aware. What do we mean by this, and is it really true that you cannot define objects with Axum?

What we mean is that the core concepts of Axum are not the “objects” of object-oriented programming, but agents and domains. These could be viewed as objects, of course, but they have so many constraints placed on them that any fan of OO programming would protest against calling Axum an OO language, and doing so would obscure the central point that we are trying to make. On the other hand, being a .NET language means floating on a sea of objects, so Axum must be aware of the underlying platform and its central paradigm, which is inescapably object-oriented.

Then, we usually say that “in fact, you can’t even define a class in Axum,” as if to prove the point that it’s not OO. This is true: there is no way to define classes. However, there is a way to define types, which we call “schema.” In C++ jargon, a schema type would be called a POD, something less than a full-fledged object. We’ve heard from some that “schema” isn’t a good name for this, so if you have a better name that also works well as a language keyword, we’re all ears.

A schema is a .NET class that contains only public properties, side-effect-free methods, and a new kind of member called a ‘rule.’

Schema types are intended for use as payload definitions for channel communication, and thus are guaranteed to be deeply cloneable. The compiler generates the clone code, which is about 100x faster than reflection-based cloning. Schema instances are also guaranteed to be serializable and therefore automatically suitable for inter-process communication.

Why do we need this? If you are familiar with distributed programming, a schema is just a data-transfer object (DTO) type, but with language support. The original reason for DTOs was to cut down on round-trips across the network – calling setters and getters on a remote object wasn’t really feasible. For Axum, the reason is somewhat different: it’s another constraint placed on objects. We simply cannot trust that types implementing ICloneable are doing so in a deep fashion (there’s no formal requirement to do so).

We could have built a deep-clone runtime capability based on reflection, but that would be orders of magnitude slower than the compiler-generated clone that having language support allows us to rely on.
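
To make this concrete, here is a rough idea of what the generated clone might look like, written out by hand in C#. This is an illustration only – the type names and the DeepClone method are invented for the example, not actual compiler output:

public class Address
{
    public string City;

    public Address DeepClone()
    {
        return new Address { City = this.City };
    }
}

public class Order
{
    public string Id;
    public Address ShipTo;

    // Member-wise deep clone: direct field copies plus recursive
    // clones of nested schema members – no reflection involved.
    public Order DeepClone()
    {
        return new Order
        {
            Id = this.Id,   // strings are immutable, safe to share
            ShipTo = this.ShipTo == null ? null : this.ShipTo.DeepClone()
        };
    }
}

Because every member is itself a schema type or a simple value, the compiler can emit straight-line code like this with no type discovery at runtime, which is where the speed difference comes from.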

A simple Address schema for US addresses:

schema Address
{
    required String StreetAddress;
    required String City;
    required String State;
    required String ZipCode;

    rules
    {
        require ! String.IsNullOrEmpty(StreetAddress);
        require ! String.IsNullOrEmpty(City);
        require ! String.IsNullOrEmpty(State);
        require ! String.IsNullOrEmpty(ZipCode);
        require State.Length == 2;
        require ZipCode.Length == 5 || ZipCode.Length == 10;
    }
}


Specifying rules for a schema is entirely optional, but can be useful both for the runtime enforcement it provides and for the additional information it gives the reader of the source code. The rules are enforced when you send data to a channel port; they may only involve calls to methods that are known to be side-effect free.
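
Here is a sketch, in hypothetical hand-written C#, of the kind of check the compiler could generate from the rules above – the ValidateRules name and the choice of exception are invented for illustration:

using System;

public class Address
{
    public string State;
    public string ZipCode;

    // One check per 'require' clause; the runtime would invoke
    // this before letting the instance cross a channel port.
    public void ValidateRules()
    {
        if (String.IsNullOrEmpty(State) || State.Length != 2)
            throw new InvalidOperationException("rule violated: State");
        if (String.IsNullOrEmpty(ZipCode) ||
            (ZipCode.Length != 5 && ZipCode.Length != 10))
            throw new InvalidOperationException("rule violated: ZipCode");
    }
}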

Schema are versionable, meaning that the version of the schema that you use to write a serialized object and the one you use to de-serialize don’t have to be exactly the same. When de-serializing a schema instance from a stream, only the required properties need to be found in the stream; the schema may also contain a number of optional properties, which, if not present, will be given default values.

If an optional field is present in the stream but not recognized by the target schema type, the data is stored in a private data structure so that the instance can be re-serialized without losing the information.
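
As a sketch of how a version-tolerant schema instance could be laid out – again in hypothetical C#, with invented names:

using System.Collections.Generic;

public class Address
{
    public string City;              // required: must be present in the stream
    public string Country = "US";    // optional: given a default when absent

    // Fields written by a newer schema version that this version
    // doesn't recognize are parked here, so that re-serializing
    // the instance loses nothing.
    private readonly Dictionary<string, object> unknownFields =
        new Dictionary<string, object>();

    public void RecordUnknownField(string name, object value)
    {
        unknownFields[name] = value;   // preserved for round-tripping
    }
}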

Schema types are really simple – everything (except the type itself) is public, methods must be side-effect free, and the property definitions look like fields, i.e. you don’t get to define the implementation. Schema rules are invoked by the runtime.

We’ve discussed internally whether schema instances ought to be immutable, a property that would have all kinds of nice implications, but the code in the CTP that we hope to announce soon on this blog does not treat them as immutable. This is one area where getting feedback would be very valuable to us – should our transfer objects be immutable? In J2EE, for example, there is no strong recommendation one way or the other, but I’m thinking we should be a bit more specific.

I’m also thinking that we need to add compiler-generated equality and hash-code functions to make sure that schema have value rather than reference equality semantics.
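
For the Address schema above, the generated members might behave like this hand-written C# – a sketch of the semantics we have in mind, not a committed design:

public class Address
{
    public string City;
    public string ZipCode;

    public override bool Equals(object obj)
    {
        Address other = obj as Address;
        return other != null
            && City == other.City          // compare contents,
            && ZipCode == other.ZipCode;   // not references
    }

    public override int GetHashCode()
    {
        int hash = 17;
        hash = hash * 31 + (City == null ? 0 : City.GetHashCode());
        hash = hash * 31 + (ZipCode == null ? 0 : ZipCode.GetHashCode());
        return hash;
    }
}

Clearly, schema types are by no means a finalized concept…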

Niklas Gustafsson

Comments (29)

  1. Michael Hart says:

    A big definite yes for immutability from me. The more the better.

    And please include it in the CTP!

  2. Steve Bjorg says:

    It should have value-type semantics where local modifications are not visible to others unless communicated back (at which point the rules are run).  Of course, it should be as fast as reference-type values for passing around. In other words, it should be a bit like memory-transacted objects.

    Since you’re in control of the whole data flow experience, I would think this would be easy to accomplish.  Each schema has a corresponding "journal" helper that keeps track of locally modified fields.  When it comes time to send the schema instance to someone else, the journal creates a new immutable instance using the original fields and the modified fields tracked by the journal.  This approach provides the benefit that instance fields can be read and written to as if the instance were mutable.  Also, it creates clear semantics on when the schema rules/invariants are triggered: namely when the journal creates the new instance.  This transition could be triggered at transmission time or some other boundary (e.g. a "modify" keyword similar to "using" in C# inside which an instance becomes journaled).
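
    In rough, entirely hypothetical C# (all names invented), the journal might look something like this:

        using System.Collections.Generic;

        public class Address
        {
            public readonly string City;
            public readonly string State;
            public Address(string city, string state) { City = city; State = state; }
        }

        public class AddressJournal
        {
            private readonly Address original;
            private readonly Dictionary<string, string> changes =
                new Dictionary<string, string>();

            public AddressJournal(Address original) { this.original = original; }

            public void Set(string field, string value) { changes[field] = value; }

            // Builds a fresh immutable instance from the original plus the
            // tracked changes; the schema rules would run here, at commit time.
            public Address Commit()
            {
                return new Address(
                    changes.ContainsKey("City") ? changes["City"] : original.City,
                    changes.ContainsKey("State") ? changes["State"] : original.State);
            }
        }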

    Just some food for thought. 🙂

  3. tobi says:

    a definite no for immutability. because objects are cloned when they cross channels there is no risk of concurrent access/mutation of an object. the application can still treat a schema as immutable of course. what about having a keyword immutable that can be applied to schema definitions?

    immutability is very very restrictive in this case because just about every piece of data would be immutable. in order to update a single field you have to clone the instance and assign it to your reference variable. that is very unproductive.

    there are just too many cases where immutability is too limiting. again, you can treat objects as immutable in your code if you like (but not the other way round).

  4. Shawn B. says:

    How about making mutability optional?  I can see some times where you’d want an immutable DTO in which case it behaves more like a ValueObject but in the uses I have, it will be modified as it passes through the tiers but thread safety will not be a concern in the scenario.  I would absolutely hate to have large object graphs cloned every time I set a property.  It would be more ideal if the person defining the schema could simply apply an "immutable schema Address" or something like "readonly" even and so on.  It would be better to give us the option on that particular behavior or I wouldn’t be able to use it if it produces only immutable schemas.

  5. Michael Hart says:

    I’d like to hear a good case for wanting to set a property on a transfer object after it’s been instantiated. Sounds very smelly to me.

    The extra win with immutability is that the transfer object cannot enter an invalid state.

  6. Shawn B. says:

    Our object graphs represent many pieces of data.  It will pass through a chain of functions that populate pieces of it according to their rules and need before it is actually transferred and after it is transferred.

    True, there is one function that sets the data, but it is broken into many sub functions that populate portions of it.  Sometimes it makes it into the queue and, when dequeued, is evaluated, used by an operation in some way, and then a status changes and it goes into another queue for further operation.  DTO’s are not immutable (or rather I don’t see in the spec that they must be).  If it is therefore optional, then why force it?

    Because our volume is millions of these transactions daily and sometimes hourly, it is not feasible to make copies every time we set a property when there are hundreds of them between the objects in the graph (customer, order[s], payment[s], etc.).  Refactoring is not an option.  It is what it is.  Our responses must be nearly instantaneous too.  So there’s no need to clone 500k object graphs on every single operation.

    It is perfectly acceptable to pass through as many setter functions as we want, pass it through the wire, and change it some more without having to clone it (for memory and performance reasons).  I just optimized an unreliable function that cloned before writes and brought the process to its knees.  Not cloning fixed it beautifully.

    One more thing to keep in mind, is that every operation is stateless, so there is never anything else sharing it in our scenario.  Threading is not an issue here.  And where it is, we’ve dealt with it in other ways.  So it still isn’t an issue even then.

    It may smell funny to you, but it is the nature of our operation.  Because Transfer Objects are not required to be immutable one way or the other per definition, then it should not be forced upon us.  If MS leaves it optional to make immutable, then you are free to make all your schemas immutable, and I am free to leave mine mutable per our operational requirements/design/legacy implementation and all the compatibility that must come with it.

    Usually, I make my DT/VO’s immutable by design, but I see no good reason why it must be that way all the time.  If you want immutable, use ValueObject instead.  It is immutable by definition.

  7. Michael Hart says:

    I don’t understand why you don’t create new DTOs in your service though? All this talk of cloning seems like a red herring – why do you need to clone? Just reuse the same part of the graph if you’re not modifying it.

    500k object graphs per transaction and millions of transactions hourly huh? So you’re pushing through 140 MB/s… nice network!

    Might I suggest that your needs sound very very very specific. There are high volume systems that have been written in functional languages that don’t even have mutable constructs. Saying that you need mutability for performance is just being dishonest.

    And no one’s "forcing" Axum on you, just like you’re not "forced" to use the CLR if you really think you want to modify memory directly. Sometimes what may seem like restrictive design decisions will save you (and the gung ho developers that come after you, modifying your DTOs like there’s no tomorrow) many headaches down the track.

  8. tobi says:

    i cannot believe how fixated some people are on forcing everyone into best practices. immutability has its places but is not appropriate everywhere. repeat the previous sentence for any of the following: singleton, factory, 3-tier architecture, web service …

    there is no solution that fits everyone. are you stating that 95% of all software projects could be done more productively with mandatory immutability?

    "And no one’s "forcing" Axum on you, just like you’re not "forced" to use the CLR if you really think you want to modify memory directly". the goal of the axum team is to gain adoption not hinder it. your comment is pointless in this regard.

    "Might I suggest that your needs sound very very very specific." what is specific about constructing a dto across multiple methods? this scenario alone would be enough to justify optional immutability.

  9. Michael Hart says:

    Tobi, if the main goal of the Axum team was to gain adoption, then they’d just build a drag n’ drop designer. I mean, come on, YAGNI really plays a big part here.

    And you completely missed the point of my comment about Shawn’s needs being very specific and took it totally out of context – I was clearly talking about his argument of requiring mutability to support his 140MB/s architecture.

    I’m not sure exactly what you mean by "constructing a dto across multiple methods", but you might want to look into the Builder pattern (and the Builder itself could be mutable or immutable, depending on whatever floats your boat).

  10. Shawn B. says:

    I meant to say 50k DTO’s serialized, not 500k.  That was a typo, sorry about that.

    Your comments are interesting.  You state that you have not heard a good case for supporting mutable DTO’s.  This sounds like a thinly veiled way of proclaiming that you don’t believe there is a good case, or that any system could benefit from mutable DTO’s.  Yet such cases exist, they just don’t satisfy your requirements for acceptability.

    Microsoft solicited feedback and I’m just providing mine.  Our system is constructed with the notion of mutability in our DTO’s and we do not suffer for it.  We do not have maintenance issues with it, or any heartache.  It works fine.  One could argue that our projects were constructed even before Design Patterns were understood (by the team), but even if it were constructed with absolute guidance from the Patterns, the DTO pattern does not mandate mutability or immutability and we could have ended up with a solution that used mutability.  So why should Microsoft force an ideology onto a Design Pattern/Methodology that doesn’t require it?  I see it as no problem to support both paradigms to satisfy those who need it and those who don’t.

    The projects I work on have been in place years longer than I’ve been with the company and they will not change to support immutability without major overhauls, given the way we manipulate data through the various tiers and stages of operation.  Our DTO’s must carry their full state everywhere they go, since we have no other shared state.  As they pass along, their state changes.  I would not like to be forced to apply every change against a copy of the DTO when the "old" copy will never again be used or referenced by anything else, under any circumstance.  I realize that atomic data types are immutable in .NET, but why should the encapsulating object also be?

    Regardless, the Axum project looks to be a very efficient way to describe objects compared to what we do today and I’d like to benefit from that in the future.  If Microsoft wants widespread adoption of this toolkit, then they might do well not to force a design ideology based on a design pattern that does not mandate a particular level of isolation or behavior.  This project will not have my support or endorsement if it won’t help to solve any of my problems or will require me to scrap what we have and change 9 years of development.  I suspect I’m not alone on this.

    I’m not arguing that I believe in absolute mutability.  I believe there is a place for both.  So if feasible, Microsoft should support both models.

    Let’s you and I agree to disagree, but let Microsoft understand that both sides of the camp would like to benefit from this toolkit in the future.

  11. Michael Hart says:

    Shawn, I appreciate that it works for you to have mutable DTOs. However, I think it goes against the intent of the pattern from a business perspective.

    Now, before you jump in and say that the pattern doesn’t give guidance either way, well that’s totally true, but nor does it give guidance on a whole range of other design issues, like for example whether you could be making DB calls from your DTO – however, you and I both know that such a strategy would be against the intent of the pattern. So I’d prefer to keep the conversation around why you’d want to have mutable state from a business perspective, rather than a "the pattern doesn’t say it can’t be mutable" perspective.

    The main issue I have with allowing state changes on a DTO is that those state changes begin to leak domain logic and encourage developers to begin using DTOs as domain objects, which I’m sure we’d both agree definitely goes against the intent of the pattern.

    Now we both only have our own experiences to go off, so for me I’ve seen examples where developers were modifying fields in DTOs that completely bypassed the validation scheme because the validation was in the domain objects and in the DTO builder (which is where it should be). These DTOs were then forwarded to another service that began to break in hard-to-track-down ways.

    By encouraging developers to think of DTOs as just a data contract between service endpoints, which is what they are intended for, rather than objects that you can perform state changes on, you get a big win and your codebase stays clean (on top of the shareability, etc that’s been discussed before).

    If you’ve got some good guidance about how to separate state changes that have business meaning as opposed to whatever state changes you had in mind for your DTOs (and ensure that DTO changes are somehow valid), then that would be good to hear about too.

  12. Garry Trinder says:

    The degree of passion and interest in this topic has been fascinating to watch here on the sidelines. I want to thank you all for being so clear and giving us both sides of the argument!

    This kind of feedback is exactly what can take Axum from an incubation project to a solid product, so please stay involved in the blog and download the bits when we make them available (hopefully shortly).

  13. jchase says:

    Instead of "schema" how about "model"?

  14. Shawn B. says:

    I will need more time to think about a proper response (if one is warranted).  Certainly neither of us is going to change our minds very easily.  But I see where you are coming from.  In light of that, here are my immediate thoughts.

    Our DTO’s are nothing but members or properties.  Our DTO manipulation is strictly delegated to a business object abstraction or Builder(s) for construction.  From step to step in a process, the DTO is subject to change before something later uses it.

    Our validations and integrity checks, while relegated to the business object abstraction, have more recently been taking the form of Extension methods.  So we can serialize the DTO and pass it along the wire but, if manipulation is necessary, can import the Extension namespace.  Good or bad, who knows.  It is what we do and works very well in our scenarios.

    Regardless, our DTO’s rarely if ever get into an invalid state, so in our scenario immutability wouldn’t help much, yet we change the state of our DTO’s frequently enough that it would hurt us.  We are not feeling the pains of any problems that immutability would solve.

    I have no strong opinion whether DTO’s should always be mutable or immutable.  Certainly cases could be made either way and I’m subjected to one such case daily.  I’d hate to bastardize the idea as well but I don’t think that a firm stance makes sense in this case.  Given my defense of mutability vs. your defense of immutability, I’d say I do have a strong opinion about others forcing their single ideology upon me (or Microsoft, others) in an area that has room enough for both ideals.

    I’d like the tool to support the design, not the design to bend over backwards for the tool.  If the tool will force an ideology upon me that is contrary to what the project design requires, then it is clearly the wrong tool for the job.  The minute I have to redesign everything to suit the preference of the tool, it is the wrong tool for the job.  The minute I have to rethink everything I believe just so I can use a tool to help me out, the tool is the wrong tool for the job – unless I was plain wrong to begin with, and I don’t think there is right or wrong in the issue of mutability for DTO’s.

    Certainly I’d hate to see widespread abuse of mutability because a tool makes it easier for people who don’t understand to make such mistakes.  But then I can say the same about C# lambdas, or extensions (wink-wink), or expression trees and so on.  Everything can be abused and I don’t want the tool to save me from myself, especially when I know best what my design calls for.  It is my job as an architect to verify that the design is sound to begin with (or live with one that predates my tenure).  If my DTO’s will be shared between threads or rogue functions then immutability might be a good thing.  But this is not the case for my current projects.

    I wouldn’t even debate this on company time usually.  But I believe this project can help us tremendously in the future and I’d like my opinion to be considered while there is still ample time to shape the future of the project.

    Whether I make a strong argument or not about the case for mutability, no argument has yet convinced me of the case for absolute immutability.

    Though I agree about the point of data integrity and validation.  I suppose it just depends on the scenario in which we both are submerged.  We just have different needs/requirements.

    Sorry for the long write.

  15. Michael Hart says:

    Thanks for the response Shawn – and as Niklas said, I’ve found it a very interesting discussion too.

    Some similar issues crop up here:


    I obviously agree with Greg on this one, whereas I guess you’d tend to agree with Jimmy Bogard (in the comments).

    In any case, I’m also looking forward to how Axum shapes up!

  16. Shawn B. says:

    I thought about this a bit last night.  I am better able to articulate what I’m communicating.

    Even Fowler agrees that a DTO can be built up over a certain length of processing and not always in the constructor alone.  That alone makes the DTO mutable.  The Wikipedia definition, however brief, states "…a DTO does not have any behavior except for storage and retrieval of its own data (accessors and mutators)".  Not to get hung up on a bastardized definition, but notice the word mutators (setters).

    DTO’s are meant to be the container of data that is serialized and sent via some communication medium to a receiver who will then deserialize.  I’ll grant you that that operation alone is usually immutable in the sense of .NET Remoting: if the DTO isn’t MarshalByRef it is a copy and not a reference to the original instance of the sender.  Our DTO’s are not MBR or multiply-referenced by numerous threads.

    The original post we are discussing actually describes the schema it generates as intended to be used as a message, and generally speaking messages are supposed to be immutable.  If a message is received and modified and sent again, generally the message has a different identity but equality can be the same (in terms of correlation or other).  In my case, however, that is not the case.  At all points in the lifecycle, however different, our "message" is the same in identity (per our purposes and design), whether or not equal.  For example, an order may change status or destination address or amount, but it is still the same order we are tracking throughout the lifecycle.

    I still grant that an act of persistence and reconstruction is always an act of cloning.  But that alone doesn’t prove a point of immutability.  Every operation we perform does not result in a serialization/deserialization.

    So the way we use our objects here is multi-purpose.  We define an object, and when we instantiate it we begin to build it up or assemble it over a portion of the lifecycle of an action.  During this time, its contents change, usually augmented but sometimes mutated.  Mutated in the sense that the status can change, monetary amounts can change, etc., but the object is still, for all intents and purposes, the same as the original and not a clone.

    Eventually the object will be serialized and used as a message to some receiver (either via message queue or some form of RPC – remoting, web service, etc.).  After which, the receiver will reconstruct the object and further mutate it.  That mutation will be on the original reconstructed instance and not a clone.

    In terms of Axum, the schema, in my case, would behave more like an entity until serialized and transported, at which point it takes on the behavior of a DTO, and then is reconstructed and further treated like an entity.  We do not have a separate conceptualization for the entity within the same business domain/process, the transport representation of it (which in our case would be exactly identical in definition), and the reconstructed instance once received on the other end.

    In this sense, unless I misunderstand something about Axum, the same instance generated from a schema via Axum would take on both roles at various points of the object’s lifecycle.  If I create a schema only for the purpose of an immutable message, then I would have to define a separate but exactly identical type in C# the way I do today to retain mutability, since my object mutates during its lifecycle (except during transport) and should not clone for each mutation.  In that scenario, Axum has not simplified my life or eased maintenance on the project.

    So I see you were talking about the transport definition and I was talking about the larger picture.  It would be pointless to use Axum only to define the message itself if I will be using it also as an entity at all other stages of the lifecycle, mutating it in the meantime.  I do not want to operate on clones every time something else further along must mutate the object.  The objects are purposed exactly for messaging, but it does not make sense to reconceptualize them for non-DTO operations, or to mutate clones every time a change is needed.  Memory and clock cycles matter in our environment and time spent cloning could be spent more productively elsewhere.  We do not suffer from mutability issues.

    In a perfect design pattern utopian world, I would have endless builders, models, assemblers, flyweights and prototypes, value objects and DTO’s all doing exactly their part in their perfect definition.  But in reality, the project is created by people, some of whom understand design patterns and some of whom don’t, who may not care, implement them incorrectly, or are lazy.  We define our entities (what I previously generalized as DTO’s) and use them throughout the entire lifecycle, including the point where they are used as a DTO.  So are they then really a DTO?  Well, they are used as one at some point.

    In all this, I still do not believe a message should be immutable.  As I build the message, and then use it, the usage may cause a mutation and I may not want it to be a clone.  In terms of identity and equality, there are cases where the mutation is a separate identity (in which case a mutated clone) but equal, or may be identical but not equal.  It depends on how I define the identity or equality for that scenario.  Whatever the scenario, it will be consistent within the domain.

    The DTO does not cease to be a message after it is received.  It can be a message again in the future.  Therefore, the schema generated should not be absolutely immutable unless the designer of the schema wishes it to be so.  In the article "Data on the Outside vs. Data on the Inside"[1], outside messages are immutable.  But the point I’m trying to make in this long-winded rebuttal is that our generated "schema" would be used both on the inside and the outside.  "Outside" is the same as "Inside" in our scenario.

    [1] http://msdn.microsoft.com/en-us/library/ms954587.aspx

    Thanks for the discussion.

  17. Shawn B. says:

    One would have to ask, even if the design of an existing project is flawed, does Microsoft only expect Axum to be used in new projects (so they can be designed to work with Axum’s way of doing things), or also to enhance existing projects?  Axum cannot provide much enhancement if the existing project must be completely rethought and reimplemented.

    So the issue of right or wrong, being extremely subjective to the architect of the project, can also be defined in terms of what Axum will allow you to do.  If its scope is narrow, then it will only solve a narrow set of problems and will be the wrong tool for the job for many existing projects that don’t fit the definition.

    I am not a proponent of doing it wrong.  But I don’t think mutability is a black and white issue.  If Axum schemas will only ever be intended to define messages during their transport stage, then perhaps it will not be useful for enhancing our projects, because our schema "instances" are also used as entities when not in transport, during construction and after transport.

  18. Shawn B. says:

    Will an Axum schema instance be the message itself or just a message payload?

  19. Garry Trinder says:

    To answer the last question — they are currently intended as just message payload. One of the reasons for asking about immutability is whether that perspective should change.

    Currently, the instances are messages only while they are in their serialized state, logically or physically, and regular instances in between. The rules are enforced when that boundary is crossed.

    Regarding another question asked here — Axum is definitely meant to be used with existing code. However, it is also meant to rein in shared state anarchy, which means that not all existing code will benefit from it. Since its purpose is to impose a certain pattern (isolated islands of single-threaded execution) on the code, applications that can’t be tamed obviously won’t benefit.

  20. Garry Trinder says:

    Regarding tobi’s comment:

    While it’s true that we don’t need schema instances to be immutable since they are always cloned, one of the reasons for making them immutable is so we don’t have to always clone them.

    Immutability doesn’t have to be "hard core" in the sense of no changes after the constructor is called, it can also be that it is mutable until the point when it is sent via a channel port, at which time all instances in the object graph are locked (this will have minimal impact on the implementation of property setters).
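
    As a rough sketch (hypothetical C#, not what we have implemented), a setter would just check a flag that the runtime flips when the instance crosses a port:

        using System;

        public class Address
        {
            private bool frozen;
            private string city;

            public string City
            {
                get { return city; }
                set
                {
                    // The only cost mutability pays: one flag check per setter.
                    if (frozen)
                        throw new InvalidOperationException(
                            "instance was sent through a port and is now read-only");
                    city = value;
                }
            }

            // The runtime would call this on every instance in the
            // object graph when it is sent via a channel port.
            public void Freeze() { frozen = true; }
        }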

    Coercing the schema instances to a C# object on the receiving side (compiler-generated code, but not as fast as Clone) means pay-for-play: copying happens only when you really need it.

    So, I really don’t know what the right trade-offs are here. If immutability is optional, then it is a question of where to enforce that immutability — is it on the type? The need (or lack of need) to mutate is usually not a type-specific property, it’s a component property.

    If coercion can be made as fast as Clone, the only downside to immutable schemas is the need to write each payload type that needs mutation twice (once as a schema, once as a C# class). It keeps the type system clean with Helland’s inside / outside distinction being clear in the code.

    However, I don’t know how to do fast coercion without modifying C#, so it’s a moot point.

    Again, this discussion has been really insightful and very, very useful to what we are trying to accomplish.

    Thank you!

  21. Garry Trinder says:

    Let me ask another question:

    What if schema instances were mutable, but were "true" transfer objects in the sense that once you sent an instance, it would no longer be accessible within the domain of the sender?

    Object ownership transfer, in other words…
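
    As a rough illustration of the flavor (hypothetical C# – and note that C# can’t truly enforce this, since the sender could keep another copy of the reference):

        using System;

        public class Port<T> where T : class
        {
            // Ownership transfer: Send takes the reference by ref and
            // nulls it out, so the sender loses its handle on the payload.
            public void Send(ref T payload)
            {
                if (payload == null) throw new ArgumentNullException("payload");
                Deliver(payload);
                payload = null;   // the sender's reference is gone
            }

            private void Deliver(T payload) { /* enqueue for the receiver */ }
        }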

  22. Shawn B. says:

    I can live with immutability being enforced at the point where the message is transported.  On the receiving side, I receive a clone anyway.  However, we still modify the payload after reception.  At that point, you’re saying it is read-only.

    I can live with that, though not ideal, if I can very easily create a new "unlocked" clone based on it (ideally the message will give me an unlocked clone without using reflection internally).  Then I can continue to modify the new object as I see fit until it becomes a message again.  This scenario isn’t ideal, but it is better than a pure read-only-after-constructor approach.

    Performance won’t suffer terribly.  And everyone gets what they want 🙂    Though preferably, I’d like to be able to unlock the locked instance or specify that it not be locked at all.  But from your comments, it will be locked, and I can live with that as long as I can get an unlocked clone without having to build it myself or use reflection (which is too darn slow for our environment) given our volume of workload.

  23. Shawn B. says:


    Our scenarios differ.  I see exactly what you are getting at.

    We will pass our object into some type of RPC via load balancer (hence our stateless nature) and if we require a mutated object, it comes back as a result and so will be a separate assignment on a clone anyway.  But that "clone" should still be read-only.  The original at this point is "forgotten".

    All other times the call is one-way and we’ll never need the object again from the sending side.

    Both scenarios are equally as common for us.

    The second is the one I had in mind during my lengthy discussions.  It forms our backend (where I spend my career) or various integration services and object ownership transfer, as you put it, is the norm.

    The customer facing projects however, use the first scenario.

  24. Gerke Geurts says:

    Regarding name choice: I’ve used the term ‘Spec’ (short for Specification) so far when writing (meta)types that define structure and rules to be implemented by/adhered to by other types.

  25. Jon says:

    Making schemata immutable and generating Equals() and GetHashCode() at compile-time both strike me as desirable features for a purpose built language like Axum. I am more than willing to deal with the inconveniences of immutability for the guarantees that come with it.

  26. The subject of immutability sparks intense interest among the people who follow our blog, as is evident

  27. Design Patterns as External DSLs

  28. What about "datatype" or simply "data" (as replacement for "schema")?