Identity in Daml

In this post we will show that Daml’s identity model can provide the same degree of security as systems built almost exclusively on cryptographic identity, while also providing a superior user experience and more flexibility in terms of deployment options:

  • Decentralized – Entities can run ledger nodes, which are equivalent to miners in blockchains, and provide full cryptographic security, verifiability, and uninterrupted network access.
  • Network aaS – Entities can run participant nodes, which are equivalent to full nodes in blockchains. They get the same cryptographic security and verifiability, but rely on a third-party ledger node for access to the network.
  • Ledger API aaS – Entities can consume the Ledger API as a service, which at best provides verifiability of the ledger and non-repudiation for the API host. This is loosely similar to running a light node in a blockchain.
  • Solution aaS – Entities can consume Daml applications as a service, which is the same as consuming any other hosted application, or accessing blockchains via a web wallet.
Illustration of Daml deployment topologies and trust boundaries.

Each of these options provides its own tradeoff between operational complexity and security, and different models can be mixed and matched within the same network. Before diving deeper into these, and exploring what participant nodes are, or how security is guaranteed in Daml, we need to take a step back and ground this discussion by revisiting identity on the types of systems Daml targets: blockchains, distributed ledgers, and databases (with or without built-in cryptographic signing and verifiability).

Identity on Ledgers

We will call these systems “ledgers” collectively, regardless of whether they are in any way distributed or not.

All these systems function in fundamentally the same way:

  • An identity is a public key (or a hash of one).
  • Any write operation – usually called a transaction – is accompanied by cryptographic signatures, signed with the private keys corresponding to those public keys.
  • There are rules governing which signatures are needed for a transaction to be valid. These rules can be access rules in a (cryptographic) database or a smart contract on a blockchain, for example.

The entities running the ledger – let’s call them ledger operators – and those that have an identity on the ledger – let’s call them users – are usually not the same. Typically there are a lot more users than ledger operators, and nobody trusts anybody. Users don’t trust each other. Users don’t trust ledger operators. Ledger operators don’t trust users. And in many cases, ledger operators don’t trust each other. 

Trust relationships in Blockchain systems.

The identity scheme serves three main purposes:

  • Authorization – Anyone – both ledger operators and users – can establish the identities of the requesters of a transaction and check whether they are authorized to write that transaction.
  • Non-repudiation – Once a transaction is signed and submitted, anyone – both ledger operators and users – can prove that the signing user did indeed sign that transaction.
  • Verifiability – Users can verify that exactly the transactions that they requested and signed have been recorded.

Put differently, the ledger operators can reject invalid transactions, and prove that valid transactions are indeed valid, and users can verify the ledger and detect any attempted cheating by ledger operators.

The cryptographic identity scheme described here achieves those goals, but also has major downsides:

  • Identity is a bearer instrument – Identity in these systems behaves quite differently from identity anywhere else in life. Whoever holds a private key owns that identity and all rights and assets associated with it. That’s like saying “if you lose the key to your house, you lose the title to it”. To quantify this problem: there are estimates that 20% of all Bitcoin in existence has been lost.
  • Users need to understand transactions – To sign a transaction, you must first compute it. If we imagine users interacting with the app trustlessly via their web browser, that would require running the entire application in the browser to compute and sign transactions client-side. Light node models first compute transactions server-side, on a ledger node, then send them to the client for signing. Unless the user does careful due diligence on what they are signing, this requires the user to trust the operator of that node, defeating the trust model the system is trying to solve for.
  • Cryptography becomes a central concern of applications – Cryptographic identity, signing, and signature verification tend to leak deep into the tools used to develop apps for such platforms, leading to more complex and error-prone code. Mistakes of this kind are real and consequential: one of the co-creators of Solidity froze a substantial share of all Ether in existence with a faulty application. (source)

Participants and Parties to the Rescue

Daml aims to remove all of these downsides. Users should not bear the burden of handling and safeguarding cryptographic identity, and they should be able to interact with the system as they do with any other application on the web. This requires an intermediary between users and ledger, which we call a participant node.

The participant node takes the place of a user on the ledger, holds a cryptographic identity on the ledger, and computes and signs transactions. Users interact with the ledger via the Daml Ledger API on trusted participants. Note that users can choose to host their own exclusive participant, in which case they can guarantee the validity of signatures and transactions for all their parties.

Trust Relationships in Daml Systems.

The Authorization and Non-repudiation properties described in “Identity on Ledgers” are preserved, but with participant nodes taking the place of users.

Parties as User (and Role) Identities

The introduction of participants changes nothing about the fact that it is users who act on the system. Users need to have an identity on the ledger with which they can authorize (or sign) transactions. This identity needs to be traceable to users, and users want to be safe from impersonation. Lastly, whatever identity we choose should be an opaque identifier in the Daml language so that cryptographic concerns are abstracted away for the developer.

Daml Parties provide exactly this kind of identity. Every Daml ledger maintains a many-to-many mapping between Parties and Participants. How this mapping is maintained differs slightly from ledger to ledger, but it’s best imagined as an implicit contract between the participants. The Canton documentation on Identity Management provides a good example of how this works.

When a new user wants to join the system, they need to do so via a participant of their choosing (or run their own participant). The participant submits a signed transaction that creates a new Party mapped to the participant. If successful, this mapping is now known to all participants. The new Party is said to be hosted by the participant. Further transactions may also add mappings between other participants and that party, allowing the party to be hosted on multiple participants in read-only or full capacity.

Assuming the relationship between participant and user involves sufficient trust, this model still solves for authorization, non-repudiation, and verifiability:

  • Authorization – Since the mapping from parties to participants is public knowledge for ledger and participant operators, and Daml commits and transactions contain their requester and required authorizer parties, any participant or ledger node can determine whether a transaction signed by a participant node is well-authorized. So the only open question is whether the submitting participant correctly authenticated the external user. We assume that it did, since the relationship is trusted and Daml uses best-practice JWT-based authorization on the Ledger API.
  • Non-Repudiation – Similarly, if a user sees a transaction on the ledger, they can trace it back to the submitting user:
    1. Their participant can trace it back to the submitting participant via the cryptographic signature(s) on the transaction.
    2. The submitting participant can trace it back to a user by looking into their logs and user/party mappings.
  • Verifiability – Participants are trusted by their users to verify that all transactions were recorded as requested.

Looking at this picture for the Sandbox or the current Daml Driver for PostgreSQL, we can see why party identity seems so loose. The Ledger Nodes and Participants all collapse into a single trusted centralized ledger server, which manages party identities. Party identities are conceptually equivalent to database users in that set-up, which is why they are just plain strings. Cryptographic signing is omitted altogether because it would add no additional value. Note that the Daml Driver for PostgreSQL 2.0 will separate the Ledger and Participant nodes, which means only the Ledger Nodes collapse into a single node, Party identity becomes something more profound again, and participant nodes will cryptographically sign transactions. 

While it would be possible for the participant to use separate keypairs per party, this does not improve any trust assumptions. The participant is in control of all keypairs so it can swap them out as it wants to. Therefore, to make that trust assumption explicit, we only use a single keypair per participant.

Signatures and Signatories in Daml

Daml has a notion of a Party being a `signatory` on a contract, and the Daml documentation does talk about parties “signing” contracts. These notions of signing should not be confused with cryptographic signing of transactions. They should rather be taken as descriptive of the relationship between the user or entity that controls the Party and the data on ledger.

The Daml Ledger Model and the identity scheme presented here guarantee two things:

  • Explicit Consent – A Party can never end up as the signatory of a contract without explicitly consenting to a transaction which allows this to happen. “Explicitly consenting” means that a participant that hosts the party in question submits a (cryptographically signed) transaction which in some way (pre-)authorizes the creation of the contract in question.
  • Traceability – Given any party signature on the Daml Ledger, it is possible to reconstruct the exact chain of events that led to that signature. Each link in that chain can be attributed to some party giving explicit consent to a transaction in the above sense.

For every party signature in the Daml ledger, there is a trail of underlying cryptographic signatures. Indeed, the link between parties, the ledger model, and participant identities is such that a party signature in Daml gives guarantees equivalent to a cryptographic signature in, say, a blockchain. The link between party signature and external user is merely less direct: ease of use for the end-user of a Daml system is prioritized over ease of use in case of disputes that require tracing and auditing.

Cryptographic Security for all Deployment Scenarios

As highlighted in the introduction, Daml aims to provide security under clear assumptions in a large variety of deployment scenarios. The visuals above have already highlighted the three types of trust boundaries in Daml systems:

  1. Between ledger operators, corresponding to a decentralized deployment
  2. Between ledger and participant operator, corresponding to transacting via someone else’s ledger – Network aaS
  3. Between user and participant operator, corresponding to providing the Ledger API as a service with low trust between user and service provider – Ledger API aaS or Solution aaS.

From this and the discussion of participants and parties, it should already be clear that the lower down in the stack the trust boundary lies, the better the guarantees Daml provides. Thanks to feedback from Daml’s user community, we have come to realize that there is demand for features that also allow for lower-trust relationships between users and participants, in the Ledger API aaS model.

Reducing trust between User and Participant

This deployment model is going to be supported through two new features currently in the works:

User Non-Repudiation

The current assumption that users and participant operators trust each other in the presence of an IAM system and JWT-based authentication may be too strong for some deployment scenarios. A participant operator can impersonate users by acting as parties hosted on that participant. That opens the participant up to repudiation. What do they do if a user claims they didn’t submit a command that resulted in a transaction?

The traditional tool for this problem is to collect an audit trail of network logs that ultimately leads back to a client, but this is complex and error-prone.

To allow participants to provide their services more safely, a user non-repudiation feature is easy to imagine. The participant operator would require that every command to the Ledger API is accompanied by a cryptographic signature which they store as evidence. This requires users to handle private keys again, but since these private keys do not represent their identity on a distributed system, but merely the access key to the Ledger API, their loss is less costly. They can be rotated easily through a safe off-ledger process.

In short, user non-repudiation reduces the amount of trust a participant operator needs to extend to their users.

User Verifiability

Users can verify that the Daml ledger they see through the Ledger API complies with the Daml Ledger Model and that the transactions they submitted are recorded therein.

However, they don’t have sufficient access to the underlying ledger to verify that their view has actually been recorded – i.e. it’s theoretically possible for a participant operator to show their users completely false information. One way to safeguard against this would be to cross-check information between multiple participant nodes, but that makes for a pretty poor UX. So an important improvement to the Ledger API is to provide enough cryptographic evidence that users can verify that their view does actually correspond to what has been recorded on the ledger.

In short, user verifiability reduces the amount of trust a user needs to extend to their participant operator. A participant operator will still be able to impersonate a user or block their access, but with this feature a user can immediately detect that they did so.

While both these features allow for a user to improve their trust relationship with their participant operator, there is no substitute for users operating their own participant nodes if they care deeply about the underlying accuracy of the ledger and the transactions that they authorize.

Conclusion

Daml’s identity model provides a high level of flexibility to choose suitable trade-offs between security and user experience. Different approaches can be mixed and matched within the same ledger as needed, provided the chosen underlying ledger supports multi-node ledgers and separate participants.

As long as the system is run in a sufficiently open fashion, each entity within a Daml network can choose for themselves whether they want to fully participate in consensus and redundancy, merely participate in the cryptographic protocols, or trust an operator to a greater or lesser extent to provide access as a service.

What is common to all these models is that they provide clear guarantees to all involved parties under the given trust assumptions. And as long as the given trust is not violated, every transaction can be guaranteed to have been authorized correctly and can be traced back to the user that requested it.

For more information on identity management in Daml, refer to the Canton Documentation as an example. You can download Canton to try this out in action here:

Download Canton

E2E Property Rentals Written in Daml

Improving renting

Renting a property is still quite a manual and cumbersome process because different actors have different views of it and can’t fully trust each other, so paperwork is put in place to mitigate risks.

If we were able to solve these paperwork issues, we could apply the same approach to not only rentals but also many other multi-party processes.

Defining the shared rental process

Suppose we’ve been hired by rental market players to improve their business.

The first thing we’d do is sit down together and define a shared process; after several iterations, we might agree on the following one:

Business actors

  1. Authority: registers properties to their landlords and grants licenses to rental agencies.
  2. Landlord: owns a property and can delegate its rental to an agency.
  3. Agency: governs the whole process on behalf of the landlord.
  4. Tenant: the current tenant of a property, runs property visits.
  5. Renter(s): prospective tenants that can be invited to visits by the agency and can apply.

Successful rental

A successful process runs as follows, with stages represented as Daml contracts:

The Daml Rental process
  1. The authority registers a Property to a landlord and creates an AgencyLicenseOffer for a rental agency.
  2. The agency accepts, and the offer becomes an AgencyLicense.
  3. The landlord turns the Property into a Rental and delegates the rental to an agency through RentalDelegateOffer. From now on, the agency manages the property’s rental.
  4. The agency accepts and updates the Rental, inviting one or more prospective tenants for a visit through VisitOffer.
  5. A prospective tenant creates a VisitScheduleRequest.
  6. The current tenant schedules and runs the visit(s), whose completion is audited by a Visited stage.
  7. The prospective renter(s) may create a RentalApplication.
  8. The agency may accept at most one application and emits a RentalContractOffer while locking the Rental by transforming it into a RentalPendingContract, preventing other applications from being accepted.
  9. The selected applicant may accept it, and the RentalPendingContract becomes a RentalContract.

Unsuccessful rental

Of course any stage of the process is allowed to fail: the agency license can be withdrawn and any offer can be rejected; the actors, though, should still be free to engage in new agreements.

Renting in Daml

Translating a process into Daml is fairly straightforward, as Daml is built around the concepts of multi-party agreements, but it also means defining it more precisely and dealing with inconsistencies immediately. For this reason, it might actually be a good idea to use Daml as a design aid from the start.

In the following sections we’re going to analyze Rental.daml which can be obtained by cloning this. Alternatively you can check out a short video demo:

Short video demo of “E2E Property Rentals Written in Daml”

Modelling patterns

Some patterns are commonly used in Daml processes:

  • The state of each active process is stored as contracts, and the blueprints for contracts are templates. Templates can have parameters, signatories, and observers (i.e. parties that, respectively, endorse or see contracts), as well as lookup keys.
  • Templates also define the process transitions as choices and restrict the parties that can perform them. Since a choice can contain many actions, including the exercise of further choices, exercising a single choice can trigger a rather large set of actions.
  • Choices execute atomically, i.e. either all actions will be applied in the order specified or none of them will.

Let’s have a look at the AgencyLicenseOffer template that represents the initial state of a rental process for an agency:
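
A sketch of what this template might look like (simplified; names follow the process description above, and the full version lives in Rental.daml):

    template AgencyLicenseOffer
      with
        authority : Party
        agency : Party
      where
        signatory authority

        controller agency can
          -- consuming by default: accepting archives the offer and creates the license
          AgencyLicenseOffer_Accept : ContractId AgencyLicense
            do create AgencyLicense with authority = authority; agency = agency
          -- refusing merely archives the offer
          AgencyLicenseOffer_Refuse : ()
            do pure ()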

This template employs the Daml offer-accept pattern for rights delegation:

  1. A party submits a create AgencyLicenseOffer transaction, with herself as the authority and an agency party.
  2. authority is a signatory, i.e. it signs the contract, and is also automatically an observer (but not vice versa).
  3. agency is declared as a controller for two choices, i.e. it can exercise either AgencyLicenseOffer_Accept or AgencyLicenseOffer_Refuse with the endorsement of the authority that created the contract.
  4. When agency exercises AgencyLicenseOffer_Accept, the AgencyLicenseOffer contract is archived (this is the default behavior) and a new AgencyLicense replaces it. Alternatively, exercising AgencyLicenseOffer_Refuse will only archive the offer.

A short tour of Rental.daml

Rental.daml follows the previously described process closely, so I’ll concentrate on portions that introduce new concepts.

Initially, when the authority offers a license to an agency, we maintain role separation and, with the following assertion, ensure that they are not the same party:
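
One way to express this is a template precondition; a sketch:

    -- creation fails if both roles are played by the same party
    ensure authority /= agency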

Similarly, when the authority registers a property to a landlord, specific clauses are in place to avoid conflicts of interest:
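
Assuming the registration template carries authority and landlord fields, such a clause might look like:

    -- the authority may not register a property to itself
    ensure authority /= landlord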

Also note that the authority can always revoke a license through an empty consuming choice on the license contract:
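
A sketch of that choice, on an AgencyLicense template matching the offer above:

    template AgencyLicense
      with
        authority : Party
        agency : Party
      where
        signatory authority
        observer agency

        controller authority can
          -- consuming and empty: exercising it simply archives the license
          AgencyLicense_Revoke : ()
            do pure ()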

All offer-accept steps provide similar empty choices to refuse an offer:

Also note that the property registry ID is a key for the Rental:
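
In Daml a contract key must come with a maintainer; assuming the registry ID is a Text field, the relevant lines might read:

    -- the registering authority plus the registry ID identify the Rental
    key (authority, propertyId) : (Party, Text)
    maintainer key._1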

This key is used to refer to the Rental independently of its contract ID, which changes because the Rental gets transformed into a RentalPendingContract when a rental application is accepted:

This mechanism effectively locks the Rental so that there is at most one contract offer. If it is refused, the RentalPendingContract becomes a Rental again:
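
A sketch of that unlocking step, with field names assumed:

    controller agency can
      RentalPendingContract_Revert : ContractId Rental
        do
          -- archive the pending contract and restore the plain Rental
          create Rental with
            authority = authority
            landlord = landlord
            agency = agency
            propertyId = propertyId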

Finally, non-consuming choices are used for non-exclusive steps like offering a visit; exercising these choices won’t archive the Rental contract:
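
Sketched on the Rental template (the VisitOffer fields are assumed):

    controller agency can
      -- nonconsuming: the Rental stays active, so several renters can be invited
      nonconsuming Rental_OfferVisit : ContractId VisitOffer
        with
          renter : Party
        do create VisitOffer with agency = agency; renter = renter; propertyId = propertyId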

Scenarios

Daml scenarios are scripted submissions similar to commands issued by a Daml application and are used for testing and documentation.

The repository includes a scenario that tests authorization and the happy path.

Triggers

Triggers simplify off-ledger automation, i.e. automatic submissions in reaction to ledger updates. They are effective for cleanup and bookkeeping tasks as they can be written in Daml (they are currently considered experimental).

In our Daml Rental model, we allow many visits and applications for a single property; this means that, when the property is rented, the pending visits and applications must be rejected.

It is possible to let our model track pending activities and archive them when a rental lease is signed, but it is easier to write a cleanup trigger that reacts to ledger events and performs the cleanup.

User Interface in Daml

Daml offers a sandbox with a UI that allows utilitarian interaction with the model during development.

A dedicated user interface is expected for a production-ready application, though. For this we offer a React-TypeScript scaffold based on create-react-app that provides a starting point for modern single-page web applications that interact with Daml. You can check it out here.

Since I already had my own React-TypeScript scaffold, I decided to reuse that one:

React-TypeScript scaffold

It currently includes all the tables but only two actions: creating and renting a property.

Invoking the Daml JSON API

Daml Rental invokes the Daml JSON API through Axios, for example here’s the implementation of loadContracts, which gets all contracts belonging to a specific template:

Authentication and Authorization

The Daml platform is agnostic w.r.t. Identity and Access Management (IAM) architectures.

The Ledger API Server offers a gRPC API and supports, out of the box, requests that carry a JWT signed by a well-known issuer; the token includes the capabilities of the submitter, i.e. on behalf of which parties it can read and write.

The Daml platform also includes a JSON API that translates between REST and gRPC requests/responses, forwarding the JWT token.

Daml Rental accesses the JSON API through Axios and authenticates by attaching the user token to every request:

Conclusion

This example covers the bulk of a real-world rental process and was written in one week by reusing an empty React-Redux-Typescript scaffold; a similar or better productivity gain can be achieved by using create-daml-app.

The most thought-intensive part was designing the rental process which is where Daml really shined. Daml allowed me to focus exclusively on the business logic and customer interactions in my Daml application. I didn’t have to worry about the JSON API, data persistence, or access management which greatly decreased my mental load.

On the frontend side writing a dedicated UI was mostly busywork and the API integration was extremely smooth.

If you want to start building and learning, Daml also has a new learn section where you can begin to code online:

Learn DAML online

Daml Developer Monthly – February 2021

What’s New

Diversity is an important initiative at Digital Asset as we work to increase both our own diversity and that of the sectors we work in. As part of that initiative we are holding a virtual panel discussion where some of our top women leaders will be discussing how they got to where they are, the challenges they’ve faced, tips to accelerate impact in the workforce, and much more. Join us on March 3rd at 10 AM EST / 4 PM CET by signing up at digitalasset.com/daversity-webinar-registration.

The engineering team is open sourcing dev-env, a set of tools which lets you pin specific versions of executables in your project and transparently manage them. Think of it like a Nix-lite.

Innover Digital won a Stevie Award for creating a Daml application which provides “a real-time supply chain visibility platform […] in record 4 weeks to combat shortage of medical supplies in hospitals during COVID-19”.

As always Richard has been keeping us up to date with the latest and greatest security and privacy news on our forum.

Daml is being used for a hackathon in Morocco, quite cool!

Jobs

Digital Asset is hiring for many positions including Engineering, Client Experience, Business Development, and Sales. If you have even so much as an inkling that a job is for you, make sure to visit digitalasset.com/careers to apply and share with your network.

SGX is looking for a Lead Developer with Daml knowledge to expand their digital bond infrastructure.

Upcoming Events

We’re presenting at Hyperledger India today and will have the video shared on our forum shortly.

Andreas’ talk from POPL 2021 on using Canton to create privacy-aware smart contracts running on top of interoperable ledgers is now available on our YouTube channel.

Francesco recently presented on Daml’s use cases and benefits at Hyperledger Milan; the video can be found on their YouTube page. Similarly, Anthony presented Daml’s tech stack along with a live demonstration of Daml’s unique ability to be written once and deployed anywhere at Hyperledger Sweden’s Tech Study Circle, Hyperledger NYC, and Hyperledger Boston. If you’re into the more technical side, check out the Hyperledger Sweden video; if you prefer higher-level explanations, the NYC and Boston presentations are great choices.

What We’re Reading

KC Tam published a new article this month where he’s diving deep into modeling an asset which is fungible, transferable, and trade-able.

Robert laid out one of Daml’s best new features in his blog post explaining how to leverage multi-party submission to have an easier time using Daml. Multi-party submission effectively solves the issues of needing to provide many users with the same data, allowing for role-based access control, and even improving ledger initialization. Don’t let the name fool you, this is a killer feature.

Not to be outdone György broke down all the different mental models we have around “blockchains” and “smart contracts” in his latest post where he rightfully puts these often confusing terms in quotes.

Community Feature and Bug Reports

We’ve had quite a few improvements to Daml this month thanks to our wonderful community.

First, Zoraiz rooted out an issue in our docs that didn’t mention the hard-coded limit of 255 characters for Party names. That is now fixed and documented. Then Khuram gave us a ton of feedback on Daml in general and on our documentation. We’re still working through it, but Stefano has already started on some simple fixes to the docs. And Alexander rounds out the reports this month with three great improvements to the docs.

Daml Connect 1.10 RC is out!

Highlights

  • Daml Triggers and OAuth Middleware are now Stable
  • Daml-LF 1.11 is now stable, without any further changes from the beta version. It includes the following features and changes:
    • Choice Observers
    • Generic Maps, and the DA.Map and DA.Set libraries built on them
    • Ledger API version is now at 1.9
  • Daml Studio now provides tooltips and go-to-definition even if the code doesn’t currently compile
  • Considerable performance improvements

Impact and Migration

  • This release is purely additive, so no action needs to be taken. We recommend testing projects with Daml-LF 1.11, or pinning the Daml-LF version to 1.8 (the current default) for now. Daml-LF 1.11 will become the default with the next release of Daml Connect.

The full release notes and installation instructions for Daml Connect 1.10.0 can be found here.

What’s Next

  • Despite the order-of-magnitude performance improvements we have already accomplished, performance continues to be one of our top priorities.
  • Improved exception handling in Daml is progressing well and expected to land in one of the next Daml-LF versions.
  • We are continuing work on several features for the Enterprise Edition of Daml Connect:
    • A profiler for Daml, helping developers write highly performant Daml code.
    • A mechanism to verify cryptographic signatures on commands submitted to the Ledger API, improving security in deployment topologies where the Ledger API is provided as a service.
    • Oracle DB support throughout the Daml Connect stack in addition to the current PostgreSQL support.
  • A new primitive data type in Daml that allows arbitrary-precision arithmetic. This will make it much easier to perform accurate numeric tasks in Daml.

Roles in Daml – Introducing Multi-party submissions

Introduction

What is a Daml party? It’s a great question to which there is a precise technical answer, of course, but that answer would help us as little in designing a multi-party application as a precise technical understanding of database users would help us in designing a web application.

In our examples, we often use party names like “Alice” and “Bob”, or organisations like “Bank”, suggesting that parties represent individuals or business entities. This is an oversimplification suitable for examples. Looking top-down, it’s not so much a bank that acts in a transaction, but a legal entity within that bank, and maybe a particular desk within that legal entity. Ultimately, paper gets signed by an individual within the responsible team. Similarly, looking at the situation bottom-up, an individual rarely acts purely as an individual. We act as members of teams, using authority bestowed by the surrounding organization. In some businesses and cultures, this concept of acting on behalf of an organization still has a physical manifestation in the form of a stamp.

In short, agents, be they human or machine, take on roles. The role can be that of an individual, or that of a member of a team or organisation. It’s the mantle of the role we take on that gives us access and authority. And access and authority are exactly what Daml parties are about, via observers, signatories, and controllers. Thus, a good way of thinking about parties in Daml is to think of them as roles. Parties Alice and Bob represent individuals acting simply on behalf of themselves. A party Bank represents something or someone acting on behalf of a Bank.

Now in reality, we rarely act in only a single role. Just in your day-to-day professional activities you are probably acting as an individual, a team member, and a member of an organisation all at the same time. Multi-party submissions now make it possible to do just that in Daml, allowing you to translate business roles into Daml applications much more easily, and improving development and performance of several important use cases.

Use case 1: reference data

Let’s assume that we want to create a digital market where parties exchange items at a fixed price given by the market operator. The operator can update the price at any time, and we want the parties to automatically use the current price in their agreements.

The current price could be stored on the ledger in a simple contract where the price for a given item can be looked up by key. 
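
A sketch of such a contract (names are illustrative):

    template CurrentPrice
      with
        operator : Party
        item : Text
        price : Decimal
      where
        signatory operator
        -- one price contract per item, addressable by key
        key (operator, item) : (Party, Text)
        maintainer key._1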

But how do the market participants look up the current price? The CurrentPrice contract as defined above is only visible to the operator.

We could make every party an observer on all CurrentPrice contracts by storing a list of observers in the CurrentPrice contract. But maintaining a list of n parties on m price contracts gets expensive and cumbersome fast: adding a single new party requires a transaction of size O(m * n). There are some possible optimizations, but ultimately this approach doesn’t match the obvious mental model for read access. We want to express the role of being able to read the prices. In line with the party/role equivalence, that means we add a new party and give it the right to read the prices:
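
Sketching this on the template above, the new role party is simply an observer:

    template CurrentPrice
      with
        operator : Party
        reader : Party  -- role party: read access to prices
        item : Text
        price : Decimal
      where
        signatory operator
        observer reader
        key (operator, item) : (Party, Text)
        maintainer key._1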

Now, with multi-party submissions, all that’s needed is for agents that should be able to read and use the price data to be authorized to read as “reader”. For example, here’s the payload of an access token for the Daml sandbox that allows Alice to act on behalf of herself while also reading all contracts visible to the “reader” party.
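
The exact shape depends on the IAM setup; for the sandbox’s token format it might look like this (ledger and application IDs are placeholders):

    {
      "https://daml.com/ledger-api": {
        "ledgerId": "sandbox",
        "applicationId": "market-app",
        "admin": false,
        "actAs": ["Alice"],
        "readAs": ["reader"]
      }
    }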

Use case 2: role-based access control

In this section, we will have a look at how to implement role-based access control in Daml. We’ll do so quite generically by showing how multi-party submissions can be used to model groups of parties (think of them as individuals) and give those groups some special access.

First, we define a template that captures the fact that a given party is a member of a group.
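
A sketch:

    template GroupMember
      with
        org : Party
        group : Party
        member : Party
      where
        signatory org
        observer member
        -- at most one membership contract per (org, group, member) triple
        key (org, group, member) : (Party, Party, Party)
        maintainer key._1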

where org is the Daml party representing the entire organisation (the root of trust for current group hierarchy), group is the Daml party representing the group membership role, and member is a Daml party representing an individual person. Group membership can be checked through a fetch by key operation:
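
For example, as a small helper in Update (assuming the GroupMember template above):

    assertMember : Party -> Party -> Party -> Update ()
    assertMember org group member = do
      -- aborts the transaction if no such membership contract exists
      _ <- fetchByKey @GroupMember (org, group, member)
      pure ()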

which will fail if the given party is not a member of the given group. Now let’s give a group special permissions by modeling one group being able to administer another.
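
A sketch of such a template (key and field layout assumed), guarded by the membership check from above:

    template GroupAdmin
      with
        org : Party
        group : Party
        adminGroup : Party
      where
        signatory org
        observer adminGroup
        key (org, group) : (Party, Party)
        maintainer key._1

        -- flexible controller: any party may attempt this choice...
        nonconsuming choice AddMember : ContractId GroupMember
          with
            admin : Party
            newMember : Party
          controller admin
          do
            -- ...but only members of the admin group get past this check
            assertMember org adminGroup admin
            create GroupMember with org = org; group = group; member = newMember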

Here adminGroup is the Daml party representing the group admin role. Note that the AddMember choice uses flexible controllers and can be exercised by anyone. The assertMember call is used to make sure the party exercising the choice is indeed a member of the adminGroup role. In an organisation where Alice is the administrator of the legal team, she could use the following Daml Script to add Bob to the legal team:
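
Roughly as follows (party handles would come from allocation or lookup; in a deployed setting the readAs right comes from her access token instead):

    import Daml.Script

    addBob : Party -> Party -> Party -> Party -> Party -> Script (ContractId GroupMember)
    addBob org legalTeam legalTeamAdmins alice bob =
      -- Alice acts as herself and reads as the legalTeamAdmins role party
      submitMulti [alice] [legalTeamAdmins] do
        exerciseByKeyCmd @GroupAdmin (org, legalTeam) AddMember with
          admin = alice
          newMember = bob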

Note how Alice has read-only access to the legalTeamAdmins contracts, which allows her to access the GroupAdmin contract. Similar to the first use case, the read delegation was partially moved to the access token provider – in the above example, all administrators of the legal team would need an access token that allows them to read on behalf of the legalTeamAdmins party (e.g., “readAs”: [“legalTeamAdmins”]). However, administrative actions are still validated in the Daml model, and the ledger remains the single source of truth for who can administer what group, and which individual triggered changes.

Use case 3: Ledger Initialization

A more administrative use case is that of ledger initialization. For example, imagine you are implementing a new workflow in Daml where the core of your workflow is a simple Iou contract:
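
For instance, something like:

    template Iou
      with
        issuer : Party
        owner : Party
        amount : Decimal
        currency : Text
      where
        -- both parties must authorize creation
        signatory issuer, owner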

The Iou contract has multiple signatories. Previously, if you wanted to test your template using Daml Script, you had to implement a propose-accept workflow and submit multiple commands in order to create a single Iou contract. With multi-party submissions, you can instead send a single create command with both the issuer and the owner as act_as parties.
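
In Daml Script that looks roughly like this:

    import Daml.Script

    initIou : Script (ContractId Iou)
    initIou = do
      bank <- allocateParty "Bank"
      alice <- allocateParty "Alice"
      -- both signatories jointly authorize a single create command
      submitMulti [bank, alice] [] do
        createCmd Iou with
          issuer = bank
          owner = alice
          amount = 100.0
          currency = "USD"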

Not only is this shorter to write, but you can also start testing your code before fully implementing all propose-accept workflows. 

Summary of API changes

To let you jump right in and try out these new features yourself, here is a brief summary of the API changes that you need in order to use multi-party submissions.

Previously, submitted commands had a single party field, which was the party on whose behalf the command should be executed – i.e. the single role the submitting agent was acting in. This field has been deprecated and replaced by the following two fields:

  • actAs: The set of parties on whose behalf the command should be executed. All of these parties jointly authorize the command for the purpose of the ledger model authorization rules.
  • readAs: The set of parties on whose behalf (in addition to all actAs parties) contracts can be retrieved. These parties do not authorize any changes to the ledger. They affect Daml operations such as fetch, fetchByKey, lookupByKey, exercise, and exerciseByKey, which only “see” contracts visible to the submitting parties.

Ledger API

The commands object contains the new actAs and readAs fields described above. The change is backwards compatible: any party specified in the party field is merged into the actAs parties.

Daml Script

There is a new submitMulti command in Daml Script.

Authorization

Note that most production ledgers will secure their Ledger API access, and users will have to add access tokens to their Ledger API requests. If an application wants to submit a multi-party command, it needs an access token which authorizes it to act on behalf of all actAs parties and to read on behalf of all readAs parties.

For the Daml sandbox, the access token payload contains actAs and readAs fields, which need to contain at least all of the parties mentioned in the corresponding command fields.

Daml Developer Monthly – January 2021

What’s New

The results of the third community recognition ceremony are in. Congratulations to Emil and Matt for winning and thank you both for your excellent contributions to our community.

Daml’s default branch name is changing from master to main to have more inclusive naming. Read more about the steps and the reasoning behind it here.

Happy 5 year anniversary to Hyperledger!  We’re proud to be a member and look forward to what the next 5 years will bring!

And we’ve got a new name for the community update, one that we feel better reflects our goals with these posts which is to highlight Daml developers and the ecosystem around them.

Upcoming Events

Andreas and the gang will be at POPL 2021’s Certified Programs and Proofs conference next week (and presenting next Tuesday the 19th). If you’re interested in practical and theoretical topics around formal verification and certification you should definitely check it out.

Anthony will be showing off Daml’s write-once deploy-anywhere ability at the next Hyperledger Tech Study circle this Friday the 15th where he will be deploying the same application to both Fabric 2.2 LTS and Sawtooth.

Anthony will also be presenting on Why Daml at Hyperledger NYC on January 26th at 12PM EST. Join to learn about Daml’s tech stack along with a live demonstration of Daml’s unique ability to be written once and deployed anywhere including Hyperledger Fabric and Sawtooth.

What We’re Reading

György breaks down all the different mental models we have around “blockchains” and “smart contracts” in his latest post, where he rightfully puts these terms in quotes because they’ve come to mean a lot of different things.

Richard’s latest security and privacy news covers newly discovered malware involved in the SolarWinds fiasco, why Parler’s deleted data isn’t deleted, and the secret history of ZIP folders, among many other excellent stories.

Richard is also going to be turning these blog posts into podcasts so make sure to check them out when they’re live on the forum!

György shows us how to be happy developers by building Daml frontends in Elm.

Olusegun published his MSc FE paper on OTC swaps using Daml.

Community Feature and Bug Reports

As you’ll see below we now have support for multi-party submissions, allowing for better role-based access that many in our community have asked for including Jean, Emil, and Zohar. So thanks everyone for pushing for this important and highly useful feature.

Big thanks to James and Luciano for tracking down a bug in our Bond Issuance refapp!

Other Fun

Congrats to György for becoming an EU Blockchain Observatory and Forum expert panel member.

Bernhard’s secret santa project is complete and some of us have already started receiving our gifts.

Daml Connect 1.9

Highlights

  • Multi-party submissions allow new forms of data sharing and use-cases including public reference data and better role-based access controls. 
  • Daml-LF 1.11 is now in Beta, bringing Choice Observers and Generic Maps.
  • Introduction of Ledger API version 1.8.

The full release notes and installation instructions for Daml Connect 1.9.0 can be found here.

Impact and Migration

  • Multi-party submissions add new optional fields to the Ledger API and some interface definitions in Java. If you compile your own gRPC service stubs, or maintain your own implementation of some of the provided Java interfaces, you may need to add the new fields or methods. Details under Multi-Party Submissions.
  • Daml-LF 1.11 will not be compatible with the deprecated Sandbox Classic’s default --contract-id-seeding=no mode. If you use Sandbox Classic, you will need to either switch to a different contract seeding mode, or pin Daml-LF to 1.8. Details under Daml-LF 1.11.
  • Daml’s main development branch has been renamed from master to main to be more inclusive. If you had a source code dependency on the repository you need to switch to the new branch.

What’s Next

  • The Trigger Service is getting ever more robust and is likely the next big feature to come to Daml Connect.
  • Improved exception handling in Daml didn’t make the cut for Daml-LF 1.11, but remains a high priority.
  • We have started work on new features targeted at the Enterprise Edition of Daml Connect:
    • A profiler for Daml, helping developers write highly performant Daml code.
    • A mechanism to verify cryptographic signatures on commands submitted to the Ledger API, improving security in deployment topologies where the Ledger API is provided as a service.
    • Oracle DB support throughout the Daml Connect stack in addition to the current PostgreSQL support.

Community Update – December 2020

Our community recognition ceremony is open! Nominate who you think deserves to win! 

What’s New in The Ecosystem

@gyorgybalazsi shared what he, @Gyorgy_Farkas, Janice, and Dani learned from participating in Odyssey and grappling with new problems. From fishing quotas to licenses to matching reports with independent observers it’s thoroughly impressive how much was built in such a short time.

@eric_da is presenting on why open banking and open apis are the tip of the iceberg at Open Core Summit on 12/17 at 4:05 PST/7:05 EST.

@bartcant got his bobblehead

Odyssey and YHack are a wrap! The winning YHack team wrote a small DAML backend, great to see someone pick up DAML and run with it so quickly. Odyssey rewards were announced here.

Blogs and Posts

@entzik released the 2nd part of “A Doodle in DAML” , with the clearest explanation I’ve ever seen of how a preconsuming choice works. Awesome stuff!

@anthony talked to Gints at Serokell about why DAML is a different kind of programming language, how it’s rooted in Haskell, and how its unique features make it a great option for writing smart contracts.

Videos

@andreolf recently demonstrated how to use DAML to build robust data pipelines in practice. Really great presentation.

Manish, Leve, and Francesco recently gave a presentation on DAML for beginners; check out the video recording here.

Corporate News

DAML is now available for Microsoft’s Azure Database, check it out.

Knoldus has added DAML to their Techhub, with lots of projects to check out and some even by our very own forum members @Nishchal_Vashisht, @upanshu21, and @ksr30!

HKEX, the world’s second-largest exchange group by market capitalization, is now using DAML to standardize and streamline their post-trade workflows.

Demex, a climate risk insurtech is using Sextant for DAML (on Sawtooth) to build financial risk solutions.

VMware Blockchain 1.0 is released with DAML support right out of the box! Check out the full announcement here.

Other Fun

If you didn’t know Richard has weekly updates on security and privacy news, you can check them out here.

DAML Connect 1.8

Highlights

  • The API coverage of DAML Script, the JavaScript Client Libraries, and the DAML Assistant has been improved.
  • DAML Driver for PostgreSQL Community Edition is now stable.
    • Action required unless you were already using the --implicit-party-allocation=No flag.
    • Running the Sandbox with persistence is now deprecated.

The full release notes and installation instructions for DAML Connect 1.8.0 can be found here.

Impact and Migration

There are no backwards incompatible changes to any stable components.

The DAML Driver for PostgreSQL (daml-on-sql) Community Edition ledger has been downloadable as Early Access from GitHub releases since SDK 1.4.0. The option --implicit-party-allocation used to be on by default but has now been removed. Users who were using the DAML Driver for PostgreSQL with implicit party allocation will now need to allocate parties explicitly.

Users running DAML Triggers (Early Access) against authenticated ledgers may now need to handle authentication errors externally.

What’s Next

The eagle-eyed reader may have noticed that some features have appeared in the “What’s Next” section for some time, and that there hasn’t been a DAML-LF release since SDK 1.0.0 and DAML-LF 1.8. This will change with one of the next releases, because several features that require a new DAML-LF version are currently being finalized:

  • Choice observers (see Early Access section above)
  • Generic Maps
  • Better Exception Handling in DAML

Work also continues to move DAML Triggers and the Trigger Service (see Early Access section above) to general availability.

Lastly, the multi-party read features on the gRPC Ledger and JSON APIs will be supplemented with multi-party writes, allowing the submission of commands involving multiple parties via the APIs as long as they are all hosted on the same node.

Secure DAML Infrastructure – Part 2 – JWT, JWKS and Auth0

In Part 1 of this blog, we described how to set up a PKI infrastructure and configure the DAML Ledger Server to use secure TLS connections and mutual authentication. This protects data in transit and ensures only authorised clients can connect.

An application will need to issue DAML commands over the secure connection and retrieve the subset of contract data that it is authorised to see. To enable this, the Ledger Server uses HTTP security headers (specifically “Authorization” Bearer tokens) to receive an authorization token from the application that describes what it is authorised to do. 

The user or application is expected to authenticate against an Identity Provider and in return receive an authorization token for the Ledger. This is presented on every API call.

What are JWT & JWKS?

JSON Web Tokens (JWT) are an industry-standard way to transmit data between two parties. Full details on JWT can be found in JWT: Introduction and the associated JWT Handbook. Here we will provide a summary of the specification and how the Ledger Server uses custom claims to define the allowed actions of an application.

JWTs are JSON-formatted structures with a HEADER, a PAYLOAD and a SIGNATURE. The header defines the algorithm used to process the payload, in particular the algorithm used to sign (or encrypt) it. The payload contains the details of the authorization given to the application, and the signature covers the structure to ensure it has not been tampered with in transit. Each section is then base64-encoded, with a dot separator between the sections. The result is placed in the Authorization HTTP header and passed as part of each HTTP request.

An end-user or application obtains the token by first authenticating to an Identity Provider and being issued an access token. The OAuth protocol defines several means to achieve this, including flows for web users (a 3-way handshake that also asks for consent from the human end-user) and for applications (the 2-step Client Credentials Flow, which uses a client_id and client_secret for machine accounts). The Identity Provider validates the provided credentials and issues a signed token for the requested service or API.

So how does the Ledger Server get the public key of the signer so it can validate the signature and trust the token? This is where JSON Web Key Sets (JWKS) come in. Each Identity Provider publishes a well-known URL, and we configure the Ledger Server to query this to retrieve the JWKS structure. This contains the public key of the signer and some additional metadata.

In the previous blog post you may have noticed a parameter to the Ledger Server as follows:
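
For an Auth0 tenant, that parameter points the ledger at the tenant’s JWKS URL; for the sandbox the invocation looks roughly like this (the tenant host is a placeholder):

    daml sandbox \
      --auth-jwt-rs256-jwks="https://acme.eu.auth0.com/.well-known/jwks.json"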

This tells the Ledger Server to trust tokens generated by, in this case, a specific Auth0 tenant and to use the URL to get the Auth0 JWKS. It also enforces token signing with RS256 – RSA keys with a SHA-256 hash function.

JWT can also use other algorithms, including Elliptic Curve (ES256 – the EC P-256 curve with SHA-256) and shared secret (HS256). We do not recommend using HS256 for anything more than development / testing, as it is open to brute-force attacks on the shared secret.

In-depth JWT Example

To give some more detail, an authenticated application submits a Ledger API command over HTTPS (gRPC or JSON) and provides a security header.

The token string is of the format:

HEADER.PAYLOAD.SIGNATURE

If we separate out the sections of the JWT token, you get:

This is the header, payload and signature encoded as base64 with each section separated by a dot. This is normally a single string.

After decoding (using the provided script ./decode-jwt.sh <filename>) or via the JWT Debugger (https://jwt.io/), the JWT resolves into its header and payload portions.
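
For illustration, a decoded header of the shape used in this sample (all values are placeholders):

    {
      "alg": "RS256",
      "typ": "JWT",
      "kid": "MEYwRTM3Qj"
    }

and a matching payload:

    {
      "https://daml.com/ledger-api": {
        "ledgerId": "acme-ledger",
        "applicationId": "ex-secure-daml-infra",
        "admin": false,
        "actAs": ["Alice"],
        "readAs": ["Alice"]
      },
      "iss": "https://acme.eu.auth0.com/",
      "sub": "auth0|alice",
      "aud": "https://daml.com/ledger-api",
      "azp": "AbCdEf123",
      "gty": "client-credentials",
      "iat": 1610000000,
      "exp": 1610086400
    }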

These represent the header and payload sections. What does this tell us? 

The header shows that the JWT was signed using RS256 with a specific key (the kid value). An identity provider may have multiple signing keys described in the JWKS, and this selects which one to use to verify the JWT.

The payload contains many standard attributes (the three-letter combinations) and one custom claim (“https://daml.com/ledger-api”). The standard attributes include:

Tag  Description
alg  Algorithm used for signing, here RS256 [checked by API]
aud  Audience for the token
azp  Authorized party
exp  Expiry, in epoch seconds [checked by API]
gty  Grant type
iat  Issued at, in epoch seconds
iss  Issuer
sub  Subject (i.e. account name)

The custom claim (“https://daml.com/ledger-api”) details a variety of capabilities for this application:

Tag            Description
admin          Whether the application is allowed to call administrative API functions (true/false)
actAs          An array of Ledger Party IDs that the application is allowed to submit DAML commands as
readAs         An array of Ledger Party IDs that the application is allowed to read contracts for
applicationId  A unique ID for the application. If set, the Ledger Server will validate that submitted commands also carry this application ID
ledgerId       The Ledger ID of the ledger that the application is trying to connect to

The authorizing Identity Management provider is expected to set these to appropriate values for the application that is requesting access.

Full details of the API and exposed service endpoints are available in the DAML Documentation. Details of the API and associated permissions are summarised in the Core Concepts section of the sample repo:

https://github.com/digital-asset/ex-secure-daml-infra/blob/master/Documentation/CoreConcepts.md

Public services are available to any application connecting to a ledger (mutual TLS may restrict who can connect, but a valid token – minimally one with no admin rights and no parties – is still required). Administrative services are expected to be used by specific applications or operational tooling. The remaining Contracts, Command and Transaction services are restricted to the set of parties the application is authorised for.

JWKS (JSON Web Key Sets)

The final piece of the puzzle is JSON Web Key Sets (JWKS) which an identity provider exposes to distribute its public key to allow signature verification. 

An example JWKS format is:
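
An illustrative skeleton, with the key material replaced by placeholders:

    {
      "keys": [
        {
          "alg": "RS256",
          "kty": "RSA",
          "use": "sig",
          "kid": "MEYwRTM3Qj",
          "x5t": "MEYwRTM3Qj",
          "x5c": ["<base64-encoded certificate>"],
          "n": "<base64url-encoded RSA modulus>",
          "e": "AQAB"
        }
      ]
    }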

The details for each key include the algorithm being used (RS256), the key type (RSA), the use (signatures), the key ID (the kid value, for which Auth0 uses the key fingerprint x5t), the public key certificate (x5c) and some RSA key parameters (the n and e fields; other fields will be seen for EC keys). JWKS supports the distribution of private keys with additional fields for the private key, but this is not used here.

A receiving service (in this case the Ledger Server API) will use this to validate the signed JWT to validate that it was issued by the trusted provider and is unaltered.

Using an example Identity Provider – Auth0

So now that we have described JWT and JWKS, how do we use these standards? The following builds on the previous post, Easy authentication for your distributed app with DAML and Auth0, which focused on end-user authentication and authorisation. You may want to read that first.

In the reference sample, we provide two options:

  • Authenticating using Auth0 for end-users and service accounts
  • Authenticating services via local JWT provider for CI/CD automation

Auth0

The full detailed steps and scripts are described in the reference documentation. To use Auth0 you will need to do the following:

  1. Create an Auth0 tenant – a free trial tenant is usable for this sample
  2. Create an Auth0 API to represent the Ledger Server API. This is the target for user and services to access Ledger information via the API
    1. Create New API
    2. Provide a name (ex-secure-daml-infra)
    3. Provide an Identifier (https://daml.com/ledger-api)
    4. Select Signing Algorithm of RS256
  3. Create an Auth0 “web application” for end-user authentication and access. This uses a single page application (SPA) with React, to create a sample authenticated page that displays the logged in user details and accesses the current contract set via the API.
    1. Create new Application
    2. Select Single Page Application
    3. Select React
    4. In App Settings:
      1. Set Allowed CallBack URLS, Allowed Logout URLS, Allowed Web Origins:
        1. http://localhost:3000, https://web.acme.com
  4. Configure two Auth0 “rules” – this is a programmatic way in Auth0 to add custom claims to tokens generated for user authentication requests

Rule: “Onboard user to ledger”

Rule: “Add Ledger API Claims”

  5. Set up and configure end-users and define login credentials and metadata about their Ledger ID. One of the provided Rules allows metadata to be configured on first access by the user. The DAML Sandbox auto-registers new Parties on first use, but production ledgers, particularly on DLT platforms, may require more complex provisioning flows.
    1. Create a New User
    2. Enter Email and your preferred (strong) passphrase
    3. If using local Username / Password database, set connection to Username-Password-Authentication.
    4. In the app_metadata section of the User, add the following template. You will need to adjust it per user so that partyIdentifier matches the name of the user’s party on the Ledger, i.e. “Alice”, “Bob”, “George”

User Metadata
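
A minimal shape for that metadata, assuming only the partyIdentifier field mentioned above:

    {
      "partyIdentifier": "Alice"
    }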

  6. Define services (machine-to-machine “applications” in Auth0 terminology) and some associated metadata for each service – which parties they can act or read on behalf of. These are linked to the above API. Each m2m application defines client_id and client_secret credentials for the service
  7. Configure an Auth0 “hook” – the equivalent of the rules above for services: it defines the custom claims added to machine-to-machine tokens (an example token payload is sketched below)
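For reference, the Ledger API expects its custom claims under the https://daml.com/ledger-api namespace, matching the API identifier configured above. A token payload for a service acting as Alice might look roughly like this (all values illustrative):

    {
      "https://daml.com/ledger-api": {
        "ledgerId": "some-ledger-id",
        "applicationId": "ex-secure-daml-infra",
        "actAs": ["Alice"],
        "readAs": ["Alice"],
        "admin": false
      },
      "iss": "https://your-tenant.auth0.com/",
      "aud": "https://daml.com/ledger-api",
      "exp": 1606780800
    }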

Once this is in place, you can then update the following:

  • env.sh
    • Add the Auth0 tenant details and each of the service account credential pairs. In a production setting you would use some form of credential vault (HashiCorp Vault, AWS or GCP KMS services, etc.) to store these and pass them to the respective services.
  • Update the ./ui/src/auth_config.json to point to the correct Auth0 tenant

The Auth0 environment is now ready for use.

Local JWT Provider

Since depending on third-party services is complicated in automated testing environments, we implemented a sample JWT provider that uses code-signing certificates issued from the local PKI.

In particular, you can set an option in the env.sh script to require the environment to use a local signer. In this model the following is used:

  • A signing certificate is issued from the Intermediate CA
  • A small Python program then issues JWT tokens for each service, with the respective custom claims for access. These are placed in the <pwd>/certs/jwt directory
  • A simple Python web server exposes a local JWKS endpoint serving the code-signing certificate reformatted as JWKS. It also acts as a simple authentication provider for the Python bot, which uses the two-step OAuth client credentials flow to obtain a token

The steps are in the following scripts:

  • ./make-jwt.sh
  • ./run-auth-service.sh

The tokens are issued with an expiry of one day.

Summary and Next Steps

In this post we reviewed the JWT and JWKS standards, which allow an application to request an authorization token from an identity provider and submit it along with commands to a ledger. We showed how to use a sample identity provider (in this case Auth0) to authenticate end-users and service accounts and obtain appropriate authorization tokens.

The next step is to run the sample environment and execute some tests against it. This is the topic of the final part of this series. If you want to see the first part, on PKI and certificates, please check here:

Read the first part on PKI and certificates

Zooming in on DAML’s performance

tl;dr If you care about performance, use DAML’s builtin syntax for accessing record fields.

Introduction

I guess it is no secret that I’m not the biggest fan of lenses, particularly not in DAML. I wouldn’t go as far as saying that lenses, prisms, and the other optics make my eyes bleed but there are definitely better ways to handle records in most cases. One case where the builtin syntax of DAML has a clear advantage over lenses is record access, getting the value of a field from a record:

    record.field1.field2

It doesn’t get much clearer or much shorter. No matter what lens library you use, your equivalent code will look something like

    get (lens @"field1" . lens @"field2") record

or maybe

    get (_field1 . _field2) record

if you’re willing to define lenses like _field1 and _field2 for each record field. Either way, the standard DAML way is hard to beat. And if you need to pass a field accessor around as a function, the (.field1) syntax has you covered as well.

Clarity is often in the eye of the beholder, and short code is not per se good code. The only thing that seems to matter universally is performance. Well, let’s have a look at the performance of both styles then. If you want to play along with the code in this blog post, you can find a complete version of it in a GitHub Gist.


van Laarhoven lenses

Before we delve into a series of benchmarks, let’s quickly recap van Laarhoven lenses. The type of a lens for accessing a field of type a in a record of type s is

    type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s)

If it wasn’t for everything related to the functor f in there, the type would be the rather simple (a -> a) -> (s -> s). This looks like the type of a higher-order function that turns a function for changing the value of the field into a function for changing the whole record. That sounds pretty useful in its own right and is also something that could easily be used to implement a setter for the field: just use \_ -> x as the function for changing the field in order to set its value to x. Alas, it seems totally unclear how we could use such a higher-order function to produce a getter for the field. That’s where f comes into play. But before we go into the details of how to get a getter, let’s implement a few lenses first.

Given a record type

    data Record = Record with field1: Int; ...

a lens for field1 can be defined by

    _field1: Lens Record Int
    _field1 f r = fmap (\x -> r with field1 = x) (f r.field1)

In fact, this is the only way we can define a function of this type without using any “magic constants”. More generally, a lens can be made out of a getter and a setter for a field in a very generic fashion:

    makeLens: (s -> a) -> (s -> a -> s) -> Lens s a
    makeLens getter setter f r = setter r <$> f (getter r)

Again, this is the only way a Lens s a can be obtained from getter and setter without explicitly using bottom values, such as undefined.

Using DAML’s HasField class, we can even produce a lens that works for any record field:

    lens: forall x s a. HasField x s a => Lens s a
    lens = makeLens (getField @x) (flip (setField @x))

The lens to access field1 is now written as

    lens @"field1"

Remark. In my opinion, the most fascinating fact about van Laarhoven lenses is that you can compose them using the regular function composition operator (.), and the field names appear in the same order as when you use the builtin syntax for accessing record fields, as indicated by the example in the introduction.

Implementing the getter

That’s all very nice, but how do we actually use a lens to access a field of a record? As mentioned above, that’s where the f comes into play. What we are looking for is a function

    get: Lens s a -> s -> a
    get l r = ???

Recall that the type Lens s a has the shape forall f. Functor f => .... This means the implementation of get can choose an arbitrary functor f and an arbitrary function of type (a -> f a) and pass them as arguments to l. The functor that will solve our problem is the so-called “const functor” defined by

    data Const b a = Const with unConst: b

    instance Functor (Const r) where
        fmap _ (Const x) = Const x

The key property of Const is that fmap does pretty much nothing with it (except for changing its type). We can use this to finally implement get as follows:

    get: Lens s a -> s -> a
    get l r = (l Const r).unConst

With this function, we can get the value of field1 from an arbitrary record r by calling

    get (lens @"field1") r

For the sake of completeness, let’s quickly define a setter as well. As insinuated above, we don’t really need the f for that purpose. That’s exactly what the Identity functor is for:

    data Identity a = Identity with unIdentity: a

    instance Functor Identity where
        fmap f (Identity x) = Identity (f x)

    set: Lens s a -> a -> s -> s
    set l x r = (l (\_ -> Identity x) r).unIdentity
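For example, set (lens @"field1") 42 r sets field1 to 42. The update-with-a-function combinator that many lens libraries call over (we will meet it again towards the end of this post) falls out of the same Identity trick; as a sketch:

    over: Lens s a -> (a -> a) -> s -> s
    over l f r = (l (\x -> Identity (f x)) r).unIdentity

With this, set l x is just over l (\_ -> x).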

How to micro-benchmark DAML

Micro-benchmarking DAML code is unfortunately still a bit of an art form. We will use the scenario-based benchmarking approach described in a readme in the daml repository. To this end, we need to write a scenario that runs get (lens @"field") r for some record r. The benchmark runner will then tell us how long the scenario runs on average.

In order to write such a scenario, we need to take quite a few things into consideration. Let’s have a look at the code first and explain the details afterward:

    records = map Record [1..100_000]         -- (A)

    benchLens = scenario do
        _ <- pure ()                          -- (B)
        let step acc r =
                acc + get (lens @"field1") r  -- (C)
        let _ = foldl step 0 records          -- (D)
        pure ()

The explanations for the marked lines are as follows:

  • (D) Running a scenario has some constant overhead. By running the getter 100,000 times, we make this overhead negligible per individual run of the getter. However, folding over a list has some overhead too, including some overhead for each step of the fold. In order to account for this overhead, we use a technique that could be called “differential benchmarking”: we run a slightly modified version of the benchmark above, where line (C) is replaced by acc + r and line (A) by records = [1..100_000] (this noop variant is sketched after this list). The difference between the two benchmarks tells us how long it takes to execute line (C) 100,000 times.
  • (A) In order for the differential benchmarking technique to work, we need to compute the value of records outside of the actual measurements since allocating a list of 100,000 records takes significantly longer than allocating a list of 100,000 numbers. To this end, we move the definition of records to the top-level. The DAML interpreter computes top-level values the first time their value is requested and then caches this value for future requests. The benchmark runner fills these caches by executing the scenario once before measuring.
  • (B) Due to the aforementioned caching of top-level values and some quirks around the semantics of do notation, we need to put our benchmark after at least one <- binding. Otherwise, the result of the benchmark would be cached and we would only measure the time for accessing the cache.
  • (C) We put the code we want to benchmark into a non-trivial context to reflect its expected usage. If we dropped the acc +, then get (lens @"field1") r would be in tail position of the step function and hence not cause a stack allocation. However, in most use cases the get function will be part of a more complex expression and its result will be pushed onto the stack. Thus, it seems fair to benchmark the cost of running the get plus the pushing onto the stack. The additional cost of the addition is removed by the differential benchmarking technique.

First numbers

Recall that the objective of this blog post is to compare lenses to the builtin syntax for accessing record fields in terms of their runtime performance. To this end, we run three benchmarks:

  1. benchLens as defined above,
  2. benchNoop, the variant of benchLens described under (D) above,
  3. benchBuiltin, a variant of benchLens where line (C) is replaced by acc + r.field1.

If T(x) denotes the time it takes to run a benchmark x, then we can compute the time a single get (lens @"field1") r takes by

    (T(benchLens) - T(benchNoop)) / 100_000

Similarly, the time a single r.x takes is determined by

    (T(benchBuiltin) - T(benchNoop)) / 100_000

Running these benchmarks on my laptop produced the following numbers:

    x               T(x)        (T(x) - T(benchNoop)) / 100_000
    benchNoop       11.1 ms     —
    benchLens       188.7 ms    1776 ns
    benchBuiltin    15.7 ms     46 ns

Benchmarks of the polymorphic lens and the builtin syntax.

Wow! That means a single record field access using the builtin syntax takes 46 ns whereas doing the same with get and lens takes 1776 ns, which is roughly 38 × 46 ns. That is more than 1.5 orders of magnitude slower!

Why are lenses so slow as getters?

This is almost a death sentence for lenses as getters. But where are these huge differences coming from? If we look through the definitions of lens and get, we find that there are quite a few function calls going on and that the two typeclasses Functor and HasField are involved in this as well. Calling more functions is obviously slower. Typeclasses do have a significant runtime overhead in DAML since instances are passed around as dictionary records at runtime and calling a method selects the right field from this dictionary.

If we don’t want to abandon the idea of van Laarhoven lenses, we cannot get rid of the Functor typeclass. But what about HasField? If we want to be able to construct lenses in a way that is polymorphic in the field name, there’s no way around HasField. However, if we were willing to write plenty of boilerplate like the monomorphic _field1 lens above, we could do away with HasField.
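Concretely, the monomorphic variant of the benchmark (a sketch, reusing records and the _field1 lens from above) replaces line (C) like this:

    benchMono = scenario do
        _ <- pure ()
        let step acc r =
                acc + get _field1 r           -- (C) with the monomorphic lens
        let _ = foldl step 0 records
        pure ()

Benchmarking this approach yields: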

    x               T(x)        (T(x) - T(benchNoop)) / 100_000
    benchMono       92.5 ms     814 ns

Benchmark of the monomorphic _field1 lens.

Accessing fields with monomorphic lenses is twice as fast as with their polymorphic counterparts but still more than an order of magnitude slower than using the builtin syntax. This implies that no matter how much better we make the implementation of lens, even if we used compile-time specialization for it, we wouldn’t get better than a 17x slowdown compared to the builtin syntax.

Temporary stop-gap measures

If a codebase is ubiquitously using lenses as getters, then rewriting it to use the builtin syntax instead will take time. It might make sense to replace some very commonly used lenses with monomorphic implementations. However, in a codebase defining hundreds of record types, each of them with a few fields, there is most likely no small group of lenses whose monomorphization makes a difference.

Fortunately, there’s one significant win we can achieve without changing much code at all. The current implementation of lens is pretty far away from the implementation of _field1. If we move lens closer to _field1, we arrive at

    fastLens: forall x r a. HasField x r a => Lens r a
    fastLens f r = fmap (\x -> setField @x x r) (f (getField @x r))

Benchmarking this implementation gives us

    x               T(x)        (T(x) - T(benchNoop)) / 100_000
    benchFastLens   128.9 ms    1178 ns

Benchmark of the polymorphic fastLens.

These numbers are still not great, but they are at least a 1.5x speedup compared to the implementation of lens.

Chains of record accesses

So far, our benchmarks were only concerned with accessing one field in one record. A pattern that occurs quite frequently in practice are nested records and chains of record accesses, as in

    r.field1.field2.field3

With the builtin syntax, every record access you attach to the chain is as expensive as the first record access. Benchmarks confirm this linear progression. However, we could easily make every record access after the first one in a chain significantly faster in the DAML interpreter.

There’s a similar linear progression when using get and fastLens. Unfortunately, we have no way of optimizing such lens chains since they are completely opaque to the compiler and the interpreter.
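For the curious, a chain benchmark can be set up with nested record types along the following lines (a sketch with illustrative names; the step functions replace line (C) as before):

    data Inner = Inner with field3: Int
    data Middle = Middle with field2: Inner
    data Outer = Outer with field1: Middle

    -- builtin chain: each access costs about the same as a single access
    stepBuiltin acc r = acc + r.field1.field2.field3

    -- lens chain: composed with (.), also linear in the chain length
    stepLens acc r =
        acc + get (fastLens @"field1" . fastLens @"field2" . fastLens @"field3") r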

Conclusion

I think the numbers say everything there is to say:

    Method                              Time (in ns)    Slowdown vs builtin
    builtin                             46              1x
    monomorphic lens (_field1)          814             17.7x
    fast polymorphic lens (fastLens)    1178            25.6x
    polymorphic lens (lens)             1776            38.6x

Summary of the benchmarks.

In view of these numbers, I recommend that everybody who cares about performance use DAML’s builtin syntax for accessing record fields!

Focusing only on getters might be modestly controversial since lenses also serve a purpose as setters. I expect the differences in performance between DAML’s builtin syntax for updating record fields

    r with field1 = newValue

and using set and a lens to be in the same ballpark as for getters when updating a single field in a single record. When updating multiple fields in the same record, the DAML interpreter already performs some optimizations to avoid allocating intermediate records. Such optimizations are impossible with lenses.
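To illustrate (a sketch; r, x and y are placeholders): the builtin syntax updates both fields with a single new record allocation, whereas chained set calls build an intermediate record only to discard it immediately.

    -- one allocation: both fields updated at once
    r with field1 = x; field2 = y

    -- two allocations: the inner set builds a record the outer one throws away
    set (lens @"field2") y (set (lens @"field1") x r)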

However, when it comes to updating fields in nested records, DAML’s builtin syntax is not particularly helpful:

    r with field1 = r.field1 with field2 = newValue

It gets even worse when you want to update the value of a field depending on its old value using a function f. In many lens libraries this function is called over and can be used like

    over (lens @"field1" . lens @"field2") f r

Expressing the same with DAML’s builtin syntax feels rather clumsy

    r with field1 = r.field1 with field2 = f r.field1.field2

If we ever want to make lenses significantly less appealing in DAML than they are today, we need to innovate and make the builtin syntax competitive when it comes to nested record updates. Who would still want to use lenses if you could simply write

    r.field1.field2 ~= f

in DAML?



Digital Customer Experiences Using Smart Contracts – Part 2

In my last blog on Enhancing Digital Customer Experiences Using Smart Contracts, we looked at how customer preferences management can be dramatically simplified using smart contracts. A smart-contract-based approach avoids treating customer preferences management as an add-on or external database (even if it physically is one), which in turn avoids costly reconciliations and process breaks due to data mismatches.

Today I’m going to build on that premise and discuss how we can streamline the management of customer preferences across multiple companies. This leads to the creation of cross-industry and cross-company customer experiences, while improving operational efficiency and promoting customer privacy. Note that this is different from executing a business process such as supply chain or trade finance across companies; our focus here is on digital, personalized customer experiences.

The motivation for this post is quite simple. Customers are demanding extreme personalization; the digital revolution is causing tough competition on pricing, so differentiating and delivering value through engagement is paramount; and time to market for innovation is becoming a critical enabler of business success or even survival. The collaboration model using DAML outlined in this post can help meet all these goals.

This post is relevant for those who present a business partnership to customers (e.g. co-branded credit cards, airline loyalty programs). It is also relevant to those who would like to present such a face to their customers but have been unable to do so because of the associated technological and business process complexity (e.g. multiple complementary retailers, as in the ill-fated Amex Plenti program).

Examining the problem

In the previous blog post, we considered the example of a credit card company managing customer preferences across its card products and business divisions. 

This time let’s take the example of a fitness center and an insurer who would like to create mutual business benefits:

  • Fewer insurance claims (for the insurer)
  • More and longer fitness subscriptions (for the fitness center)

At the same time, this partnership benefits the customers in multiple ways:

  1. Discounts on insurance
  2. Personalized fitness goals
  3. Discounts on fitness subscriptions
  4. Seamless subscriptions to third-party services such as supermarkets, nutritionists, etc.

I’ve written about this actual business partnership before if you are interested in learning more. These kinds of customer experience ecosystems will become the norm, so in this post I’m describing how we could accomplish this more simply using DAML, remove reconciliations, and make this a scalable ecosystem whose membership can grow or shrink on demand.

The first few problems you encounter when thinking about such an ecosystem can almost cause you to give up on the ambition; so far there hasn’t been a way to do this using technology specifically meant to solve exactly these problems:

  1. Inviting independent participants to an ecosystem 
  2. Allowing customers to receive uniform experiences as they engage across this ecosystem
  3. Maintaining privacy of customer data between participants 
  4. Eliminating back and forth data transfer and reconciliation between participants

But let’s see how a DAML-enabled ecosystem can make this a breeze while maintaining the privacy and confidentiality of each party. The overall business process looks like the following:

Representative customer experience flow (enables a single golden source of data while maintaining privacy – eliminating data reconciliation and mismatch, allowing a common view of the business process, making compliance and audit easy)

Onboarding Participants

We have at least three types of participants, but of course we can make this solution extensible if it is intended to be a platform.

Customers can be registered from a campaign by creating a customer registration record that they can accept. That allows us to assign other parameters on the underlying customer record. I’ve assumed a single insurer and a single fitness center at this time, and that someone has been designated to maintain the ecosystem and decide who should be onboarded. In a closed network, this role can be assumed by either a consortium or one of the participants. The model can also be made more generic to handle different types of entities. A sketch of this onboarding flow follows.
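As a sketch of what the registration flow could look like in DAML (template and field names are illustrative, not taken from a production application):

    template CustomerRegistration
      with
        operator: Party
        customer: Party
      where
        signatory operator

        controller customer can
          AcceptRegistration: ContractId Customer
            do create Customer with
                 operator = operator
                 customer = customer
                 shareVisits = False

    -- The accepted record is signed by both parties; further parameters
    -- can be assigned on this contract after acceptance.
    template Customer
      with
        operator: Party
        customer: Party
        shareVisits: Bool
      where
        signatory operator, customer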

Note on enterprise adoption: the DAML smart-contract-based workflow outlined here is exposed through standard REST APIs. Current web, mobile, and other applications can continue to function in the usual way, with the underlying workflow and a golden source of data managed by DAML as per the modeled privacy requirements.

Capturing and Sharing Fitness Visits

One of the goals of this partnership is to share the healthy practices adopted by the customer with the insurer. This allows the insurer to do things such as reducing premiums, reimbursing fitness center fees in part or in full, and computing the business value of the partnership; the fitness center can do the same with the data shared with it.

We do this by allowing the fitness center to send the information on customer visits to the insurer. This is done with customer consent of course. This consent can also be revoked at any time (not shown).

A single underlying customer record allows both the insurer and the fitness center to do so. A sketch of such a visit record follows.
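Under the same illustrative naming as the onboarding sketch above, a visit record could look like this; it would typically be created via a choice on the jointly signed Customer contract, so that both signatures are available:

    template FitnessVisit
      with
        fitnessCenter: Party
        customer: Party
        insurer: Party
        visitDate: Date
      where
        signatory fitnessCenter, customer
        -- The insurer sees the visit only because the customer consented;
        -- revoking consent simply stops new visits naming the insurer.
        observer insurer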

Personalizing the Fitness Experience

Being part of an ecosystem, the insurer may well want to personalize the fitness goals that the customer should act upon. These goals could be preventative, based on the customer’s risk profile, or required to achieve full recovery from something that has already occurred. In the past this has been laden with privacy risks and data storage compliance issues. With DAML, all of that is simplified.

The customer simply allows the insurer to attach already existing health goals. These goals can then be viewed by the trainer at the fitness center, who can deliver truly personalized fitness. Such a model can also evolve into an accountability model for each party – customer, trainer, fitness center, and insurer.

Expanding the ecosystem

As you may have guessed, we can extend this ecosystem to other parties as well. For example, we may want to onboard nutritionists to help develop the health goals, or a supermarket to offer discounts on the right food. We may also onboard local marathons and sports apparel retailers to further personalize the experience.

In the end, not only does the customer win, but so does every party who is part of the ecosystem. We can say goodbye to blind mass campaigns, and truly personalize the experience through cross-industry customer journeys.


Community Update – November 2020

Update: 1.7.0 has been released and you can read the full release notes here.

Block8 published part two AND part three of their DAML vs. Corda series. Part 2 covers ease of learning and documentation while Part 3 dives into functionality. A must-read series.

Luciano wants to know the community’s thoughts on how to expand DAML functionality and reduce code duplication via additional libraries and contributions.

What’s New in the Ecosystem

We’ll be holding two community open door sessions for the 1.7.0 RC, one for US-based timezones and one for APAC. Register here for APAC/Europe morning timezones and here for Americas/Europe evening timezones. Both will be on November 9th, so go sign up! 📝

Shoutouts!

Emil submitted a PR improving the functionality of CI within a DAML project. While the dev team may have some follow-ups, we’re excited to see contributions coming to DAML from outside Digital Asset!

Thanks to Bart for reporting a problem with HTTP proxying in DAZL and to Davin for adding a flag to disable it. Bart has received the requisite Squidly Devourer of Bugs badge for this.

Bart showing off his new DAML hoodie that he won during our last community reward ceremony. If you want your own you can win the next one, or pick one up here.

Odyssey starts on November 13th and György, the DAML’r-extraordinaire, responded to our call and decided to take up hacking on the Sovereign Nature track where participants will build systems that improve the collection and distribution of environmental data. They’re looking for more members so reach out if you’re interested in spending a weekend improving the world.

DA is also sponsoring the Outcompeting Destructive Systems track.

YHack starts on Saturday and runs until November 14th; students will be using DAML to create their vision of a social network.

Corporate News

GFT added support for DAML on Corda! 

Blogs and Posts

Brian pondered what a distributed (and DAML-powered) Twitter would look like.

György published the latest article in his Masterclass series where he reverse-engineered one of our refapps and showed us how to represent bonds and equity options in DAML. Have thoughts? You can chat about his latest post here.

Stephen shared several Scala tips and tricks, showing us the importance of typing our variables in Scala, how <- really desugars when destructuring, and possibly undesired outcomes of using extends AnyVal.

Emil implemented a doodle.com backend in 95 lines of DAML (+ tests). And he even wrote an article about it.

Ed showed us how to secure and test your DAML APIs.

Martin made a build system in 140 lines of TypeScript. Not exactly DAML, but definitely cool.

Other Fun

A fun little story about DAML’s portability from Yuval.

Ayan Works (based out of Pune, India) is looking for DAML developers who can deploy to Fabric and Sawtooth. If this sounds like you, check out the listing here!

If you didn’t know, Richard posts weekly updates on security and privacy news; you can check them out here.

Release Candidate for DAML SDK 1.7.0

Highlights

  • DAML Connect has been introduced as a logical grouping for all those components a developer needs to connect to a DAML network.
  • JSON API, DAML Script, and the JavaScript Client Libraries now support reading as multiple parties.
  • daml start can now perform code generation, and has a quick-reload feature for fast iterative app development.
  • Support for multi-key/query streaming queries in React Hooks
    • New query functions accepting multiple keys supersede the old single-key/query versions, which are now deprecated.
  • DAML Triggers (Early Access) have an overhauled API that is more aligned with DAML Script
    • This change requires a migration detailed below.

The full preliminary release notes and installation instructions for DAML SDK 1.7.0 RC can be found here.

Impact and Migration

  • The compiler now emits warnings if you use advanced and undocumented language features which are not supported by data-dependencies. This only affects you if you use language extensions not documented on docs.daml.com or import DA.Generics. If you receive such warnings, it is recommended that you move off them. If you are getting unexpected warnings, or don’t know how to migrate, please get in touch with us via the public forum or support.digitalasset.com (for registered users).
  • If you are using stream queries in the React Hooks, we recommend you migrate to the new multi-key versions. The migration is detailed below. The old functions are now deprecated, meaning they may be removed with a major release 12 months from now.
  • If you are using DAML Triggers, you’ll need to migrate them to the new API.

What’s Coming

We are continuing to work on performance of the DAML integration components and improving production readiness of DAML Ledgers, but there are exciting features and improvements in the pipeline for the next few releases as well.

  • The Trigger Service will reach feature completion and move into Beta
  • The authentication framework for DAML client applications (like the Trigger Service) is being revisited to make it more flexible (and secure!)
  • The build process for DAML client applications using the JavaScript/TypeScript tooling is being improved to remove the most common error scenarios
  • DAML’s error and execution semantics are being tidied up with a view towards improving exception handling in DAML
  • DAML will get a generic Map type as part of DAML-LF 1.9