KYC Workflow for the investment fund industry

How to reduce the administrative burden for your KYC process with DLT and Daml (with free e-book)

All financial institutions are required to perform “know your customer” (KYC) checks to comply with regulations and tackle risk contagion throughout the holding chain (a chain of “custody” service providers). This is most pertinent to investor onboarding, where due diligence processes require collecting and validating a client’s information, which in turn generates numerous challenges for the onboarding and onboarded parties:

  • Redundant processes across individual holding chain participants
  • Data inconsistencies across individual holding chain participants
  • Compliance difficulties with evolving investor data
  • Delays for onboarded customers

In practice, opening an account at a custodian may take weeks due to onboarding requirements and KYC due diligence. KYC processes usually require gathering information manually from separate sources and systems, then analyzing and validating it, and may result in duplicative efforts that drive up operational costs and compliance risk as customer data regulations evolve.

It is also important to note that collected data evolves over time, which re-initiates the same KYC steps on the updated data.

Distributed Ledger Technology can be leveraged to solve all of these challenges

By its very definition, distributed ledger technology (DLT) supports a decentralized model of record-keeping. As such, DLT paired with smart contracts is particularly well suited to setting up and maintaining a shared record of information through workflows involving different stakeholders. Furthermore, the immutable nature of DLT, along with its ability to create a secure yet logically shared environment, in open or private networks, creates a record-keeping system where multiple parties have real-time access to permissioned data and can engage simultaneously in processes.

Digital Asset partnered with IntellectEU on a Daml-based KYC application deployed on DLT.

IntellectEU created a Daml-based KYC application that can be implemented across a capital market, where data is uploaded to a DLT platform, integrated with (or built as an extension of) a legal entity identifier (LEI) register and a personal identity register. The KYC/due diligence information would be stored, validated, and maintained on a shared record according to defined workflows involving both the onboarded and onboarding parties and enforced via Daml smart contracts. Here is how the solution would function:

First, Daml assigns actions to onboarded/onboarding parties such as documentation submission requirements prior to market participation. The Daml smart contract shares that documentation with the permissioned parties on a distributed ledger in real-time. The DLT platform ensures data immutability so parties only have to submit documentation once. Ad-hoc requests for documentation can also be executed using Daml which updates the ledger without losing previous documentation/records from the initial onboarding process.

Once a KYC/due diligence process has been performed by one provider in the capital market, the shared record on the DLT platform can be made accessible to any other capital market provider willing to engage with the same entity. A new onboarding party will run its own KYC process from the shared record and will confirm, correct, or add information to the shared record. Financial services providers/intermediaries must still perform their own KYC checks when onboarding new clients, but the solution drastically accelerates the process.  

KYC Workflow for the investment fund industry

It’s worth noting that access to the shared record requires strict consent, granted by the onboarded company through the smart contract. Further, as each market, service type, or financial product type has its own specificities and risks, the solution enables different standardized, but customizable, data templates and workflows.

A DLT KYC/due diligence solution is a founding step to enable the set-up of beneficial ownership registers and to create a fluid, but risk-controlled access to a wide variety of markets. It is also a key part of a wider LEI implementation. 

The advantages that a DLT-with-Daml KYC solution generates, in terms of cost reduction and customer experience for both the onboarding and onboarded organizations, should push for its capital-market-wide adoption.

You can learn more about how Daml and DLT can address KYC process challenges and costs

…including how the solution can comply with additional regulatory requirements, such as the European General Data Protection Regulation and its “right to be forgotten”. Download the “Digitally Transforming Securities Services” e-book from IntellectEU and Digital Asset on DLT, smart contracts, and their practical applications, and learn how IntellectEU can bring greater efficiency to capital market players through Daml-driven applications.

Download the e-book for free

How to Build Distributed Applications Today with the Option to Decentralize Later

Digital Asset launches Daml for PostgreSQL

Distributed applications have been popularized by distributed ledger technologies (DLT), especially blockchain. With that growing popularity, misconceptions have surfaced, leaving many organizations believing they can only build multi-party applications on a blockchain. We are here to dispel the myth: blockchain is not the only solution. You can build distributed applications using different technologies, including traditional databases, with a path to decentralization when you are ready.

Introducing Daml for PostgreSQL

Most companies struggle, internally and externally, with historically complex business processes involving numerous touch points, data governance requirements, and cross-departmental workflows. Keeping these systems in sync is a complex matter, often only addressable by manual checking and reconciliation. There is also the financial burden: as companies expand and modernize, manual processes can inhibit revenue-generating activities and increase operational expenditure. And not every company is ready to deploy a DLT-based platform, or is still working through budget and IT approvals to get a blockchain project off the ground.

For enterprises wanting to improve collaboration and automate repeatable workflows using distributed applications, Digital Asset offers an alternative to DLT and blockchain. In cases where you don’t need a fully decentralized blockchain network, you can still use the power of Daml, an immutable, programmable smart contract language, without the extra complexities, and pair it with PostgreSQL. The result is Daml for PostgreSQL: a DLT-like environment without the DLT. Daml for PostgreSQL builds on existing infrastructure while providing additional privacy and controls to all application users through the use of Daml smart contracts. Daml digitizes workflows and models business logic into distributed applications. Daml for PostgreSQL also acts as a stepping stone to prove out blockchain and DLT investments.

Don’t let the financial and operational overhead of managing a new technology stack get you down. Start unlocking innovation today with Daml for PostgreSQL. Deploy production-ready and multi-party applications on current infrastructure and migrate anytime when business needs change without losing valuable code.

Learn more about Daml for PostgreSQL here.

Porting Chainstack’s `No Ticket Scalping` CorDapp to Daml for Corda

This article discusses the steps taken to port Chainstack’s ‘No Ticket Scalping’ CorDapp to a Daml application running on a Daml for Corda deployment. It concentrates on the technical steps required; for a more in-depth description of the application itself, see Chainstack’s write-up. The ported application is available on GitHub.


The approach taken was first to fully understand the CorDapp, then to switch it to a Daml-based application runnable on a Daml for Corda deployment, following the steps below.

  • Port the CorDapp contracts to Daml
  • Drop the web application portion into a new project
  • Switch the web application to use a Daml Ledger API based client in place of the Corda RPC one
  • Code-generate the Daml contract bindings
  • Implement the controller methods using the Daml ledger client/bindings
  • Test the Daml application using the Daml Sandbox
  • Deploy to Daml for Corda


The CorDapp was not intended to be a fully featured application, but rather a minimal example of how the different components integrate. The porting process continues this theme by primarily highlighting the techniques used to port rather than enhancing the capabilities of the app.

Porting the CorDapp Contracts to Daml

The application only has one contract. This contract signifies one ticket distributor giving another distributor the right to sell a number of tickets for a particular event. This is shown in Daml below:

To see how this might fit into a broader business flow modeled in Daml, have a look at the ticket issuance Daml example. This demonstrates how Daml is used to construct a chain of obligation from promoters, to venues and artists, then to distributors, and finally the fans themselves.

Move Over the Web Application Code

As we are no longer constrained by CorDapp patterns, we opted for a Kotlin DSL-based Gradle project. We add dependencies on Kotlin and Spring Boot, which are used by the web application, and replace the Corda dependencies with a dependency on the Daml RxJava client. With this done, we just drop in the source code from the web application itself.

Connect to the Daml Ledger Client

This is simply a case of constructing a DamlLedgerClient in place of a CordaRPCClient:

The client connection is made following the construction of the Spring Boot application.

Daml Contract bindings

With Corda, the smart contract state is represented as `ContractState` classes, which are available to application Java code. Similarly with Daml, rather than using generic data structures to construct and parse Daml commands and contracts, it is possible to generate Java bindings that make the Daml contracts available in Java. We instruct the Daml command line assistant to generate code for us:

Generation of java bindings for Daml contracts 
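As a sketch, the invocation looks something like the following; the DAR path and Java package prefix are illustrative, and exact flags can vary by SDK version:

```shell
# <dar-path>=<java-package-prefix>; both values below are made up for illustration.
daml codegen java \
  ./build/no-scalping.dar=com.example.noscalping \
  --output-directory=src/generated/java
```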

Implement the Controller Methods

The original application was well designed and had a clean web service based interface between the web application and the backend services. This meant that the only changes needed to the UI were a few name changes.  

The controller methods needed to be re-implemented in terms of Daml contracts rather than Corda state.

How the /api/noScalping/distributions web service endpoint is served


To debug the initial version of the ported application, we ran our Daml code in the Daml Sandbox, available in the Daml SDK. This quickly allowed problems to be identified and fixed. Once the model had been debugged (in other words, once the Daml code itself was internally consistent) the application was ready for deployment onto one of Chainstack’s on-demand Daml for Corda nodes. 

The ported web application

Deploying onto Chainstack’s on-demand Daml for Corda nodes

Now that we have the application ported, it’s time to get it running on Daml for Corda. Start by requesting a Daml for Corda node to be provisioned.

Provisioning a Chainstack Daml for Corda Node

Once the node is set up, it is possible to log into the Chainstack console and view the running nodes. Once provisioned, the Daml for Corda nodes will have the Daml for Corda CorDapps pre-installed and a Daml Ledger API running in the background.

The Chainstack node will come with Daml CorDapps loaded

Daml for Corda nodes will have, in addition to the Corda RPC port, a Daml Ledger API port. This is the port that should be used when issuing Daml ledger commands to upload DAR files or create users. For more detailed instructions about the ledger initialization, see the README for the ticket application.

The ticket.dar file being uploaded to the Daml for Corda node in the example above running on port 10000.
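With the standard Daml assistant installed locally, the upload might look like this sketch; the hostname is a placeholder, and port 10000 is the Daml Ledger API port from the example:

```shell
# <node-host> is your provisioned Chainstack node's hostname.
daml ledger upload-dar --host <node-host> --port 10000 ./build/ticket.dar
```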


In this case, starting with a well-designed application and using the approaches described above allowed this modest application to be ported from Kotlin Corda to Daml for Corda in around a day.

In doing so, we have managed to replace the contracts and workflows sub-projects with a single Daml file, which makes the application easier to reason about and maintain.

To see or run the ported application, see the GitHub repository. Furthermore, in the Daml marketplace you can find more information on Daml for Corda by Chainstack:

Daml for Corda by Chainstack

Secure Daml Infrastructure – Part 1 PKI and certificates

Daml is a smart contract language designed to abstract away much of the boilerplate and lower-level issues, allowing developers to focus on how to model and secure business workflows. A Daml ledger and Daml model define how workflows control access to business data, what privacy guarantees apply, and what actions parties can take to update the state model and move workflows forward. Separately, the specific underlying persistence stores (whether DLT or traditional databases) provide various tradeoffs in terms of security guarantees, trust assumptions, and operational and infrastructure complexity.

In this post we focus on lower-level infrastructure and connectivity concerns: how the processes making up the Ledger client and server components authenticate and authorise, at a more coarse-grained level, the connections and command submissions to a ledger. In particular, we focus on:

  • Secure HTTPS connections over TLS with mutual authentication using client certificates.
  • Ledger API submission authorization using HTTP security tokens defining the specific claims about who the application can act as.

This entails some specific technologies and concepts, including:

  • Public Key Infrastructure (PKI), Certificate Authorities (CA) and secure TLS connections
  • Authentication and authorization tokens, specifically JSON Web Tokens (JWT) and JSON Web Key Sets (JWKS)

There are many other aspects of deploying and securing an application in production that we do not attempt to cover in this post. These include technologies and processes like network firewalls, network connectivity and/or exposure to the Internet, production access controls, system and OS hardening, and resiliency and availability. These are among the standard security concerns for any application and are an important part of any deployment.

To demonstrate how connectivity security can be implemented and tested, we have provided a reference application that implements a self-signed PKI CA hierarchy and shows how to obtain and use JWT tokens from an OAuth identity provider (we use Auth0 as a reference, but Okta, OneLogin, Ping, or other OAuth providers would work as well) or from a local JWT signing provider. This builds on the previous post, Easy authentication for your distributed app with Daml and Auth0, which focused on end-user authentication and authorisation.

The deployed components are detailed in the following diagram:

At the center is the Ledger Server, which in this case uses Daml Driver for PostgreSQL for persistence. A PKI service is provided to generate and issue all TLS, client, and signing certificates. We connect to a token provider, either an OAuth service like Auth0 or a self-hosted JWT signing service (for more automated testing scenarios like CI/CD pipelines). Clients connecting to the Ledger include end-user applications, in this case a Daml React web application, as well as automation services (in the form of Daml Triggers and/or Python automations).

We have included the JSON API service for REST-based application access, which requires a front-end HTTPS reverse proxy, in this case NGINX, when operated in secure mode. We have also included an Envoy proxy for gRPC, which allows the implementation of many other security concerns (for example, DoS rate-limiting protection, authentication token mapping, and auditing, among many others). In this case it is a simple forwarding proxy, as these capabilities are out of scope for this article.

For completeness we also include how a developer or operator might connect securely to the Ledger for Daml DAR uploads or using Navigator. In practice, most developers develop in unauthenticated mode against locally hosted ledgers.

Secure Connections

The first step to ensure a secure ledger is to enforce secure connections between services. This is accomplished with TLS connections and mutual authentication using client certificates.

If you understand the concepts of PKI, TLS and certificates, you may choose to skip the following backgrounder.


Transport Layer Security (TLS) is a security protocol that allows an application (often a web browser, but also automation services) to connect to a server and negotiate a secure channel between the two of them. The protocol relies upon Public Key Infrastructure (PKI), which in turn utilises public/private keys and certificates (signed public keys) to create trust between the two sides. TLS, often seen as HTTPS (HTTP over TLS) in web browsers, is most often used to allow an application or browser user to validate that they are connecting to an expected endpoint – the user can trust they are really connecting to the web site they want. However, TLS also allows a mutual authentication mode, where the client also provides a client certificate, allowing both sides to authenticate each other – the server now also knows which client is connecting to it.

So what are private and public keys and a “certificate”? Cryptography is used to create a pair of keys (large numbers calculated using a number of known algorithms – e.g. RSA or EC). The keys are linked via the algorithm but knowledge of one does not allow calculation of the other. One forms the “private” key – known only to the owner of the key pair and a “public” key that can be distributed to anyone else. Related cryptography algorithms can then be used to sign or encrypt data using one of the keys and the receiving party, holding the other key, can then decrypt or validate the sender or signature. 

In many cases a public key (a large number) by itself is insufficient or impractical, so a mechanism is needed to link metadata or attributes about the key holder to the public key. This might include identity name and attributes, key use, key restrictions, key expiry times, etc. To do this, a Certificate Authority (CA) signs the metadata together with the public key, resulting in a certificate. Thus anyone receiving a copy of the certificate can get the public key and data about how long it is valid for, who the owner is, and any restrictions on use. Various industry standards, in particular X509, are used to structure this data. Applications are then configured to trust one or more Certificate Authorities and subsequently validate certificates and check that they are not revoked. Modern web browsers come with a default set of trusted CAs that issue public certificates for use on the Internet.
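The sign-and-verify flow described above can be tried out with plain OpenSSL; this is a minimal sketch with illustrative file names:

```shell
# Create an RSA key pair; the private key file also embeds the public half.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem
# Extract the public key, which can be shared freely.
openssl pkey -in priv.pem -pubout -out pub.pem
# Sign a message with the private key ...
printf 'hello ledger\n' > msg.txt
openssl dgst -sha256 -sign priv.pem -out msg.sig msg.txt
# ... and verify the signature using only the public key.
openssl dgst -sha256 -verify pub.pem -signature msg.sig msg.txt
```

The final command prints “Verified OK”, demonstrating that a holder of only the public key can validate a signature made with the private key.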

As a general best practice, Certificate Authorities are structured into a hierarchy of CAs – a root CA that forms a root of trust for all CAs and certificates issued from this service, and sub or intermediate CAs that issue the actual certificates used by applications. Clients are configured to trust this root so they can validate any certificate they come across against this hierarchy. The Root CA is frequently set up with a long lifetime and the private keys are stored very safely offline in secure vaults. Full operation and maintenance of a production PKI involves many security procedures and practices, along with technology, to ensure the ongoing safety of the CA keys. “Signing ceremonies” involve several trusted individuals to ensure no compromise of the keys.

Testing PKI and Certificates

Note: the following code snippets come from the bash script in the reference app repo. Please see the full code for all the necessary steps – we have excluded some details for brevity.

To demonstrate use of TLS and mutual authentication, we set up a demonstration PKI Certificate Authority (CA) hierarchy and issue certificates. This is diagrammed below:

Demonstration of the PKI Certificate Authority (CA) hierarchy

The Root CA forms the root of the trust hierarchy and is created as follows:

We have created a Root CA called (the example allows you to configure the name – default for $DOMAIN is a test domain of “” for “Acme Corp, LLC”) and an X509 name of /C=US/ST=New York/O=Acme Corp, LLC/ and it has a lifetime of 7300 days or 20 years.

We won’t go into the specifics, but the Root CA openssl.conf file defines extension attributes in its [v3_ca] section – basic constraints (i.e. allowed to act as a CA) and key usage for signatures and key signing.
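For readers without the reference repo to hand, here is a minimal, self-contained approximation of the root-CA step using bare openssl commands (OpenSSL 1.1.1+ for -addext). The subject name and 7300-day lifetime mirror the description above; everything else is illustrative:

```shell
# Root CA private key (keep this offline in a real deployment).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out root.key.pem
# Self-signed root certificate: CA:true, 20-year (7300-day) lifetime.
openssl req -x509 -new -key root.key.pem -sha256 -days 7300 \
  -subj "/C=US/ST=New York/O=Acme Corp, LLC/CN=Acme Root CA" \
  -addext "basicConstraints=critical,CA:true" \
  -addext "keyUsage=critical,keyCertSign,cRLSign" \
  -out root.cert.pem
# Inspect the result.
openssl x509 -in root.cert.pem -noout -subject -dates
```

The real script drives the same extensions from the openssl.conf [v3_ca] section instead of -addext.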

Since the Root CA is often offline and securely protected, it is common for an Intermediate CA to be created to issue the actual certificates used for TLS, client authentication, and code signing. There are many tradeoffs in how to structure a CA hierarchy, including separate CAs for specific uses, organisational structures, etc., but this is beyond the scope of this blog.

For this reference, we create a single Intermediate CA as follows:

Here the Intermediate private key is created. We then generate a Certificate Signing Request (CSR), which is sent to the Root CA for validation and signing, and an Intermediate CA certificate is returned. In reality, the CSR is submitted to the Root CA and there would be a set of out-of-band security checks on the requestor before the root would sign and return a certificate. You will see this whenever you request a certificate from a public trusted CA – often Domain Validation (DV), which relies upon DNS entries or web files to verify you own the domain or server.

Similarly to the Root certificate, attributes ([v3_intermediate_ca] in the Root openssl.conf) are set on the certificate for name, constraints, key usage, lifetime (10 years), and the access points to retrieve the Certificate Revocation List (CRL) and the OCSP service. These last two are used by clients to check the current status of a certificate and whether the CA has revoked it. We also set a pathlen constraint of 0, which states that no sub-CAs are allowed from this Intermediate CA; only endpoint certificates can be issued.
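Under the same illustrative naming, the intermediate step looks roughly like this; the root CA is recreated here so the snippet is self-contained:

```shell
# Root CA (abridged from the previous step).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out root.key.pem
openssl req -x509 -new -key root.key.pem -sha256 -days 7300 \
  -subj "/O=Acme Corp, LLC/CN=Acme Root CA" \
  -addext "basicConstraints=critical,CA:true" -out root.cert.pem

# Intermediate CA key and Certificate Signing Request (CSR).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out int.key.pem
openssl req -new -key int.key.pem \
  -subj "/O=Acme Corp, LLC/CN=Acme Intermediate CA" -out int.csr.pem

# The root signs the CSR; pathlen:0 forbids any further sub-CAs.
cat > int.ext <<'EOF'
basicConstraints = critical, CA:true, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
EOF
openssl x509 -req -in int.csr.pem -CA root.cert.pem -CAkey root.key.pem \
  -CAcreateserial -days 3650 -sha256 -extfile int.ext -out int.cert.pem

# Validate the two-tier chain.
openssl verify -CAfile root.cert.pem int.cert.pem
```

The last command prints “int.cert.pem: OK”, confirming the intermediate chains to the root.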

Now that we have the CA hierarchy configured, we can issue real endpoint certificates. Let’s create a certificate for the Ledger Server:

IMPORTANT NOTE: We generated an RSA private key, with the specific difference that we created a PKCS#8 format key file using the “genpkey” option, which requires the algorithm to be specified. We need to do this because many TLS frameworks (including those used by the Ledger Server, PostgreSQL, and NGINX) will only work with PKCS#8 format key files. The difference between PKCS#8 and PKCS#1 is that the former also contains metadata about the key algorithm and supports more than RSA, e.g. Elliptic Curve (EC). If you look at the files, the main visible difference is the first line.


PKCS#8 <- Required format for TLS Frameworks
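The point about key formats can be checked directly: genpkey emits PKCS#8, and openssl pkcs8 -topk8 converts an existing key (file names here are illustrative):

```shell
# Generate the Ledger Server key directly in PKCS#8 form.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out server.key.pem
head -1 server.key.pem    # -----BEGIN PRIVATE KEY-----  (PKCS#8)
# An existing key in another format can be converted instead of regenerated:
openssl pkcs8 -topk8 -nocrypt -in server.key.pem -out server.pk8.pem
head -1 server.pk8.pem    # -----BEGIN PRIVATE KEY-----
```

A legacy PKCS#1 key would instead begin with “-----BEGIN RSA PRIVATE KEY-----”.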

A client certificate is constructed via similar steps:

An additional attribute that we set on the server and client certificates is subjectAltName (SAN), which is generally the IP and/or DNS name of the server. Modern browsers now enforce this practice, tying a certificate to a specific destination DNS name, not just an X509 name.
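A sketch of setting the SAN with openssl’s -addext (OpenSSL 1.1.1+). The certificate is self-signed here purely for brevity – in the reference setup the CSR would be signed by the Intermediate CA – and the names are illustrative:

```shell
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out ledger.key.pem
# Self-signed certificate carrying both DNS and IP SAN entries.
openssl req -x509 -new -key ledger.key.pem -sha256 -days 365 \
  -subj "/O=Acme Corp, LLC/CN=ledger.acme.test" \
  -addext "subjectAltName=DNS:ledger.acme.test,IP:127.0.0.1" \
  -out ledger.cert.pem
# Confirm the SAN made it into the certificate.
openssl x509 -in ledger.cert.pem -noout -ext subjectAltName
```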

The scripts generate certificates for each endpoint, specifically the PostgreSQL DB, Ledger Server, web proxy, Envoy proxy, and auth service.

In the reference app directory this is seen as a file system hierarchy:

  • Certs
    • Root
      • Certs
      • Private
      • Csr
    • Intermediate
      • Certs
      • Private
      • Csr
    • Server
      • Certs
      • Private
    • Client

Enabling Certificates for reference app services

Well done for getting this far. We now have a 2-tier CA hierarchy created and have issued TLS certificates and a client certificate. How do we enable their use within each of the services?

PostgreSQL DB

We use Docker images to run PostgreSQL for persistence of the ledger state. Let’s first look at the settings required to enable TLS on PostgreSQL:

This tells PostgreSQL where to find the server certificate, private key, and the CA trust chain (the Root and Intermediate public certificates). The remaining parameters tell PostgreSQL to enforce TLS 1.2 as a minimum and a secure set of ciphers.
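For reference, the equivalent settings expressed directly in postgresql.conf look roughly like this; the paths are illustrative, and ssl_min_protocol_version requires PostgreSQL 12+:

```conf
ssl = on
ssl_cert_file = '/certs/server.cert.pem'
ssl_key_file  = '/certs/server.key.pem'
ssl_ca_file   = '/certs/ca-chain.cert.pem'   # root + intermediate certificates
ssl_min_protocol_version = 'TLSv1.2'
ssl_ciphers = 'HIGH:!aNULL:!MD5'             # example secure cipher list
```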

The full Docker command is:

This runs the PostgreSQL 12 Docker image, maps the certificates from the local file system into the container, sets a non-default password (a security good practice), and exposes port 5432 for use.

Ledger Server

The Ledger Server now needs to be run with TLS enabled. This is done via:

For certificate use, the key parameters are:

  • --client-auth require
  • --cacrt <CA chain>
  • --pem <private key file>
  • --crt <public certificate>
  • --sql-backend-jdbcurl

The client-auth parameter enforces the use of client mutual authentication (it can also be none or optional); clients will need to present a client certificate for access. The cacrt, pem, and crt parameters enable TLS and link to the CA hierarchy trust chain. The key point for sql-backend-jdbcurl is the “ssl=on” parameter, which enables use of a TLS connection to the database.
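Put together, an invocation might look like the following sketch; the certificate paths, database name, and credentials are illustrative, and flag spellings can vary between SDK versions:

```shell
daml sandbox \
  --client-auth require \
  --cacrt certs/ca-chain.cert.pem \
  --pem certs/server.key.pem \
  --crt certs/server.cert.pem \
  --sql-backend-jdbcurl "jdbc:postgresql://localhost:5432/daml?user=daml&password=ChangeMe&ssl=on"
```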

Navigator, Trigger and other Daml clients

The Daml services that connect to a Ledger, including Navigator (the web-based console for the ledger), Triggers (Daml automation), and the JSON API (REST API proxy), need to be enabled for TLS and client certificate access (if required).

If we use an example of a Daml Trigger:

The important parameters are:

  • tls
  • pem, crt, cacrt

The “tls” parameter enables use of TLS for Ledger API connections. The pem and crt parameters point to the private and public key of the client certificate that identifies the application. cacrt is the same as for the Ledger Server and points to the CA trust chain.
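As a sketch, a trigger started with TLS and a client certificate might be launched as follows; the host, DAR path, trigger name, and party are all illustrative:

```shell
daml trigger \
  --ledger-host ledger.acme.test \
  --ledger-port 6865 \
  --tls \
  --pem certs/client.key.pem \
  --crt certs/client.cert.pem \
  --cacrt certs/ca-chain.cert.pem \
  --dar .daml/dist/triggers.dar \
  --trigger-name Example:autoAccept \
  --ledger-party Alice
```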

Other applications will have framework or language specific ways to enable TLS and client certificates.

Summary & Next Steps

So far we have described how to set up and configure a two-tier PKI hierarchy and issue certificates. We’ve also described how to enable services to use these certificates.

The provided scripts can be used to create and issue further certificates allowing you to try out various configurations. The scripts have been parameterised so you can change some aspects of the environment for your purposes:

  • DOMAIN=<DNS Domain, default:>
  • DOMAIN_NAME=<Name of company, default: “Acme Corp, LLC”>
  • CLIENT_CERT_AUTH=<enforce mutual authentication, TRUE|FALSE>
  • LEDGER_ID=<Ledger ID, CHANGE FOR YOUR USE!, default: “2D105384-CE61-4CCC-8E0E-37248BA935A3”>

Further details on steps and parameters are described in the reference documentation.

Client Authorization, JWT and JWKS

The next step is how to authorise the client’s actions. This uses HTTP security headers and JSON Web Tokens – the topic of the next blog, coming soon. If you are interested in Daml and Auth0, you can check the previous post here:

Easy authentication for your distributed app with Daml and Auth0

Answering the call: Distributed Twitter made easy!

Jack Dorsey, the CEO of Twitter, sent a series of tweets at the end of 2019 saying that Twitter is researching a move towards a decentralised version of Twitter:

As part of these tweets, Jack lays out his reasons for moving towards the decentralisation of Twitter and social media. The first reason Jack mentions is the challenges Twitter is facing with its current centralised solution, specifically that applying global policy to a centralised data set is unlikely to scale:

Having a centralised solution for a global product is challenging in today’s ever-changing world. How do you ensure that your data meets the standards set by the multiple jurisdictions in which you operate? How do you ensure that you are not applying the policies of one jurisdiction to the data of another? What do you do when you have conflicting policies to enforce?

From a user’s perspective, once you post a message on Twitter (or other social media platforms) you lose all control over that content – you don’t know how your content is monetised, to whom it is sold, or for what purpose.

Benefit of Daml and Smart Contracts

Daml is a purpose-built smart contracts platform aimed at creating distributed applications with strong privacy and integrity guarantees. Daml applications can run on multiple DLTs, blockchains and databases without requiring any changes – write once, run anywhere. You can use your favorite programming stack (React, JavaScript, Java, etc.) to work with the Daml smart contract layer.

Daml uses the concept of “parties” to represent actors, which can be individuals or entities. The disclosure and distribution of data is controlled by the roles these actors play. Actors can have rights (they can take actions), be obligable (they are accountable for the data they create), and be observers (they can see data but have no obligations or rights).

Since the workflows we define in Daml can run on databases and blockchain alike, developers and business analysts need not think about the underlying platform intricacies. They can instead focus their energies on defining the business logic, which will automatically control the distribution and disclosure of data that the workflow generates. Using a chess analogy, focus on winning the game, not on negotiating the rules and setup of the board.


In this blog post, we’re going to focus on providing the minimum functionality (i.e. a Minimum Viable Product) that gives the user full control over their profile and the content they create. We’re going to use Daml to model the smart contracts and explicitly define what actions each party can perform. Since we’re using Daml, we can write the application once, test it using the Sandbox (which comes with Daml), and later run the application on our DLT of choice (which will be covered in a later blog post!). The frontend will be written in React, using the Daml JavaScript library to connect to the ledger.

Before diving into some of the implementation details, here’s a video showing the full functionality of the prototype:

Time for some Smart Contracts!

Central to giving the user more control over their profile and content is the user profile. Let’s create a basic user profile which allows the user to:

  1. Be the owner of their profile and maintain it;
  2. Accept or reject requests from other users to follow them;
  3. Create posts in the system.

We also need to provide a mechanism for users to find each other in the system:

The ‘signatory’ specifies which parties consent to the creation of a contract. On our contract, we only require the consent of the user. However, other parties must be able to view this contract (otherwise this won’t be a very social network!), which is where `observer` comes in. We’ve specified that other users who have made a request to follow a user may view the contract, as may a ‘userDirectory’ party. The ‘userDirectory’ party allows users to find each other via the front-end public user directory, so they can make a request to become a follower of a user. This will be a specific read-only user dedicated to the public user directory. We also define a key (the username), which allows the front-end to look up this contract directly.

To ensure the observers have a read-only view of this contract, we simply don’t provide any choices for them to exercise. That was easy!

Let’s define the choices that only the owner of the contract (i.e., the user) can perform against this contract:
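As a hedged sketch (choice names, arguments, and bodies are assumed; see the repository for the real definitions), these owner-only choices sit inside the template’s `where` block and might look like:

```daml
    controller username can
      UpdateProfile : ContractId User
        with newDescription : Text
        do create this with description = newDescription

      AcceptFollowRequest : ContractId User
        with follower : Party
        do create this with
             followers = follower :: followers
             pendingRequests = filter (/= follower) pendingRequests
```

Each choice consumes the old contract and recreates it with the updated fields.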

The “controller username can” syntax allows us to explicitly specify the set of parties which may modify our contract and the modifications they can make. In our case, we want to limit the choices to the owner of the contract (i.e., the end user), meaning no other party will be able to modify our contract.

Some of the content in the choices has been left out for brevity; please take a look at the open source repository for more details on the implementation.

Interacting with our new models

For the front-end of our “Distributed Twitter” application, we are going to use the React framework (and TypeScript). Thankfully, Daml provides libraries for multiple languages with the functionality to create contracts, exercise choices, stream updates, and more.

To get started with interacting with the ledger, let’s create TypeScript representations of our models and choices. Luckily, Daml can generate these for us via the `daml codegen js` command.

Running this command generates all the boilerplate code for the Daml model templates specified earlier, in our language of choice, in this case JavaScript/TypeScript. It’s also possible to generate code in other languages such as Java and Scala (please see the docs for more information).

For example, this command generates the following type in TypeScript, based on our User template defined earlier in Daml:
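A hypothetical sketch of the shape the generated code takes (field names assumed to match the template; the real output also includes serialization metadata):

```typescript
// Sketch of what `daml codegen js` might emit for the User template.
type Party = string;

interface User {
  username: Party;
  description: string;
  followers: Party[];
  pendingRequests: Party[];
}

// The generated types let the TypeScript compiler catch malformed payloads:
const alice: User = {
  username: "Alice",
  description: "Exploring Daml!",
  followers: [],
  pendingRequests: [],
};

console.log(alice.username); // prints "Alice"
```

Because the types mirror the Daml templates exactly, a payload with a missing or misspelled field fails to compile rather than failing at runtime.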

Constants relating to the choices which may be exercised against a given contract are also generated, which greatly simplifies exercising choices:

To create a new contract on the ledger, we simply execute the following:
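The call itself is a one-liner. The sketch below uses a tiny in-memory stand-in for the Ledger instance (in the real app it comes from the `useLedger()` hook and its `create` returns a Promise), just to illustrate the shape of the call:

```typescript
type Party = string;
interface User { username: Party; description: string; followers: Party[] }

// In-memory stand-in for the object returned by useLedger():
const contracts: User[] = [];
const ledger = {
  // The real Ledger.create takes the generated template object and
  // returns a Promise of the created contract; this stand-in is synchronous.
  create(template: string, payload: User): User {
    contracts.push(payload);
    return payload;
  },
};

// As in the component: the current party would come from useParty().
const party: Party = "Alice";
ledger.create("User", { username: party, description: "", followers: [] });

console.log(contracts.length); // 1 contract on our stand-in ledger
```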

We have our first smart contract on the ledger in only a few lines of code! We used the ‘useLedger’ React hook to get an instance of the ledger and ‘useParty’ to get the current user connected to the ledger. To create the contract on the ledger, we call the ‘.create’ function on the ledger instance with the target contract type and the details of the contract we wish to create.

Now let’s modify this contract by exercising a choice defined against it:
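Again as a sketch with an in-memory keyed store standing in for the ledger (the real `exerciseByKey` lives on the Ledger instance from `useLedger()` and returns a Promise; choice and field names here are assumptions):

```typescript
type Party = string;
interface User { username: Party; description: string }

// Keyed in-memory stand-in: User contracts are keyed by username.
const byKey = new Map<Party, User>();
byKey.set("Alice", { username: "Alice", description: "old bio" });

function exerciseByKey(
  choice: string,                       // e.g. a generated choice constant
  key: Party,                           // the contract key (the username)
  argument: { newDescription: string }  // the choice's parameters
): User {
  const current = byKey.get(key);
  if (!current) throw new Error("no contract with key " + key);
  const updated = { ...current, description: argument.newDescription };
  byKey.set(key, updated); // the old contract is archived, a new one created
  return updated;
}

exerciseByKey("UpdateProfile", "Alice", { newDescription: "new bio" });
console.log(byKey.get("Alice")?.description); // "new bio"
```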

We use the ‘exerciseByKey’ function, which allows us to exercise a choice against an existing contract identified by its key. Our User contract is identifiable by its ‘username’ key, so by providing this, along with the choice we wish to exercise and its parameters (in this case, the updated ‘User’), we can modify the contract.

This is the UI for updating a user

We’ve seen how easy it is to create contracts and modify them (via choices), what about viewing contracts? Of course the Daml library provides functions to retrieve contracts from the ledger! The React library contains React hooks which allow us to stream changes directly from the ledger which means we can update our UI without having to refresh the page! 

The ‘useStreamQuery’ hook streams all the User contracts you can observe, which will be your own user contract. In the case of the public user directory, all User contracts are displayed in a read-only view. Any changes to the contract set will update the stream, and React will re-render the UI to reflect the change. It’s also possible to provide additional parameters to ‘useStreamQuery’ to filter on a subset of contracts.

Here are some more screenshots from the prototype. Feel free to explore its implementation at our open source repository:

This is a follow request from a user
This is a reply to a post
A user’s view of their profile
A comment thread


In conclusion, we’ve learnt how simple it is to use Daml to create smart contracts and how to return control and ownership of data to the end user. We’ve also learnt how to create, modify, and view these contracts from another layer (in our example, the UI layer), and how Daml takes care of the boilerplate code and provides features that make it simple to interact with the underlying ledger.

Please take a look at the open source repository for further implementation details and all pull-requests are welcome:

Check the open source repository

Also check this application in the OpenWork website:

Check the OpenWork Feed

Community Update – October 2020

WOW. This sure has been a busy month in the world of Daml. We’ve got a ton of new docs, some bug fixes, and an assortment of blogs and posts. Check it all out below.

What’s New in the Ecosystem

Update: 1.6.0 has been released and you can read the full release notes here.

We’ll be holding two community open door sessions for the 1.6.0 RC, one for US-based timezones and one for APAC. Register here for APAC/Europe morning timezones and here for US/Europe evening timezones. Both will be on October 12th, so go sign up! 📝

Bugs and Rewards and What Even Is a Blockchain?

So first off some shoutouts to all our lovely community members who helped us find and fix bugs! They’ve all received the coveted Squidly Devourer of Bugs badge for these!

  • Huge thanks to György for discovering an overflow issue and to Sofia for fixing it (#7393)! There are more details on this bug in the release notes.
  • And another thanks to György for discovering an issue in qualified name handling (#7544)
  • Thanks to Rocky for pointing out that we didn’t document setting environment variables in Windows (#106)

Bart Cant won our second Community Recognition Ceremony. We’ve shipped him a hoodie and are producing his bobblehead which he’ll hopefully be able to show off in a few weeks.

We’re also diving deep on how we think about blockchains/distributed ledgers as a class of technology. Have your own thoughts? Chime in on the forum.


We’ll be participating in two hackathons 👩‍💻👨‍💻 in November!

Corporate News

ISDA will be using Daml to pilot their Common Domain Model (CDM) for the clearing of interest rate derivatives. Read the full press release here.

Omid Mogharian recently wrote a post about improving data distribution using Natix EdgeDrive, a solution that’s partially powered by Daml. ⚙️

Daml is now on Chainstack, a platform for deploying applications across multiple different networks! Currently they support Daml on Corda and plan to add more soon.

And lots more news in Digital Asset’s latest version of inter/connected

Blogs and Posts

Confused about public and private ledgers and blockchains? Want to explain them to your family and friends? Check out Anthony’s write-up on the key difference. Or hop into our in-depth discussion over on the forum. 💬

Phoebe showed us how to run Daml applications across multiple ledgers using Canton.  ⛓⛓

KC Tam took the time to walk us through Daml’s Propose-Accept workflow, you can read more of his thoughts here.

As I’m sure you’ve heard, the Tokyo Stock Exchange recently had a day-long outage during trading hours 📈. We were subsequently inspired to write a blog post about how distributed ledgers solve for this class of operational issues.

György published the second part of his series on Daml’s Finance Library. In this one he shows how to have a dynamic set of signatories, and covers advanced ledger-updating techniques.

Zohar Hod gave a webinar on data control and monetization, view the recording here.

Sofus did some more work on his Business Process DSL.

Other Fun

Listen to Tim talk about Daml on Block, stock, and barrel. Easily one of the best names ever for a podcast 🎤.

Yuval started a new Damler Strava club so go get your exercise 🚴‍♀️ on with a bunch of us at DA! We’ve already ballooned from 9 to 28 members.

Richard adapted a song to be about Daml which instantly makes it better.

Release Candidate for Daml SDK 1.6.0

The preliminary release notes and installation instructions for Daml SDK 1.6.0 RC can be found here.

1.6.0 RC Highlights

What’s Coming

We are continuing to work on performance of the Daml integration components and improving production readiness of Daml Ledgers, but there are exciting features and improvements in the pipeline for the next few releases as well.

  • The Trigger Service will reach feature completion and move into Beta
  • The authentication framework for Daml client applications (like the Trigger Service) is being revisited to make it more flexible (and secure!)
  • The build process for Daml client applications using the JavaScript/TypeScript tooling is being improved to remove the most common error scenarios
  • Daml’s error and execution semantics are being tidied up with a view towards improving exception handling in Daml
  • Daml will get a generic Map type as part of Daml-LF 1.9

Release of Daml SDK 1.6.0

Daml SDK 1.6.0 has been released on October 15th 2020. You can install it using:

daml install latest

Note: The 1.6.0 RCs had an issue when migrating from pre-1.0.0 ledgers. This is fixed in the stable release.

There are no changes that require action for existing applications. However, we have fixed a bug that changes behavior (see #7393), and we are deprecating some functionality which may be removed from Daml after a minimum of 12 months, so please read on below.

Interested in what’s happening in the Daml community and its ecosystem? If so we’ve got a jam packed summary for you in our latest community update.


Impact and Migration

As part of the scope and status definition of core Daml technology as well as with the introduction of better stream queries in the JavaScript Client Libraries, there have been some deprecations. However, as per the guarantees given by semantic versioning and the feature and component status definitions, this merely means that these features may be removed with a major version release no earlier than 12 months from today.

If you are using any of the features listed below, you are advised to migrate to an alternative, detailed in the notes further down, within that time frame. Deprecations:

Ledger API Bindings


  • daml damlc package assistant command
  • daml 1.2 pragma
  • daml new with positional arguments

JavaScript Client Libraries

  • streamQuery and streamFetchByKey

Documentation on Versioning, Compatibility, and Support


Since the release of Daml SDK 1.0, great care has been taken to ensure proper forwards and backwards compatibility, and ensure the stability users rightfully expect of mission-critical software like Daml. Until now, however, the specific compatibility and long-term support guarantees that were in place were not articulated, nor was there a clear, reliable definition of what constitutes Daml’s “public API.” 

The new documentation addresses this by adding a number of new pages covering Scope, Status, Compatibility, and Evolution of core Daml technology. As part of bringing clarity to the status and evolution of Daml, there has been a bit of housekeeping and some features that have been superseded, or have not seen much uptake have been deprecated. As per the guarantees given by semantic versioning and the feature and component status definitions, this merely means that these features may be removed with a major version release no earlier than 12 months from today. See “Impact and Migration” below for specifics and recommended alternatives.

Specific Changes

Impact and Migration

First and foremost, it is important to understand that there is no immediate need to act. Deprecation does not mean removal or end of support. Deprecation is an early warning that the feature may be removed, or at least lose support after an appropriate deprecation cycle. In the case of entire features like the Ledger API Bindings, these cycles are described on the feature and component status definitions page, and the current guarantee is that support will continue for at least 12 months, and can only end with a major version release.

Smaller interface changes like the daml damlc package command are simply covered by semantic versioning, which means they will stay supported at least until the next major release.

  • Users of the Scala and Node.js Ledger API bindings are advised to switch to a combination of the Java Ledger API Bindings and JSON API.
  • Users of the Java Reactive Components are advised to switch to the JSON API to replace LedgerView, and simply react to events from the Ledger API Transaction Service to build event based automation. We are also actively working on making Daml Triggers a stable component so for solutions not going into production before Q2 2021, those are another recommended option.
  • daml damlc package should be replaced by daml build.
  • daml 1.2 pragmas should simply be removed.
  • Use of daml new with positional arguments should be changed to the form with the --template argument.

Better Stream Query Features in JavaScript Client Libraries


The existing streamQuery and streamFetchByKey functions accepted at most one query or key for which to stream data. Two new methods have been added to daml-ledger: streamQueries and streamFetchByKeys. They are similar to the existing singular versions, except they can take multiple queries and multiple keys, respectively, and return a union of the corresponding individual queries/keys. Because these new functions can do everything the existing ones can, we are deprecating the old forms, though in line with semantic versioning, they will not be removed until the next major version at the earliest.

Specific Changes

  • Addition of streamQueries function to daml-ledger
  • Addition of streamFetchByKeys function to daml-ledger
  • Deprecation of streamQuery in daml-ledger
  • Deprecation of streamFetchByKey  in daml-ledger

Impact and Migration

The upgrade path is straightforward:

  streamQuery(t); => streamQueries(t, []);

  streamQuery(t, undefined); => streamQueries(t, []);

  streamQuery(t, q); => streamQueries(t, [q]);

  streamFetchByKey(t, k); => streamFetchByKeys(t, [k]);

There is one caveat, though: streamFetchByKeys is a little less lenient in the format in which it expects the key. If your existing code conforms to the generated TypeScript code, everything should keep working, but if you were using plain JS or bypassing the TS type system, it is possible that you used to construct keys that will no longer be accepted. The new function requires all keys to be given in the output format of the JSON API, which is a little more strict than the general JSON <-> LF conversion rules.

New Chapters in the Introduction to Daml


An Introduction to Daml is intended to give new developers a crash course in Daml Contract development, getting them to the point of proficiency and effectiveness as quickly as possible. Several areas were identified as insufficiently covered either there or elsewhere in the documentation, so new sections are now available covering those gaps.

Specific Changes

Impact and Migration


Minor Improvements

  • Added undefined to Prelude, allowing the stubbing of functions, branches, or values during development.
  • daml start now has a --navigator-port option allowing you to specify the port for navigator’s web server.
  • Daml Script has gained two new query functions: queryContractKey and queryFilter, allowing the querying of active contracts by key, or by type with a predicate.
  • The compiler will now emit a warning when you have a variant type constructor with a single argument of unit type (). For example, data Foo = Bar () | Baz will result in a warning on the constructor Bar.  This is because the unit type will not be preserved when importing the package via data-dependencies. The correct solution, usually, is to remove the argument from the constructor: data Foo = Bar | Baz. Note that this rule does not apply when the variant type has only one constructor, since no ambiguity arises.
  • DAR files are now already validated on the Ledger API server before uploading them to the ledger. This may increase upload time, but means the risk of invalid packages polluting the ledger is reduced.


  • Newtype constructors must now have the same name as the newtype itself, since newtypes are record types. This restriction has been in place for Record Types for a long time, and it was an oversight that the compiler did not enforce this for newtype. This does not affect Daml compiled with older SDK versions in any way.
  • now raises an error when the day argument is outside the valid range. It previously rolled over into the next month in an unsafe fashion. This does not affect Daml compiled with older SDK versions in any way.
  • Authorization checks are interleaved with execution. This resolves a number of minor privacy leaks. See #132 for details.
  • Fixed a bug in the JavaScript client libraries where, upon closing a stream, the underlying WebSocket connection may not be properly closed.

Integration Kit

  • In kvutils, the BatchedSubmissionValidator no longer has a parameter for commit parallelism. Commits are now always written serially to preserve order.
  • In kvutils, state is now sorted before committing. This allows us to provide stronger guarantees with regards to the serialized write sets. If you have implemented your own CommitStrategy, you should also ensure the output state is sorted before committing.
  • The StandaloneApiServer now takes a healthChecks parameter, which should identify the health checks to be exposed over the gRPC Health Checking Protocol. This will typically look something like:

      healthChecks = new HealthChecks("read" -> readService, "write" -> writeService)

    Integrators may also wish to expose the health of more components. All components wishing to report their health must implement the ReportsHealth trait.
  • Fixed a race condition in the Ledger API Test Tool in which multiple connections were created to a single participant, and only one was shut down properly. This error was likely benign but may cause spurious errors to show up when the test tool finishes and shuts down.
  • The hardcoded timeout for party allocation and package uploads in the Ledger API Server can be configured via ParticipantConfig and the default value is now set to 2 minutes (#7593 & #6880).

What’s Coming

We are continuing to work on performance of the Daml integration components and improving production readiness of Daml Ledgers, but there are exciting features and improvements in the pipeline for the next few releases as well.

  • The Trigger Service will reach feature completion and move into Beta
  • The authentication framework for Daml client applications (like the Trigger Service) is being revisited to make it more flexible (and secure!)
  • The build process for Daml client applications using the JavaScript/TypeScript tooling is being improved to remove the most common error scenarios
  • Daml’s error and execution semantics are being tidied up with a view towards improving exception handling in Daml
  • Daml will get a generic Map type as part of Daml-LF 1.9

Public Blockchains and Distributed Ledgers

I try hard to differentiate between what people mean when they talk about “blockchains”. In my view, calling a public blockchain and a distributed ledger by the same blanket term is about as informative as calling SQL, MongoDB, and an Excel spreadsheet a “database”. It’s technically correct but very uninformative. So I’d like to try my hand at clarifying this and exploring the benefits and trade-offs of each at a very high level.

What Do We Mean When We Talk About Blockchains?

Speaking broadly, we have two relatively new types of technically similar but operationally different databases: the public blockchain and the distributed ledger. The difference between these two effectively reduces to one critical question:

How much privileged control do one or more entities have over their database?

So with this in mind let’s throw out the mislabeling and clarify some terms:

  • A public blockchain is any ordered database that anyone is expected to be able to read from or write to, at a cost. Bitcoin is the most well-known and popular example of this.
  • A distributed ledger is any private ordered database with restrictions on who may read from or write to it. Most commonly, access is restricted to those participating in the operation of the ledger. Fabric and Corda are the most well-known and popular examples of these.

These definitions gloss over other key differences, but what we can say generally is:

  • A public blockchain is the most technically feasible attempt at providing equal world-wide access to a database;
  • A distributed ledger is the most technically feasible attempt at providing privileged multi-party access to a database.

Why Would I Want All My Data To Be Public?

The advantage of a public blockchain is that any number of people who may or may not know or trust each other can use it to record data that is important to them. This often takes the form of transactions that have at least some, and often significant monetary value. Ultimately if you can afford to save data to this blockchain then you get to save data there with no other intermediaries in between.

This presents two tradeoffs:

  1. The whole world can see your data.
  2. It costs a direct monetary fee to use.

If you need the benefits of a public blockchain, which primarily amount to a censorship- and corruption-resistant form of money, then use it. If you don’t, then you might want to look elsewhere.

I’d Like My Data To Be Private Please

Distributed ledgers look a lot like traditional SQL databases with one crucial difference:

Multiple users, entities, clients, and partners can all coordinate, share business requirements (represented as code), and agree on some or all of the contents of the database.

In the most egalitarian version it gives each of these participants equal say, but the actual structure can be whatever you imagine for your needs.

If you were to choose a distributed ledger over a SQL database, it would involve some coordination between organizations on business requirements. At first blush this implies a higher workload, but in reality it reduces workload by consolidating what would be multiple separate infrastructures enforcing the same set of business requirements into one. Furthermore, these infrastructures help to facilitate trust and strengthen relationships, as all participants in your ledger can see and be certain of the parts relevant to them. This is a property wholly unavailable to traditional database architectures.

When compared to public blockchains, distributed ledgers are much more cost efficient to save data to (like any other private database) and the data within it is generally not accessible to the entire world. They also offer substantially more flexibility with upgrades and feature improvements to both your database and development stack. With a distributed ledger you only need to upgrade the nodes participating in your private network, not the entire world as you would need with a public chain.

So in conclusion public blockchains and distributed ledgers address vastly different concerns and are worth distinguishing between. Public blockchains are really good for exchanging value in some cases, while distributed ledgers are really good for managing business operations.

We would like to get your thoughts and opinions on the two: What are you using blockchain or distributed ledgers for? What other trade-offs do you see? Start a discussion with us here.

Join the Daml Forum

When Failsafes Fail and What to do About it

Legacy systems supporting mission-critical networks that simply cannot go down often have some form of failover system. In one recent real-world case, the Tokyo Stock Exchange experienced this type of outage: its primary system went down, triggering a failover to a secondary system which also failed. In another, the US’s 911 system had an hour-long outage across 14 different states. In today’s world these things happen; in tomorrow’s world, with the right technology, they don’t need to.

Two Eggs, One Basket

Running two systems in parallel for failover is common practice in legacy systems, but the nail-biting part for any IT engineer is the switchover: when the primary goes down and the secondary takes over. That transition represents a lot of risk and stress. Everyone holds their breath and breathes a sigh of relief when it’s over, or panics, loses a lot of money, and calls their significant other to say they’ll be late to dinner during a quick sprint across the server room floor.

More Baskets

Distributed ledgers present a solution to this problem by being in a constant state of failover. They run side by side, continually ensuring that they are operating properly with each other. There is no primary and secondary in a distributed ledger; instead there is node 1, node 2, node 3, and so on. If node 1 were to experience a hardware issue and fail, nodes 2 and 3 would continue their operations uninterrupted. IT would have time to diagnose node 1 and bring it back up, the trading day would end, and everyone would go home to eat dinner on time.

There are two mechanisms by which these nodes prevent a failure – relay and consensus:

  1. When a node receives a new transaction, say a trade, it relays it immediately to the rest of the nodes in the network so that all nodes have seen the same transaction at around the same time.
  2. Every so often, say every few seconds, the nodes discuss the latest transactions sent to them and programmatically settle on the new set of data. They then continue on their merry way, receiving and processing transactions, and then settling up once again just a few moments later.
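To make the two mechanisms concrete, here is a toy TypeScript simulation (not how any particular ledger is implemented) of nodes relaying transactions and then settling on a common set:

```typescript
type Tx = string;

class LedgerNode {
  seen = new Set<Tx>();
  committed: Tx[] = [];
  constructor(public id: number, public peers: LedgerNode[] = []) {}

  // Mechanism 1: on receipt, relay to all peers so every node sees the
  // transaction at around the same time.
  receive(tx: Tx) {
    if (this.seen.has(tx)) return; // already relayed; stop the flood
    this.seen.add(tx);
    for (const p of this.peers) p.receive(tx);
  }
}

// Mechanism 2: every so often, the nodes settle on the set of transactions
// they have all seen, in an agreed order.
function settle(nodes: LedgerNode[]) {
  const agreed = [...nodes[0].seen]
    .filter(tx => nodes.every(n => n.seen.has(tx)))
    .sort();
  for (const n of nodes) n.committed = agreed;
}

const [n1, n2, n3] = [new LedgerNode(1), new LedgerNode(2), new LedgerNode(3)];
n1.peers = [n2, n3]; n2.peers = [n1, n3]; n3.peers = [n1, n2];

n1.receive("trade-A"); // submitted to node 1, relayed to 2 and 3
n2.receive("trade-B"); // submitted to node 2, relayed to 1 and 3
settle([n1, n2, n3]);

// If node 1 now fails, nodes 2 and 3 still hold the full committed set.
console.log(n3.committed); // [ 'trade-A', 'trade-B' ]
```

The toy version glosses over real-world concerns (ordering under contention, Byzantine peers, persistence), but it shows why no single node is a point of failure: every transaction lives on every node by the time it is committed.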

This is how distributed ledgers provide a level of fault tolerance not previously seen in legacy systems. Most importantly, these nodes can be distributed within an organization and geographically, much like you’d do in legacy systems. The difference is when the data syncs. Legacy systems sync up every so often, or not until a critical issue is encountered. Distributed ledgers, on the other hand, sync up continuously across the wire, making sure they’re all on the same page. So you’re never putting all your eggs in one basket.

Distributed ledgers are easy to deploy. Try our demo in your web browser here:

Build, Deploy, and Run a Daml application