Interconnected /November



Unlock your ledger, free your business

The second installment of Interconnected sees the world’s second largest stock exchange group choose Daml, the world’s largest virtualization company release its blockchain with support for Daml, a live demo of interoperability across two blockchains and a database, and much more.

As the world of blockchains, DLTs, and even smart contracts on traditional databases moves quickly, you need to future-proof your investments with a technology that can work with all of them. Daml enables application portability, so you don’t have to rewrite your application if you need to move to another platform. And coming soon, Daml will allow for interoperability across platforms…

Read the Full Newsletter 

Digital Asset Demos 4 Key Properties of Interoperability

During the OECD 2020 Global Blockchain Policy Forum, Digital Asset’s CEO ran a live demo showing how to integrate CBDCs across different blockchain and database platforms.

Speaking at the OECD Blockchain Policy Forum in November, Digital Asset Co-founder and CEO Yuval Rooz unveiled a new blockchain interoperability protocol that enables applications to seamlessly interact across different platforms. During the session – “Interoperability in DLT – What does it mean? Is it important? What are the options?” – Rooz gave a live demo showing how wholesale and retail applications of Central Bank Digital Currencies (CBDCs) can interoperate across Hyperledger Fabric, Ethereum and a traditional Postgres database, making CBDCs compatible regardless of the underlying technology.

As Central Banks move away from experimenting with permissionless public blockchains towards production deployments on networks designed for the enterprise, the importance of interoperability has only increased.

“To unleash the full potential of CBDCs and DLT, interoperability is a must,” said Rooz. “The reality is that there is not going to be one chain or one ledger to rule them all. The other challenge is that not every use case is going to be suitable for DLT. There will be solutions running on traditional databases. If we don’t solve for interoperability now, all we’re doing is re-creating the problem with slightly bigger data silos using newer technology.”

Rooz identified four key components of true interoperability and explained why they are vital for any central bank or company considering going live with any asset on DLT.

Rooz added, “Interoperability is a necessity and must include atomicity, privacy, cross-ledger technology and extensibility. With these four components we can realize the full potential of DLT and new solutions, like CBDCs.”

During the session, Rooz also announced that Digital Asset will open source the code used to build the CBDC interoperability functionality. “CBDCs must be open source,” said Rooz. “Everyone needs to see these implementations. We have a lot more functionality to share and we want to make it collaborative.” To be notified once the application is open source, sign up here.

The Global Blockchain Policy Forum took place on November 16–20, 2020. It is the leading international event focused on the policy implications of blockchain and its applications, led by the OECD’s Blockchain Policy Centre. The event brings together top government officials, policy advisors, central bankers, academics and more.

Zooming in on Daml’s performance

tl;dr If you care about performance, use Daml’s builtin syntax for accessing record fields.


I guess it is no secret that I’m not the biggest fan of lenses, particularly not in Daml. I wouldn’t go as far as saying that lenses, prisms, and the other optics make my eyes bleed, but there are definitely better ways to handle records in most cases. One case where the builtin syntax of Daml has a clear advantage over lenses is record access, getting the value of a field from a record:

    record.field1.field2

It doesn’t get much clearer or much shorter. No matter what lens library you use, your equivalent code will look something like

    get (lens @"field1" . lens @"field2") record

or maybe

    get (_field1 . _field2) record

if you’re willing to define lenses like _field1 and _field2 for each record field. Either way, the standard Daml way is hard to beat. And if you need to pass a field accessor around as a function, the (.field1) syntax has you covered as well.

Clarity is often in the eye of the beholder and short code is not per se good code. The only thing that seems to matter universally is performance. Well, let’s have a look at the performance of both styles then. If you want to play along with the code in this blog post, you can find a continuous version of it in a GitHub Gist.


van Laarhoven lenses

Before we delve into a series of benchmarks, let’s quickly recap van Laarhoven lenses. The type of a lens for accessing a field of type a in a record of type s is

    type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s)

If it wasn’t for everything related to the functor f in there, the type would be the rather simple (a -> a) -> (s -> s). This looks like the type of a higher-order function that turns a function for changing the value of the field into a function for changing the whole record. That sounds pretty useful in its own right and is also something that could easily be used to implement a setter for the field: just use \_ -> x as the function for changing the field in order to set its value to x. Alas, it seems totally unclear how we could use such a higher-order function to produce a getter for the field. That’s where f comes into play. But before we go into the details of how to get a getter, let’s implement a few lenses first.

Given a record type

    data Record = Record with field1: Int; ...

a lens for field1 can be defined by

    _field1: Lens Record Int
    _field1 f r = fmap (\x -> r with field1 = x) (f r.field1)

In fact, this is the only way we can define a function of this type without using any “magic constants”. More generally, a lens can be made out of a getter and a setter for a field in a very generic fashion:

    makeLens: (s -> a) -> (s -> a -> s) -> Lens s a
    makeLens getter setter f r = setter r <$> f (getter r)

Again, this is the only way a Lens s a can be obtained from getter and setter without explicitly using bottom values, such as undefined.

Using Daml’s HasField class, we can even produce a lens that works for any record field:

    lens: forall x s a. HasField x s a => Lens s a
    lens = makeLens (getField @x) (flip (setField @x))

The lens to access field1 is now written as

    lens @"field1"

Remark. In my opinion, the most fascinating fact about van Laarhoven lenses is that you can compose them using the regular function composition operator (.) and the field names appear in the same order as when you use the builtin syntax for accessing record fields, as indicated by the example in the introduction.

Implementing the getter

That’s all very nice, but how do we actually use a lens to access a field of a record? As mentioned above, that’s where the f comes into play. What we are looking for is a function

    get: Lens s a -> s -> a
    get l r = ???

Recall that the type Lens s a has the shape forall f. Functor f => .... This means the implementation of get can choose an arbitrary functor f and an arbitrary function of type (a -> f a) and pass them as arguments to l. The functor that will solve our problem is the so-called “const functor” defined by

    data Const b a = Const with unConst: b

    instance Functor (Const r) where
        fmap _ (Const x) = Const x

The key property of Const is that fmap does pretty much nothing with it (except for changing its type). We can use this to finally implement get as follows:

    get: Lens s a -> s -> a
    get l r = (l Const r).unConst

With this function, we can get the value of field1 from an arbitrary record r by calling

    get (lens @"field1") r

For the sake of completeness, let’s quickly define a setter as well. As insinuated above, we don’t really need the f for that purpose. That’s exactly what the Identity functor is for:

    data Identity a = Identity with unIdentity: a

    instance Functor Identity where
        fmap f (Identity x) = Identity (f x)

    set: Lens s a -> a -> s -> s
    set l x r = (l (\_ -> Identity x) r).unIdentity
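To make all of this concrete, here is a self-contained Haskell transliteration of the construction above (Haskell and Daml share the relevant Functor machinery; the Record type and its field are made up for illustration, and Haskell uses r { field1 = x } where Daml writes r with field1 = x):

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Const b a = Const { unConst :: b }
newtype Identity a = Identity { unIdentity :: a }

instance Functor (Const b) where
  fmap _ (Const x) = Const x          -- does nothing except change the type

instance Functor Identity where
  fmap f (Identity x) = Identity (f x)

type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s)

-- getter: instantiate the lens at the const functor
get :: Lens s a -> s -> a
get l r = unConst (l Const r)

-- setter: instantiate the lens at the identity functor, ignoring the old value
set :: Lens s a -> a -> s -> s
set l x r = unIdentity (l (\_ -> Identity x) r)

-- a hypothetical record with a monomorphic lens, as in the post
data Record = Record { field1 :: Int } deriving (Eq, Show)

_field1 :: Lens Record Int
_field1 f r = fmap (\x -> r { field1 = x }) (f (field1 r))

main :: IO ()
main = do
  print (get _field1 (Record 42))              -- 42
  print (get _field1 (set _field1 7 (Record 42)))  -- 7
```

The same Const/Identity trick works verbatim in Daml; only the record update syntax differs.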

How to micro-benchmark Daml

Micro-benchmarking Daml code is unfortunately still a bit of an art form. We will use the scenario-based benchmarking approach described in a readme in the daml repository. To this end, we need to write a scenario that runs get (lens @"field") r for some record r. The benchmark runner will then tell us how long the scenario runs on average.

In order to write such a scenario, we need to take quite a few things into consideration. Let’s have a look at the code first and explain the details afterward:

    records = map Record [1..100_000]         -- (A)

    benchLens = scenario do
        _ <- pure ()                          -- (B)
        let step acc r =
                acc + get (lens @"field1") r  -- (C)
        let _ = foldl step 0 records          -- (D)
        pure ()

The explanations for the marked lines are as follows:

  • (D) Running a scenario has some constant overhead. By running the getter 100,000 times, we make this overhead per individual run of the getter negligible. However, folding over a list has some overhead too, including some overhead for each step of the fold. In order to account for this overhead, we use a technique that could be called “differential benchmarking”: We run a slightly modified version of the benchmark above, where line (C) is replaced by acc + r and line (A) by records = [1..100_000]. The difference between both benchmarks will tell us how long it takes to execute line (C) 100,000 times.
  • (A) In order for the differential benchmarking technique to work, we need to compute the value of records outside of the actual measurements since allocating a list of 100,000 records takes significantly longer than allocating a list of 100,000 numbers. To this end, we move the definition of records to the top-level. The Daml interpreter computes top-level values the first time their value is requested and then caches this value for future requests. The benchmark runner fills these caches by executing the scenario once before measuring.
  • (B) Due to the aforementioned caching of top-level values and some quirks around the semantics of do notation, we need to put our benchmark after at least one <- binding. Otherwise, the result of the benchmark would be cached and we would only measure the time for accessing the cache.
  • (C) We put the code we want to benchmark into a non-trivial context to reflect its expected usage. If we dropped the acc +, then get (lens @"field1") r would be in tail position of the step function and hence not cause a stack allocation. However, in most use cases the get function will be part of a more complex expression and its result will be pushed onto the stack. Thus, it seems fair to benchmark the cost of running the get plus the pushing onto the stack. The additional cost of the addition is removed by the differential benchmarking technique.
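The differential idea itself is not Daml-specific. A rough Haskell sketch of it (with toy folds standing in for the scenarios, and getCPUTime in place of the benchmark runner) looks like this:

```haskell
import Control.Exception (evaluate)
import Data.List (foldl')
import System.CPUTime (getCPUTime)

-- Time forcing a value to WHNF; getCPUTime reports picoseconds.
timeNs :: a -> IO Double
timeNs x = do
  t0 <- getCPUTime
  _  <- evaluate x
  t1 <- getCPUTime
  pure (fromIntegral (t1 - t0) / 1000)

main :: IO ()
main = do
  let xs = [1 .. 100000] :: [Int]
  _      <- evaluate (sum xs)                             -- pre-force, like the cached top-level value
  tNoop  <- timeNs (foldl' (+) 0 xs)                      -- fold overhead only
  tBench <- timeNs (foldl' (\acc r -> acc + r * r) 0 xs)  -- fold plus the work under test
  print ((tBench - tNoop) / 100000)                       -- per-step cost in ns (very noisy)
```

In practice you would run each measurement many times and take the minimum; a single pair of runs is far too noisy to be meaningful.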

First numbers

Recall that the objective of this blog post is to compare lenses to the builtin syntax for accessing record fields in terms of their runtime performance. To this end, we run three benchmarks:

  1. benchLens as defined above,
  2. benchNoop, the variant of benchLens described under (D) above,
  3. benchBuiltin, a variant of benchLens where line (C) is replaced by acc + r.field1.

If T(x) denotes the time it takes to run a benchmark x, then we can compute the time a single get (lens @"field1") r takes by

    (T(benchLens) - T(benchNoop)) / 100_000

Similarly, the time a single r.x takes is determined by

    (T(benchBuiltin) - T(benchNoop)) / 100_000

Running these benchmarks on my laptop produced the following numbers:

    x             T(x)      (T(x) - T(benchNoop)) / 100_000
    benchNoop     11.1 ms   —
    benchLens     188.7 ms  1776 ns
    benchBuiltin  15.7 ms   46 ns

Benchmarks of the polymorphic lens and the builtin syntax.
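The per-access numbers in the last column follow directly from the formula above; as a quick sanity check (pure arithmetic only, times in ms, results in ns):

```haskell
-- (T(x) - T(benchNoop)) / 100_000, converting ms to ns (1 ms = 1,000,000 ns)
perOpNs :: Double -> Double -> Double
perOpNs tX tNoop = (tX - tNoop) * 1000000 / 100000

main :: IO ()
main = do
  print (perOpNs 188.7 11.1)  -- lens:    ~1776 ns
  print (perOpNs 15.7 11.1)   -- builtin: ~46 ns
```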

Wow! That means a single record field access using the builtin syntax takes 46 ns whereas doing the same with get and lens takes 1776 ns, which is roughly 38 × 46 ns. That is more than 1.5 orders of magnitude slower!

Why are lenses so slow as getters?

This is almost a death sentence for lenses as getters. But where are these huge differences coming from? If we look through the definitions of lens and get, we find that there are quite a few function calls going on and that the two typeclasses Functor and HasField are involved in this as well. Calling more functions is obviously slower. Typeclasses do have a significant runtime overhead in Daml since instances are passed around as dictionary records at runtime and calling a method selects the right field from this dictionary.

If we don’t want to abandon the idea of van Laarhoven lenses, we cannot get rid of the Functor typeclass. But what about HasField? If we want to be able to construct lenses in a way that is polymorphic in the field name, there’s no way around HasField. However, if we were willing to write plenty of boilerplate like the monomorphic _field1 lens above, we could do away with HasField. Benchmarking this approach yields:

    x          T(x)     (T(x) - T(benchNoop)) / 100_000
    benchMono  92.5 ms  814 ns

Benchmark of the monomorphic _field1 lens.

Accessing fields with monomorphic lenses is twice as fast as with their polymorphic counterparts, but still more than an order of magnitude slower than using the builtin syntax. This implies that no matter how much better we make the implementation of lens, even if we used compile time specialization for it, we wouldn’t get better than a 17x slowdown compared to the builtin syntax.

Temporary stop-gap measures

If a codebase is ubiquitously using lenses as getters, then rewriting it to use the builtin syntax instead will take time. It might make sense to replace some very commonly used lenses with monomorphic implementations. However, in a codebase defining hundreds of record types, each with a few fields, there is most likely no small group of lenses whose monomorphization would make a difference.

Fortunately, there’s one significant win we can achieve without changing much code at all. The current implementation of lens is pretty far away from the implementation of _field1. If we move lens closer to _field1, we arrive at

    fastLens: forall x r a. HasField x r a => Lens r a
    fastLens f r = fmap (\x -> setField @x x r) (f (getField @x r))

Benchmarking this implementation gives us

    x              T(x)      (T(x) - T(benchNoop)) / 100_000
    benchFastLens  128.9 ms  1178 ns

Benchmark of the polymorphic fastLens.

These numbers are still not great, but they are at least a 1.5x speedup compared to the implementation of lens.

Chains of record accesses

So far, our benchmarks were only concerned with accessing one field in one record. A pattern that occurs quite frequently in practice is nested records and chains of record accesses, as in

    r.field1.field2.field3

With the builtin syntax, every record access you attach to the chain is as expensive as the first record access. Benchmarks confirm this linear progression. However, we could easily make every record access after the first one in a chain significantly faster in the Daml interpreter.

There’s a similar linear progression when using get and fastLens. Unfortunately, we have no chance of optimizing chains of record accesses in any way since they are completely intransparent to the compiler and the interpreter.
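For reference, here is what such a chain looks like in the Haskell sketch (record and field names are made up): the composed lens is an opaque higher-order function, which is exactly why neither the compiler nor the interpreter can see through it.

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Const b a = Const { unConst :: b }
instance Functor (Const b) where
  fmap _ (Const x) = Const x

type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s)

get :: Lens s a -> s -> a
get l r = unConst (l Const r)

-- hypothetical nested records
data Inner = Inner { field2 :: Int }
data Outer = Outer { field1 :: Inner }

_field1 :: Lens Outer Inner
_field1 f r = fmap (\x -> r { field1 = x }) (f (field1 r))

_field2 :: Lens Inner Int
_field2 f r = fmap (\x -> r { field2 = x }) (f (field2 r))

main :: IO ()
main =
  -- plain function composition; fields read left to right, as in r.field1.field2
  print (get (_field1 . _field2) (Outer (Inner 5)))  -- 5
```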


Conclusion

I think the numbers say everything there is to say:

    Method            Time (in ns)  Slowdown vs builtin
    monomorphic lens  814           17.7x
    polymorphic lens  1178          25.6x

Summary of the benchmarks.

In view of these numbers, I would recommend to everybody who cares about performance to use Daml’s builtin syntax for accessing record fields!

Only focusing on getters might be modestly controversial since lenses also serve a purpose as setters. I expect the differences in performance between Daml’s builtin syntax for updating record fields

    r with field1 = newValue

and using set and a lens to be in the same ballpark as for getters when updating a single field in a single record. When updating multiple fields in the same record, the Daml interpreter already performs some optimizations to avoid allocating intermediate records. Such optimizations are impossible with lenses.

However, when it comes to updating fields in nested records, Daml’s builtin syntax is not particularly helpful:

    r with field1 = r.field1 with field2 = newValue

It gets even worse when you want to update the value of a field depending on its old value using a function f. In many lens libraries this function is called over and can be used like

    over (lens @"field1" . lens @"field2") f r

Expressing the same with Daml’s builtin syntax feels rather clumsy:

    r with field1 = r.field1 with field2 = f r.field1.field2
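For completeness, over is not defined in the snippets above; in the Haskell sketch it falls out of the same Identity trick as set, and it composes over the hypothetical nested records just like the getter:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Identity a = Identity { unIdentity :: a }
instance Functor Identity where
  fmap f (Identity x) = Identity (f x)

type Lens s a = forall f. Functor f => (a -> f a) -> (s -> f s)

-- modify a field through a lens: run f inside Identity, unwrap at the end
over :: Lens s a -> (a -> a) -> s -> s
over l f r = unIdentity (l (Identity . f) r)

-- set is just over with a constant function
set :: Lens s a -> a -> s -> s
set l x = over l (const x)

data Inner = Inner { field2 :: Int } deriving (Eq, Show)
data Outer = Outer { field1 :: Inner } deriving (Eq, Show)

_field1 :: Lens Outer Inner
_field1 f r = fmap (\x -> r { field1 = x }) (f (field1 r))

_field2 :: Lens Inner Int
_field2 f r = fmap (\x -> r { field2 = x }) (f (field2 r))

main :: IO ()
main = print (over (_field1 . _field2) (+ 1) (Outer (Inner 41)))
-- Outer {field1 = Inner {field2 = 42}}
```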

If we ever want to make lenses significantly less appealing in Daml than they are today, we need to innovate and make the builtin syntax competitive when it comes to nested record updates. Who would still want to use lenses if you could simply write

    r.field1.field2 ~= f

in Daml?


Daml also has a new learn section where you can begin coding online:

Learn Daml online

HKEX Connects Hong Kong and Mainland China Markets with Daml

The Hong Kong Stock Exchange to introduce Synapse, a settlement acceleration platform for Northbound Stock Connect, in collaboration with Digital Asset and the DTCC.

As capital markets strive to create more efficiencies across post-trade processing, Daml smart contracts are emerging as the go-to standard to automate the entire process.

Today’s process is cumbersome, dated and difficult to track. To start, asset managers must first decide how to allocate each trade across their funds, and that information needs to be matched with their broker and sent on to other service providers and exchange members such as custodians, clearing members and the exchange’s clearing house. Currently, this is done through bilateral communications across a variety of platforms. Each participant is required to check and enrich data about the trade, matching and reconciling it with their counterparties before instructing the next participant in the chain – eventually producing final settlement instructions to the exchange.

We are pleased to share that Hong Kong Exchanges & Clearing Limited (HKEX) has announced its plans to launch HKEX Synapse, a new settlement acceleration platform for its landmark Stock Connect program. The new integrated platform, powered by Daml smart contracts, eliminates sequential processes and offers a single source of truth for the settlement of securities. It produces accurate and timely status updates. And integration with DTCC’s Institutional Trade Processing services, such as CTM, now enables international investors to automate and expedite the settlement process.

Commenting on the news, HKEX Chief Executive, Charles Li said:

“Embracing new technology to further develop our markets is a cornerstone of our strategy and we are delighted to work together with DTCC and Digital Asset on this exciting new enhancement to our landmark mutual access programme with Mainland China.”

Since its launch, institutional investors’ interest and participation in Northbound Stock Connect has grown significantly, especially following the inclusion of China’s A-shares in major global indices. In the first three quarters of 2020, Stock Connect’s Northbound average daily turnover more than doubled from the same period of 2019, to a record RMB 90 billion.

Mainland China’s tight settlement cycle has created the need for a more efficient settlement infrastructure, and HKEX Synapse will address this, helping investors to manage their portfolios and risks.

HKEX Synapse is an optional platform, and is expected to begin testing in 2021 with a group of pilot users, ahead of production deployment targeted for Q1 2022.

Click here to read the full press release

VMware Blockchain with Daml is Now Available

Deploy mission-critical decentralized applications across an enterprise-grade blockchain platform trusted by the world’s largest organizations

Today, our technology partner VMware announced commercial availability of VMware Blockchain 1.0! Digital Asset and VMware have been working closely together for over two years to create a deep integration between VMware Blockchain and Daml smart contracts. To coincide with their release, we are also making this integration commercially available as the Daml Driver for VMware Blockchain.

We have long believed that blockchain technology will not be truly ready for the mainstream until we see less competition and more collaboration within the blockchain technology stack. By each focusing on our area of expertise – VMware on enterprise-grade, scalable infrastructure, and Digital Asset on distributed application language design capable of meeting the complexities of modern markets – VMware Blockchain with Daml represents the best of both companies in a tightly integrated offering.

It’s this combination that has been selected by both the Australian Securities Exchange (ASX), to underpin the way $1 trillion of stocks changes hands in the world’s 9th largest economy, and Broadridge Financial Solutions, to streamline workflows for $4.5 trillion of repurchase agreements.

With Daml quickly becoming the standard for mission-critical smart contract development, the Daml Driver for VMware Blockchain provides enterprises with a simple way to deploy multi-party applications on a platform developed by a world-leading organization they already know and trust.

Learn More about VMware Blockchain with Daml

VMware Blockchain with Daml provides businesses with a decentralized platform that unlocks data silos, freeing data to flow securely, privately and instantaneously, with the high availability and performance that meet the most stringent requirements of mission-critical distributed workloads.

Additional features include:

Integration with existing systems using SDDC infrastructure with the ability to scale across hybrid, public, and private cloud environments.

Day-2 operations with application management and monitoring built into the platform for simple provisioning, comprehensive metrics and logs, and 24×7 global support, including VMware support for the Daml runtime.

VMware Blockchain’s Scalable Byzantine Fault Tolerance (SBFT) consensus mechanism, which ensures the integrity of data in the blockchain while protecting against faulty and malicious actors.

Sub-transaction level privacy at the infrastructure level with data distribution rules automatically determined by the Daml runtime.

Now, businesses across all industries have the ability to leverage this powerful combination of smart contracts and blockchain technology from Digital Asset and VMware Blockchain. Start unlocking innovation today with automated multi-party workflows through Daml. Use Daml for VMware Blockchain to deploy production-ready applications on an enterprise blockchain platform trusted by the largest enterprises to support their mission-critical workloads.

Learn more about the Daml Driver for VMware Blockchain by Digital Asset here.

Unlock Developer Productivity Without Getting Locked-in

This blog is the first of a three-part series focusing on the power of Daml and distributed ledger technology.

Decentralized applications and distributed ledger technology (DLT) are increasing in adoption as more businesses aim to automate operational inefficiencies and leverage resources for revenue-generating activities. Smart contracts are a great tool to support decentralized workflows and simplify complex multi-party transactions; however, many smart contract solutions require a large upfront investment in blockchain technology. This is a major concern for many organizations.

Having the application framework tied to the infrastructure greatly impedes the adoption of new technology. What happens if you start with one blockchain platform, only to find out during the application development cycle that a different DLT platform better addresses your non-functional requirements? Many organizations also have significant investments, in both time and outstanding software licenses, in their existing infrastructure that must be leveraged, limiting which frameworks developers can use for new projects. This means multi-party workflow projects are either deferred until a later time or must be rewritten when migrated from traditional infrastructure to DLT platforms. Whether you buy a new DLT platform or use your current system, most businesses find themselves inevitably locked into their infrastructure.

Daml is the only vendor-agnostic and interoperable smart contract language that simplifies multi-party workflows and improves operational efficiency across parties, with rights, obligations, and authorization built into the code. The Daml ecosystem also provides a suite of integrations called Daml Drivers that enable businesses to deploy applications across a variety of distributed ledgers and databases. The same Daml application can be migrated to any Daml-enabled ledger with no code changes. Essentially, with Daml, the underlying infrastructure can change as business requirements evolve. This means developers and business analysts no longer need to worry about code rewrites, evolving roadmaps, and proving the value of underlying infrastructure. Instead, developers can focus on creating new value for their business.

Learn More about Ledgers Unlocked

Here is how Daml-driven applications unlock developer productivity for greater innovation across the organization:

Daml code is completely decoupled from the underlying ledger, abstracting away the complexity of DLT. This enables developers to focus only on the application’s business logic with the freedom to deploy anywhere Daml is supported.

All Daml-driven applications are ledger agnostic, i.e. the application will not break nor require a code rewrite when migrated to new infrastructure (DLT or database). Truly a write once, deploy anywhere model.

Daml Driver license entitlements provide the freedom to move your application to any supported ledger or database under the same entitlement. This means businesses can redeploy to a different Daml-enabled ledger if needs change, while continuing to use their existing license from Digital Asset.

Ledger lock-in is a painful reality for businesses evaluating blockchain technology. As you start to build new solutions for multi-party applications, only Daml future proofs the application from an evolving ledger landscape.

Be sure to check out the Ledgers Unlocked Program, Digital Asset’s solution for businesses seeking to build applications across various stacks without license restrictions, and learn how you can leverage Daml Drivers on Corda, VMware Blockchain, and PostgreSQL without experiencing ledger lock-in.


Digital Customer Experiences Using Smart Contracts – Part 2

In my last blog, Enhancing Digital Customer Experiences Using Smart Contracts, we looked at how customer preferences management can be dramatically simplified using smart contracts. A smart-contracts-based approach avoids treating customer preferences management as an add-on or external database (even if it physically is one), which avoids costly reconciliations and process breaks due to data mismatches.

Today I’m going to build on that premise and discuss how we can streamline the management of customer preferences across multiple companies. This will lead to the creation of cross-industry and cross-company customer experiences, while improving operational efficiency and promoting customer privacy. This is different from a business process, such as supply chain or trade finance, being executed across companies. Our focus here will be on digital and personalized customer experiences.

The motivation for this post is quite simple. Customers are demanding extreme personalization; the digital revolution is causing tough competition on pricing, so differentiating and delivering value through engagement is paramount; and time to market for innovation is becoming a critical enabler of business success, or even survival. The collaboration model using Daml outlined in this post can help meet all these goals.

This post is relevant for those who present a business partnership to customers (e.g. co-branded credit cards, airline loyalty programs). It is also relevant to those who would like to present such a face to their customers but have been unable to do so because of the associated technological and business process complexity (e.g. multiple complementary retailers, as in the ill-fated Amex Plenti program).

Examining the problem

In the previous blog post, we considered the example of a credit card company managing customer preferences across its card products and business divisions. 

This time let’s take the example of a fitness center and an insurer who would like to create mutual business benefits:

  • Fewer insurance claims (for the insurer)
  • More and longer fitness subscriptions (for the fitness center)

At the same time, this partnership benefits the customers in multiple ways:

  1. Discounts on insurance
  2. Personalized fitness goals
  3. Discounts on fitness subscriptions
  4. Seamless subscriptions to third-party services such as supermarkets, nutritionists, etc.

I’ve written about this actual business partnership before if you are interested in learning more. These kinds of customer experience ecosystems will become the norm, so in this post I’m describing how we could accomplish this more simply using Daml, remove reconciliations, and make this a scalable ecosystem whose membership can grow or shrink on demand.

The first few problems you encounter when thinking about such an ecosystem can almost cause you to give up on the ambition. So far, there hasn’t been a technology specifically designed to solve exactly these problems:

  1. Inviting independent participants to an ecosystem 
  2. Allowing customers to receive uniform experiences as they engage across this ecosystem
  3. Maintaining privacy of customer data between participants 
  4. Eliminating back and forth data transfer and reconciliation between participants

But let’s see how a Daml-enabled ecosystem can make this a breeze while maintaining the privacy and confidentiality of each party. The overall business process looks like this:

Representative customer experience flow (enables a single golden source of data while maintaining privacy – eliminating data reconciliation and mismatch, allowing a common view of the business process, making compliance and audit easy)

Onboarding Participants

We have at least three types of participants, but of course we can make this solution extensible if it is intended to be a platform.

Customers can be registered from a campaign by creating a customer registration record that they can accept. That allows us to assign other parameters on the underlying customer record. I’ve assumed a single insurer and a single fitness center at this time, and that someone has been designated to maintain the ecosystem and decide who should be onboarded. In a closed network, this role can be assumed by either a consortium or one of the participants. The model can also be made more generic to handle different types of entities.

Note on enterprise adoption: The Daml smart contract-based workflow outlined below is exposed through standard REST APIs. Existing web, mobile and other applications can continue to function in the usual way, with the underlying workflow and a golden source of data managed by Daml according to the modeled privacy requirements.

Capturing and Sharing Fitness Visits

One of the goals of this partnership is to share the healthy practices adopted by the customer with the insurer. This allows the insurer to do things such as reduce premiums, reimburse fitness center fees in part or in full, and compute the business value of the partnership. The fitness center, in turn, can do the same with data shared with it.

We do this by allowing the fitness center to send the information on customer visits to the insurer. This is done with customer consent of course. This consent can also be revoked at any time (not shown).

A single underlying customer record allows both the insurer and the fitness center to do this against the same golden source of data.
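A minimal Daml sketch of such a shared visit record (names are illustrative, and the consent and revocation workflow itself is not shown):

```daml
module Fitness where

-- The customer co-signs each visit record, so sharing is consent-based;
-- the insurer sees the visit as an observer, nobody else does.
template FitnessVisit
  with
    fitnessCenter : Party
    customer      : Party
    insurer       : Party
    visitDate     : Date
  where
    signatory fitnessCenter, customer
    observer insurer
```

Because the customer is a signatory, a visit record can only be created with the customer's authority, typically delegated through a consent contract, which could also carry a Revoke choice.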

Personalizing the Fitness Experience

As part of the ecosystem, the insurer may want to personalize the fitness goals that the customer acts upon. These goals could be preventative, based on the customer’s risk profile, or required to achieve full recovery from something that has already occurred. In the past this has been laden with privacy risks and data storage compliance issues. With Daml, all of that is simplified.

The customer simply allows the insurer to attach already existing health goals. These goals can then be viewed by the trainer at the fitness center, who can deliver truly personalized fitness. Such a model can also evolve into an accountability model for each party: customer, trainer, fitness center and insurer.

Expanding the ecosystem

As you may have guessed we can extend this ecosystem to other parties as well. For example, we may want to onboard nutritionists to help develop the health goals, or a supermarket to offer discounts on the right food etc. We may also onboard local marathons and other sports apparel retailers to further personalize this experience. 

In the end, not only does the customer win, but so does every party who is part of the ecosystem. We can say goodbye to blind mass campaigns, and truly personalize the experience through cross-industry customer journeys.

Daml also has a new Learn section where you can begin to code online:

Learn Daml online

Simplifying Trade Processing with Daml (Part 2)

New E-book from IntellectEU and Digital Asset addresses the latest challenges around clearing and settlement

In collaboration with IntellectEU, we explored numerous ways distributed ledger technology (DLT) and Daml can transform securities services, including clearing and settlement, KYC processes, corporate actions and more. Through a series of blogs we are sharing our analysis, highlighting what DLT and Daml can do for you today. This two-part blog series explores the challenges of clearing and settlement.

In the previous post we discussed how you can simplify trade processing with Daml; now we’ll focus on optimization.

Another area in which Daml and DLT can optimize trade processing is simultaneous settlement: a workflow that atomically settles inbound and outbound cash and securities movements. This eliminates the need for central clearing parties to extend credit or incur additional risk. Daml workflows can be structured to check whether each participant in the transaction has sufficient assets to meet its obligations, and to process all component transactions simultaneously or not at all.
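The all-or-nothing settlement described here can be sketched in Daml. The templates below are a minimal illustration under assumed names, not the actual model from the e-book:

```daml
module Dvp where

-- Minimal cash and security holdings, each transferable by its owner.
template Cash
  with
    issuer : Party
    owner  : Party
    amount : Decimal
  where
    signatory issuer
    observer owner

    choice TransferCash : ContractId Cash
      with newOwner : Party
      controller owner
      do create this with owner = newOwner

template Security
  with
    registrar : Party
    owner     : Party
    isin      : Text
  where
    signatory registrar
    observer owner

    choice TransferSecurity : ContractId Security
      with newOwner : Party
      controller owner
      do create this with owner = newOwner

-- Both legs settle inside one choice, i.e. one ledger transaction:
-- either both transfers succeed or the whole transaction aborts.
template DvpInstruction
  with
    buyer       : Party
    seller      : Party
    cashCid     : ContractId Cash
    securityCid : ContractId Security
  where
    signatory buyer, seller

    choice Settle : (ContractId Cash, ContractId Security)
      controller seller
      do newCash <- exercise cashCid TransferCash with newOwner = seller
         newSec  <- exercise securityCid TransferSecurity with newOwner = buyer
         pure (newCash, newSec)
```

If either leg fails (say the cash contract was already spent), the exercise of Settle aborts and neither asset moves, which is exactly the atomicity that removes the need for a credit-extending intermediary.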

The use of Daml on distributed ledger technology not only simplifies the clearing and settlement trade processes, but it also enables capital market participants to envision a solution where beneficial ownership information is maintained without cumbersome reconciliation processes.

Daml on DLT solutions open additional synergies by design

Trade processing workflows (simultaneous settlement and committed settlement) maintain the legal and beneficial ownership data in real time at settlement. In a market that builds such a beneficial ownership ledger to track ownership titles, issuers would also be able to more easily connect with and manage relationships with their investors. Through the use of Daml, this ledger would be able to interoperate with the KYC ledger to sync the data living on that ledger.

If, for a given market, the central securities depository, the clearing house, and the trading venue integrate on the same DLT infrastructure, different trade processing workflows can easily be designed and implemented to better fit the needs of various market participants. For example, a clearing house is no longer required if the traded assets are locked at order-matching time, before the trade is confirmed. Trade settlement costs are reduced by the clearing house fee, settlement risk is eliminated, and the trade can be settled on the trade date itself.

At the end of the day, the immutable nature of DLT, along with its ability to create a secure yet logically shared environment in open or private networks, creates a system for financial transactions where multiple parties can engage simultaneously in processes, rather than waiting for each individual party’s centralized system to update. This eliminates duplicative operations while shortening the time between parties along the holding chain (a chain of “custody” service providers).

To learn more about the impact and opportunities of DLT and Daml in securities services – from account onboarding to post-trade settlement and treasury services – download a free copy of “Digitally Transforming Securities Services” E-book, co-authored with IntellectEU.

Download the eBook

Release of Daml SDK 1.7.0

Daml SDK 1.7.0 was released on November 11th, 2020. You can install it using:

daml install latest

If you’re using the Daml Triggers Early Access, you’ll need to migrate to the new API. No other changes are mandatory for this release, but other impacts and migrations are detailed below.

Interested in what’s happening in the Daml community and its ecosystem? We’ve got a jam-packed summary for you in our latest community update.


  • Daml Connect has been introduced as a logical grouping for all those components a developer needs to connect to a Daml network.
  • JSON API, Daml Script, and the JavaScript Client Libraries now support reading as multiple parties.
  • daml start can now perform code generation, and has a quick-reload feature for fast iterative app development
  • Support for multi-key/query streaming queries in React Hooks
    • New query functions accepting multiple keys supersede the old single-key/query versions, which are now deprecated.
  • Daml Triggers (Early Access) have an overhauled API that is more aligned with Daml Script
    • This change requires a migration detailed below.

Impact and Migration

  • The compiler now emits warnings if you use advanced, undocumented language features that are not supported by data-dependencies. This only affects you if you use language extensions that are not part of the documented language, or if you import DA.Generics. If you receive such warnings, we recommend moving off these features. If you are getting unexpected warnings, or don’t know how to migrate, please get in touch with us via the public forum or (for registered users) via the support channel.
  • If you are using stream queries in the React Hooks, we recommend you migrate to the new multi-key versions. The migration is detailed below. The old functions are now deprecated, meaning they may be removed with a major release 12 months from now.
  • If you are using Daml Triggers, you’ll need to migrate them to the new API.

What’s New

Clearer Segmentation of the Daml stack with Daml Connect


With Release 1.6, work started to clarify the Daml stack in many respects: what it consists of, what state different pieces are in, and what stability and compatibility guarantees users can expect. However, the ecosystem overview page highlighted that there was a lack of good terminology for the sets of components users needed to either establish a Daml network or to connect to one. Previously the label “SDK” was sometimes used to refer to the latter, but that was inaccurate, since the SDK only contains components intended for use at development time. With release 1.7 this has been tidied up with clearer terminology for the different layers of the stack. Daml Connect describes all the pieces needed to connect to a Daml network:

  • Runtime Components like the JSON API
  • Libraries like the Java Ledger API bindings and JavaScript client library
  • Generated Code from the daml codegen commands
  • Developer tools like the IDE, collectively called the SDK

Specific Changes

The ecosystem overview docs page has been improved, and naming has been adjusted throughout docs and artifacts to incorporate these changes.

Impact and Migration

This change only affects documentation and help text. There are no API changes.

Multi-Party Read in JSON API, Script, and JS Client Libs


The Daml Ledger API has supported multi-party transaction subscriptions for a long time, but until now, runtime components like the JSON API, Daml Script, and REPL were only able to query as a single party. As a first step towards letting Daml share data in more flexible ways than the observer concept allows, the runtime components can now query contracts on behalf of multiple parties at the same time. Since read access via the JSON API is controlled using JWTs, any client of the JSON API, including the JavaScript client libraries, can benefit from these improvements, provided the token issuer is able to issue appropriate tokens.

Specific Changes

  • The JSON API now accepts tokens with multiple parties specified in the readAs field. In queries, contracts visible to any of the listed parties are returned. In command submissions, the readAs field is ignored; actAs is still limited to a single party.
    • Before this change, the JSON API would accept tokens with only a readAs party, and no actAs party set for command submission endpoints. When working with unauthenticated ledgers, this could lead to successful command submission without an actAs party set. This bug has been fixed as part of this change. If you were previously exploiting this bug during development, please move the submitting party from readAs to actAs.
  • In Daml Script, the query, queryContractId, and queryContractKey functions now accept multiple parties via the IsParties abstraction used by signatory, observer, and other fields. Specifically, this means they accept single parties, lists of parties, and sets of parties interchangeably. They return all contracts for which any of the given parties is a stakeholder.
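A short Daml Script sketch of the new behavior, using an illustrative template and party names:

```daml
module MultiPartyQuery where

import Daml.Script

-- Illustrative template; any template with stakeholders works.
template Asset
  with
    issuer : Party
    owner  : Party
  where
    signatory issuer
    observer owner

-- One call returns every Asset for which alice or bob is a stakeholder.
-- Single parties, lists, and sets are accepted interchangeably.
allAssets : Party -> Party -> Script [(ContractId Asset, Asset)]
allAssets alice bob = query @Asset [alice, bob]
```

Before 1.7, the same result required one query per party plus client-side merging and de-duplication.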

Impact and Migration

As mentioned above, a bug was fixed in which a command submission could be successful via the JSON API if only the readAs field in the JWT was set, and the underlying Ledger API was running in unauthenticated mode. This is no longer possible. The submitting party has to be set in the actAs field.

Better (re)build with daml start


daml start is the primary command for quick testing of a Daml application. It compiles the Daml code, starts up a Sandbox and a Navigator, and runs initialization scripts. However, it didn’t run code generation, nor did it have any functionality for rebuilding and redeploying after a change, requiring developers to manually shut it down, re-generate code, and restart. With this release, that manual and error-prone process has been replaced with automatic refreshes, including codegen.

Note that if you use Navigator, you might still want to do a full restart to clear old contracts and packages.

Specific Changes

You can now press ‘r’ (or ‘r’ + ‘Enter’ on Windows) in the terminal where daml start is running to rebuild the DAR package and generate JavaScript/Java/Scala bindings and upload the new package to the Sandbox. This frees the user from killing and restarting daml start.

daml start now runs all the code generators specified in the daml.yaml project configuration file under the codegen stanza. This frees the user from having to run them manually on every change to the Daml model.

For example, instead of running the JavaScript codegen manually as part of create-daml-app, it is now driven by the daml.yaml.

Before: Manual run of

daml codegen js .daml/dist/create-daml-app-0.1.0.dar -o daml.js

After: stanza in daml.yaml

Impact and Migration

This is a purely additive change.

API Alignment of Triggers (Early Access)


The Daml API of Triggers has been brought further in line with that of Daml Script and REPL, while also allowing for a more efficient implementation of high-level triggers under the hood. Before, all the information consumable by the trigger rule, initialization, and update functions was passed in as function arguments. With this iteration, state, active contracts, and time are instead accessed through actions (using <- notation).

Specific Changes

  • Trigger updateState, rule, and initialize functions no longer accept an ACS argument; instead, they must use the query action to query the active contract set, similar to the same function in Daml Script. See issue #7632.
  • Instead of taking a state argument, the TriggerA action now has functions to get, put and modify user defined state.
  • The Time argument was removed from the trigger rule function; instead, it can be fetched within the TriggerA do block by getTime, as with Update and Script.
  • The “commands in flight” argument of type Map CommandId [Command] has been removed from the high-level trigger rule function; instead, the current commands-in-flight can be retrieved with the new getCommandsInFlight function. See issue #7600.
  • initialize is now a TriggerInitializeA action, which is able to query the ledger using query and related functions, and returns the initial state.
  • The updateState function now takes just a message and returns a TriggerStateA which allows querying the ledger using query and related functions, and modifying state using the get, put, and modify functions. See issue #7621.
  • Two new functions are available for querying the ledger: queryContractId, for looking up a contract by ID, and queryContractKey for looking one up by key. See issue #7726.

Impact and Migration

1. To migrate an existing initialize function acs -> expr, write a do block with query taking the place of any getContracts occurrences.

For triggers without custom state (i.e. s == ()), pure () is the right expression for initialize.

2. To migrate an existing updateState function, handle ACS access as in step 1. In addition, replace the state argument with calls to get and modify.

For triggers without custom state (i.e. s == ()), \msg -> pure () is the right expression for updateState.

3. To migrate an existing rule function, handle ACS access as in step 1, replace the state argument with a call to get, and replace time and commands-in-flight access with the new accessor functions.
time, commandsInFlight, and state are each optional; if they are not used, the lines that “get” them may be removed.
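A combined sketch of these three steps, assuming a trivial template T and an Int-valued user state (all names are illustrative):

```daml
module MigratedTrigger where

import Daml.Trigger

-- Illustrative template; any template works.
template T
  with
    p : Party
  where
    signatory p

-- After (1.7): the rule fetches everything inside the TriggerA do block
-- instead of receiving the ACS, time, commands-in-flight and state as
-- function arguments.
rule : Party -> TriggerA Int ()
rule _party = do
  contracts <- query @T              -- step 1: query replaces the ACS argument
  _now      <- getTime               -- replaces the old Time argument
  _inFlight <- getCommandsInFlight   -- replaces the Map CommandId [Command] argument
  count     <- get                   -- steps 2 and 3: read user-defined state
  put (count + length contracts)     -- update it via put/modify
  pure ()
```

For a stateless trigger this collapses further: initialize = pure (), updateState = \_msg -> pure (), and the get/put lines disappear from the rule.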

React Hooks Multi-query streams


Since version 1.6 the JSON API and JavaScript client libraries have had support for stream requests with multiple keys or queries. As of 1.7, this functionality is also available in the React Hooks, allowing for more efficient binding of ledger data to UI elements.

Specific Changes

  • The React bindings now expose the recent addition of multi-key and multi-query streams, with the new useStreamQueries and useStreamFetchByKeys hooks in @daml/react mapping to streamQueries and streamFetchByKeys in @daml/ledger, respectively.
  • The singular versions are marked as deprecated as they have become redundant.

Impact and Migration

The change is fully backwards compatible as the old singular functions are merely deprecated, not removed. The below describes the upgrade path for the new multi-query versions.

Upgrading useStreamQuery is straightforward: the query factory remains optional, but if specified it should return an array of queries instead of a single query. The array may be empty, which will return all contracts for that template, similar to not passing in a query factory. The return values of useStreamQuery and useStreamQueries have the same type.

Upgrading useStreamFetchByKey is only slightly more involved, as the return type of useStreamFetchByKeys is different: it is called FetchByKeysResult instead of the existing FetchResult. FetchByKeysResult differs from FetchResult in that it contains a contracts field with an array of contracts instead of a singular contract field. It differs from QueryResult in that each element of the returned array can also be null if there is no corresponding active contract. Call sites can be updated as follows:
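As a hedged sketch, with simplified stand-ins for the library’s actual result types, a single-key call site can adapt to the plural shape like this:

```typescript
// Simplified, illustrative stand-ins for the @daml/react result types.
type FetchResult<T> = { loading: boolean; contract: T | null };
type FetchByKeysResult<T> = { loading: boolean; contracts: (T | null)[] };

// Before (deprecated):
//   const { loading, contract } = useStreamFetchByKey(Tmpl, () => key, [key]);
// After: pass a one-element key array and take the head of `contracts`:
//   const { loading, contracts } = useStreamFetchByKeys(Tmpl, () => [key], [key]);
//   const contract = contracts[0] ?? null;

// Helper capturing that adaptation for a single-key call site: the first
// element is the contract for the key, or null if none is active.
function firstContract<T>(r: FetchByKeysResult<T>): FetchResult<T> {
  return { loading: r.loading, contract: r.contracts[0] ?? null };
}
```
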

Minor Improvements

  • The Ledger API is now versioned independently instead of inheriting its version from the release of the Integration Kit components. This allows for cleaner cross-compatibility guarantees, as described in the compatibility documentation. Every participant node from this release onwards advertises its Ledger API version via the version service.
  • The daml ledger list-parties command can now query the ledger for known parties via the HTTP JSON API instead of the gRPC API. This requires setting the --json-api flag.
  • The JSON API’s JDBC url can now also be specified via an environment variable set with the CLI flag --query-store-jdbc-config-env.
  • The JSON API has gained /livez and /readyz health check endpoints for easier integration with k8s and other schedulers.
  • The JavaScript client library’s stream queries useStreamQueries and useStreamFetchByKeys, as well as their (deprecated) singular counterparts, now accept an optional closeHandler callback, which will be called if the underlying WebSocket connection is closed due to an error or because close was called.

    The same functions previously logged every close event as an error. However, there are legitimate cases for the connection to be closed (e.g. the component has been unmounted). The default behaviour will now be to log only unexpected disconnects and be silent on deliberate connection closes. This can be customized using the closeHandler.
  • The JavaScript client library’s reconnectThreshold can now be configured through the LedgerProps of the React wrapper.
  • The Standard Library’s DA.Date type now has Enum and Bounded instances, allowing the use of ranges. E.g.:
    -- Create a list of all Sundays in 2020
    [date 2020 Jan 5, date 2020 Jan 12 .. date 2020 Dec 31]
  • Performance improvements in the Daml Engine when using typeclasses.
  • From this release, the digitalasset/daml-sdk docker image on DockerHub will be signed.

Bug Fixes

  • The Daml compiler shows the correct column numbers in error locations produced by command line tools like daml build.
  • The Daml Engine now properly enforces that the list of maintainers is non-empty. Before this version, it was possible on some Daml Ledgers to create contracts with empty maintainer lists. Such contracts can still be used when referenced by ContractId, but none of the key operations will work, and no new contracts with an empty maintainer list can be created.
  • A bug in the JavaScript Client Libraries was fixed, which caused the useStreamFetchByKeys hook to sometimes report a “ready” state (i.e. loading: false) even though the underlying connection had not yet been fully established.
  • A bug in Daml REPL was fixed, which caused errors about ambiguous names, even if names were fully qualified.
  • The compiler now writes the proper SDK version in the DAR manifest for snapshot releases instead of a sanitized version.
  • daml ledger upload-dar now exits with a non-zero exit code on failures and no longer prints “DAR upload succeeded” in error cases. This was a regression.
  • Contract Key lookup mismatches are now consistently reported as Inconsistent rather than Disputed. Sandbox-classic, in particular, previously reported Disputed, which implies malicious intent or malfunctioning of the submitter.

Daml Driver for PostgreSQL

  • New metrics tracking the pending submissions and completions on the CommandService have been added. Check out the monitoring section in the documentation for more details. The new metrics are
    • daml.commands.<party_name>.input_buffer_size
    • daml.commands.<party_name>.input_buffer_saturation
    • daml.commands.<party_name>.max_in_flight_size
    • daml.commands.<party_name>.max_in_flight_saturation
  • Add new metrics for measuring the number of concurrent command executions. The metrics are:
    • daml.commands.submissions_running
    • daml.execution.total_running
    • daml.execution.engine_running

Integration Kit

  • The Ledger API test tool’s --ledger-clock-granularity option now takes a time duration (e.g. “10s” or “5m”), rather than an integer number of milliseconds.
  • The Ledger API test tool now defaults to a ledger clock granularity of 1 second, not 10s, in line with most ledger implementations.
  • The Ledger API test tool has a new command line argument, --skip-dar-upload. See docs.
  • The Integration Kit’s ResourceOwner type is now parameterized by a Context, which is filled in by the corresponding Context class in the ledger-resources dependency. This allows us to pass extra information through resource acquisition.
  • The kvutils have a new metric daml.kvutils.committer.package_upload.validate_timer to track package validation time.
  • The Ledger API Server’s previously hardcoded timeout for party allocation and package uploads can now be configured via ParticipantConfig and the default value is now set to 2 minutes. See issue #6880.