Identifying the Right Technology for Your Multiparty Business Processes

Blockchain Technology Partners offers infrastructure choices for distributed, multiparty workflows. In this guest blog, Csilla Zsigri, VP of Marketing and Strategy at Blockchain Technology Partners, explains the suitability of the various technology options.

Identifying the right technology for digitizing processes that involve multiple parties, within and across organizations, has challenged businesses for decades. Information technology and operations executives are looking for the right technology to use for business-critical applications involving both trusted and untrusted parties.

To help companies select a suitable technology for their multiparty workflows and distributed applications, we have created a simple decision tree, with three key questions to consider:

  1. Do you know and trust the participants in your business network?
  2. Does your business network have or need a trusted operator? 
  3. Do you need an immutable audit trail for your business process?

Let’s go through these questions, one by one.

Do you know and trust the participants in your business network?

If the answer to this question is ‘No,’ and your company seeks to interact and transact more efficiently with multiple untrusted organizations by eliminating friction, a permissionless blockchain implementation will enable you and the other members of your business network to share information and collaborate securely, without a central authority and without any individual party being able to unilaterally enforce decisions, whether accidentally or in bad faith.

Blockchain Technology Partners’ infrastructure management offering supports Hyperledger Besu, a core ledger protocol that combines both permissionless and permissioned features. Hyperledger Besu is essentially an Ethereum client that supports various consensus mechanisms, and its permissioning schemes were designed to be used in consortium environments. 

If the answer to this question is ‘Yes,’ then you should move on to the second question in the decision tree.

Does your business network have or need a trusted operator?

If the answer to this question is ‘No,’ you may consider the implementation of a permissioned blockchain network for running your multiparty business process. Hyperledger Sawtooth, in particular, offers a flexible and modular architecture, and supports various consensus mechanisms and smart-contract languages. Blockchain Technology Partners has released a freely available, enterprise-grade distribution of Hyperledger Sawtooth – dubbed BTP Paralos – ideal for production and business-critical environments. 

If the answer to this question is ‘Yes,’ then you should move along to the third and final question in the decision tree. 

Do you need an immutable audit trail for your business process?

If the answer to this question is ‘Yes,’ you may consider using a blockchain-powered distributed database technology that provides data integrity alongside privacy. Amazon QLDB, in particular, was designed to support transaction immutability offered by blockchain technology, yet it provides a centralized model to ensure data privacy.

For use cases where immutability is not required, but the ability to automate multiparty workflows is desired, a relational database such as Amazon Aurora coupled with a smart-contract capability such as Daml at the application layer, may be a suitable choice. 

When we are done with the decision tree…what then?

Overcoming shortages in IT skills and resources is one of the key challenges associated with digital transformation overall, and distributed ledger technology is no exception. With Sextant, Blockchain Technology Partners frees organizations from the infrastructure pain of setting up and running a blockchain network, ultimately enabling them to build distributed applications and multiparty systems with ease. Distributed ledgers currently supported by Sextant include Hyperledger Besu and Hyperledger Sawtooth.

BTP’s Sextant also supports Daml, an open-source smart-contract programming language created by Digital Asset. Smart contracts have become known to the world as transaction protocols running on a blockchain or distributed ledger, embodying the self-enforcing business logic of a multiparty application or business process. Daml was purpose-built for coding complex multiparty business processes, and designed to work with different blockchains, distributed ledgers, as well as databases.

Blockchain Technology Partners and Digital Asset have teamed up to launch ‘Sextant for Daml,’ a joint, commercial offering that enables organizations to build and deploy smart contracts with little effort and no special expertise, on a variety of persistence layers. Sextant for Daml supports Hyperledger Besu, Hyperledger Sawtooth, as well as Amazon QLDB, Amazon Aurora and PostgreSQL.

For more information on Sextant, click here.

For more information on Daml, click here.

Release of Daml Connect 1.12.0

Daml Connect 1.12.0 was released on Wednesday, April 14th. You can install it using:

daml install latest

Want to know what’s happening more broadly among Daml developers? Check out the latest Daml Developer Monthly.


  • Daml projects can now depend on packages deployed to the target ledger.
  • The Daml StdLib contains a new module DA.List.BuiltinOrder, which provides higher-performance versions of several functions in DA.List by using a built-in order rather than user-defined ordering. We observed up to five-fold speedups in benchmarks.
  • The Daml assistant now supports the Daml Connect Enterprise Edition (DCEE), which comes with additional features:
    • The DCEE has an Alpha feature to validate and store signatures on command submissions. This provides non-repudiation in cases where the Ledger API is exposed as a service.
    • The DCEE contains a Profiler in Alpha which allows you to extract performance information for Daml interpretation. Profiler output is easily visualized in tools like speedscope.

Impact and Migration

Some of the minor changes and bugfixes may require small actions to migrate:

  • Daml Triggers now use DA.Map instead of DA.Next.Map in their API. 
  • If you were directly targeting .dalf packages using data-dependencies, you now need to add --package flags to make the package contents available.
  • The Scala bindings now depend on Scala 2.12.13 instead of 2.12.12.
  • The compiler now produces Daml-LF 1.12 by default. If you want to stick to the old default, Daml-LF 1.11, please add the build-option --target=1.11.
  • Some gRPC status codes on rejected commands were changed from INVALID_ARGUMENT to ABORTED.

What’s New

Remote Dependencies


The data-dependencies stanza in daml.yaml project files lists the binary dependencies which the project relies on. Until now, these dependencies were file-based, requiring developers to specify a path to .dar files. Quite often the dependencies are already running on a ledger, in which case this required the developer to first get hold of a .dar file from the original source or by calling daml ledger fetch-dar. This new feature removes that extra step, allowing a Daml project to depend directly on a package already running on a ledger.

Specific Changes

Package names and versions, as well as package IDs, are allowed in the data-dependencies list of daml.yaml. These packages are fetched from the project ledger. The auto-generated daml.lock file keeps track of the resolution from package name/version to package ID and should be checked into the project’s version control. For example, to depend on the package foo-1.0.0 running on a ledger on localhost:6865, your daml.yaml should contain:

ledger:
  host: localhost
  port: 6865
data-dependencies:
- daml-prim
- daml-stdlib
- foo:1.0.0

Impact and Migration

This is a purely additive change.

High-Performance List Operations


A number of functions in the standard library for lists, DA.List, rely on the elements of the list being orderable. They use orderings defined in Daml through Ord instances and thus need Daml interpretation for every comparison. Since Daml-LF 1.11, Daml has had a canonical built-in ordering on all values, which is considerably more performant than comparison using Daml-defined orderings. This allows higher-performance versions of all ordering-based algorithms, which are contained in the new module DA.List.BuiltinOrder.

As long as all orderings are derived automatically using deriving Ord, the results of the old and new implementations will agree, but the new version is substantially faster. If any values have user-defined Ord instances, these will be ignored by the new versions, hence the name BuiltinOrder.
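To illustrate the difference, a small Daml Script along the following lines could compare the two modules (a hypothetical sketch; the module and script names are illustrative):

```daml
module ListPerf where

import Daml.Script
import DA.List (sort)
import qualified DA.List.BuiltinOrder as Builtin

demo : Script ()
demo = script do
  let xs : [Int] = [3, 1, 2, 1]
  -- Int uses the derived (builtin-compatible) ordering, so both
  -- implementations agree; the builtin variant skips per-comparison
  -- Daml interpretation and is therefore faster on large lists.
  assert (sort xs == Builtin.sort xs)
```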

Specific Changes

The Daml Standard Library has a new module DA.List.BuiltinOrder containing more efficient implementations of the sort*, unique*, and dedup* functions based on the builtin order. We saw up to five-fold speedups in our benchmarks.

Impact and Migration

This is a purely additive change.

If you don’t care about the actual ordering of values, but only that values are orderable for algorithmic use, we recommend switching to the new versions of the algorithms for performance reasons. 

Daml Assistant support for the Daml Connect Enterprise Edition


Daml Connect Enterprise Edition (DCEE) is a commercial distribution of Daml Connect containing additional features particularly useful in large or complex projects. DCEE components have been distributed via Artifactory for several releases now. With this release, the Daml Assistant also becomes aware of the Enterprise Edition and is able to manage additional developer tools and components not available in the Community Edition.

This release also contains two new features in Early Access for the DCEE, described in more detail below.

Specific Changes

The assistant now supports an artifactory-api-key field in daml-config.yaml. If you have a DCEE license, you can specify this key and the assistant will automatically fetch the Enterprise Edition, which provides additional functionality. See the installation documentation for more detail.
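For example, the field might look like this in your daml-config.yaml (the key value is a placeholder; the file lives in your Daml home directory, typically ~/.daml):

```yaml
# daml-config.yaml
artifactory-api-key: <your-artifactory-api-key>
```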

Impact and Migration

If you are an Enterprise Edition user, we recommend adding the artifactory-api-key field to your daml-config.yaml to benefit from the new features. If you already have the Community Edition installed, run daml install --force VERSION after setting the API key to replace it with the Enterprise Edition.

[Enterprise Edition] Daml Profiler in Alpha


For large and complex Daml solutions, the speed of Daml interpretation can have a significant impact on overall system performance. The Daml Profiler gives developers of Daml applications a tool to extract timing information for real-world workflows in the well-established Speedscope file format and to analyse performance characteristics using standard tools.

Specific changes

The Daml Sandbox distributed as part of the Enterprise Edition of Daml Connect has a new command line flag --profile-dir. If set, the timings of the interpretation of every submitted command will be stored in a JSON file in the provided directory. These profiles are compatible with standard analysis tools like Speedscope.
Please refer to the documentation for a complete usage example.

Impact and Migration

This is a purely additive change.

[Enterprise Edition] Non-Repudiation in Alpha


There are many scenarios in which the Daml Ledger API is offered as a service, and Daml fully supports such deployment topologies through its multi-tenancy participant node design. With this new feature, the operator of a participant node that offers the Ledger API as a service to third parties gains additional security. The non-repudiation middleware captures, validates, and stores cryptographic signatures on command submissions, providing the operator with a strong audit trail to evidence that Ledger API clients did indeed submit commands matching the recorded transactions.

Specific Changes

Enterprise Edition users have access to a new runtime component on Artifactory. The component proxies the Daml Ledger API and validates and stores cryptographic signatures on any calls to the command submission services.

Please refer to the documentation for more details.

Impact and Migration

This is a purely additive change.

Minor Improvements

  • When running Daml Script on the command line you will now see a Daml stack trace on failures to interact with the ledger, which makes it significantly easier to track down which call fails. By default, you will only get the callsite of functions like submit. To extend the stack trace, add HasCallStack constraints to functions and those will also be included.
  • The Daml Assistant now also allows the sandbox port to be configured via --sandbox-option=--port=12345 instead of --sandbox-port. Other tools like Navigator, the JSON API and Daml Script will pick up the modified port automatically.
  • The trigger library now uses DA.Map instead of the deprecated DA.Next.Map if the targeted Daml-LF version supports it. This is a breaking change: Code that interfaced with the triggers library using DA.Next.Map, e.g. with Daml.Trigger.getCommandsInFlight or Daml.Trigger.Assert.testRule, will need to be changed to use DA.Map instead.
  • The Scala 2.12 version of the Scala Ledger API Bindings now depends on Scala 2.12.13 instead of Scala 2.12.12.
  • The compiler produces Daml-LF 1.12 by default. LF 1.12 significantly reduces transaction size.


  • gRPC status codes for inconsistency rejections and Daml-LF errors (ContractNotFound, ReplayMismatch) were brought in line with their intended meaning by changing them from INVALID_ARGUMENT to ABORTED.
  • DALFs in data-dependencies that are imported directly now require corresponding --package flags to make them visible. If you specify DALFs instead of DARs you also have to list all transitive dependencies, but typically you only want to expose your direct dependencies. Previously this was impossible. With this change, DALFs that are data-dependencies are no longer treated as main DALFs so you have more control over which packages get exposed.
  • The Scala Codegen now supports the generic Map type added in Daml-LF 1.11 properly. Previously there were some bugs around the variance of the key type parameter which resulted in Scala compile errors in some cases. See #8879.
  • A bug in the Daml compiler was fixed where passing --ghc-option=-Werror also produced errors for warnings produced by -Wmissing-signatures even if the user did not explicitly enable this.

Integration Kit

  • The implicit conversions between Raw types (and the conversions to and from ByteString) have been removed. You will need to explicitly convert if necessary. This should not be necessary for most use cases.
  • Added a test suite for race conditions to the ledger-api-test-tool

What’s Next

  • Despite the order-of-magnitude performance improvements we have already accomplished, performance continues to be one of our top priorities.
  • Improved exception handling in Daml is progressing well and expected to land in one of the next Daml-LF versions.
  • We are continuing work on several features for the Enterprise Edition of Daml Connect:
    • Working towards a stable release of the profiler for Daml. The profiler helps developers write highly performant Daml code.
    • Oracle DB support throughout the Daml Connect stack in addition to the current PostgreSQL support.
  • A new primitive data type in Daml that allows arbitrary precision arithmetic. This will make it much easier to perform accurate numeric tasks in Daml.

Daml Developer Monthly – April 2021

What’s New

The anniversary of Daml’s open sourcing (“Daml Day”) was just a few days ago so happy Daml Day to our programmers, users, engineers, and all the wonderful folk that make Daml great!

Every quarter we recognize users who went above and beyond in making Daml great, and we’ve just wrapped up our 4th community recognition ceremony; check out the winners and their contributions here!

Want to skip reading and instead listen to this and earlier editions? Check out Richard’s podcast here.


We’re still hiring for many positions including Engineering, Client Experience, Business Development, and Sales. If you have even an inkling that a job is for you, make sure to apply and share it with your network.

We spotted a new Daml programming job in the wild from Plexus.

What We’re Reading and Watching

Some of DA’s most successful women shared insights on the triumphs and challenges of their careers at our latest DA-Versity webinar.

Ed released two top-notch posts showing us how to manage certificate revocation and harden our PostgreSQL for Daml deployments.

Lakshmi Shastry, Principal Solutions Architect at Brillio walked us through how they are using Daml to optimize clinical trials.

Simon showed us that upgrading smart contracts need not be daunting. His latest blog post demonstrates the Accept-Then-Publish pattern as one solution to this problem.

I started giving “When Daml?” talks which are the spiritual follow-up to “Why Daml?” where I dive deeper into the pros and cons of using private smart contracts (as opposed to those running on public permissionless blockchains). Unfortunately we didn’t get a recording of this talk but keep an eye out for future “When Daml?” events.

As always, Richard’s weekly privacy and security news posts are jam-packed with interesting stories from the always interesting world of cyber security.

Community Feature and Bug Reports

György got us to add more useful error messages for duplicate record fields.

Quid Agis caught a bug in the CSS on our Daml Cheat Sheet (it was missing) so we added it back, hopefully it stays this time.

Amiracam spotted a deprecated method being used in our quickstart-java template.

Joel found that some of our intro templates weren’t compiling, and now they are 🙂

Daml Connect 1.12 is out!

  • Daml projects can now depend on packages deployed to the target ledger.
  • The Daml StdLib contains a new module DA.List.BuiltinOrder, which provides higher-performance versions of several functions in DA.List by using a built-in order rather than user-defined ordering. We observed up to five-fold speedups in benchmarks.
  • The Daml assistant now supports the Daml Connect Enterprise Edition (DCEE), which comes with additional features:
    • The DCEE has an Alpha feature to validate and store signatures on command submissions. This provides non-repudiation in cases where the Ledger API is exposed as a service.
    • The DCEE contains a Profiler in Alpha which allows you to extract performance information for Daml interpretation.

The full release notes and installation instructions for Daml Connect 1.12.0 RC can be found here.

What’s Next

  • Despite the order-of-magnitude performance improvements we have already accomplished, performance continues to be one of our top priorities.
  • Improved exception handling in Daml is progressing well and expected to land in one of the next Daml-LF versions.
  • We are continuing work on several features for the Enterprise Edition of Daml Connect:
    • Working towards a stable release of the profiler for Daml. The profiler helps developers write highly performant Daml code.
    • Oracle DB support throughout the Daml Connect stack in addition to the current PostgreSQL support.
  • A new primitive data type in Daml that allows arbitrary precision arithmetic. This will make it much easier to perform accurate numeric tasks in Daml.

Tackling Counterfeit Drugs in the Global Pharma Supply Chain

Global sales for counterfeit drugs cost businesses billions of dollars per year. Drug counterfeiting affects human lives, business reputation, and return on investment for the entire pharmaceutical industry. According to the World Health Organization, it is estimated that up to 30% of pharmaceutical products sold in emerging markets are counterfeit, and about 1 million people lose their lives each year due to counterfeit medication.  

Lakshmi Shastry, Blockchain Architect at Brillio, explains the impact this issue has on the pharmaceutical supply chain, including recent regulations and what’s needed to address these industry challenges head-on. Lakshmi is also participating in a webinar hosted by Digital Asset on March 31st to discuss this topic with Guido Rijo, Vice President, Supply Chain Digital Transformation at Johnson & Johnson. Click here for more information and to register. 

The challenges

The pharmaceutical supply chain faces several challenges: numerous stakeholders with complex demands, a lack of end-to-end process transparency, and time-sensitive, unorganized data. Data quality only becomes harder to maintain as internal and external stakeholders add and change data while drugs are researched, developed, and produced. It is difficult to monitor and validate the correctness of information while securing it against human error and missing documentation. There is also the concern of opaque transactions: to date, manufacturers, logistics companies, wholesalers, and pharmacists have little to no visibility into the authenticity and quality of a drug in transit.

Recent Regulations

The US Drug Supply Chain Security Act (DSCSA) and the international Global Traceability Standard for Healthcare (GTSH) are intended to protect consumers from counterfeit drugs. The DSCSA requires pharma supply chain vendors to collaborate through an electronic, interoperable system that verifies a returned product’s authenticity before resale and tracks and traces all prescription drugs.

One of the core requirements of DSCSA is that every prescription medication must have a unique product identifier which takes the form of a 2D barcode. These federally mandated barcodes serve as foundational building blocks for a common data model.

Following the requirements set forth by the DSCSA, the FDA openly called for pilots to address three main challenges of the legislation:

  1. Establishing a product identifier
  2. Quality barcodes
  3. Achieving interoperability

High Level Architectural Considerations

A digital system is needed to securely record transactions across this complex multi-party supply chain network. Desirable properties of such a system include:

  • Define the rights and obligations of each actor so that data is tamper-proof, near-real-time, and auditable.
  • Promote privacy and confidentiality of each party’s data.
  • Maintain visibility into a single version of truth.
  • Allow interoperability across the diverse technology stacks of each party engaged in the supply chain.

This single version of truth, with data integrity across parties, will eliminate double counting and reveal possible instances of counterfeiting, diversion, spoofing, or man-in-the-middle attacks. From a physical deployment perspective, such a solution can be realized in two ways.

  1. Hosted centrally by a trusted third party, with every other party connecting into this centralized system using APIs. While this model has been used for decades and is not new to us, it has a major drawback: every party has to maintain its own copy of the data (or a subset) for its operational processes. Over time, the cost of constant reconciliations with this centralized system takes us back to the very problem we set out to solve. Such a solution also runs into regulations around data domicile requirements, necessitating multiple data stores and the associated reconciliations.
  2. The alternative is to leverage emerging technology such as Daml smart contracts, which allow the creation of mutualized multi-party workflows by modeling each party’s rights and obligations. The resulting smart contracts can reside on various data storage mechanisms, ranging from a decentralized blockchain to traditional databases. Every party in the multi-party workflow can access the same real-time information even though the physical data may be in multiple locations to meet data domicile regulations. In this model, individual parties do not need to maintain separate offline copies for their operational processes.

Given the obvious benefits of the second approach above, we will outline it in more detail below. 

Business Process Overview

For those unfamiliar with Daml, it is an open-source, cross-platform smart contract runtime designed specifically for building distributed, multi-party workflows, and it allows applications to work across multiple platforms with ledger interoperability. The Daml integrations, APIs, and runtime feature built-in safeguards that protect data integrity and privacy, helping create an interoperable system in which multiple parties can connect, verify, and transfer pharmaceutical products with full trust in their authenticity, provenance, and financial transactions. Daml acts as a single reference smart-contract store: logical views of the same golden source governed by confidentiality and access controls, a shared multi-party business process, and complete privacy between applications. These benefits go beyond traditional technology, positioning users as the provider of choice in a given market.

Using Daml, all authorized stakeholders have transparency over the end-to-end drug delivery process. 

At each stage, a barcode is scanned and recorded onto a smart contract, which rests either on a blockchain or on a series of connected databases managed through Daml interoperability. These records create the audit trail for the drug’s journey. The system can track every delivery, with the delivery driver traced through biometric measures, and every checkpoint involving the drug can be measured and recorded through several tools. Sensors can also be incorporated into the supply chain, with temperature or humidity readings recorded onto the ledger. With a drug fully tracked from creation to patient, the supply chain becomes a holistic, accurate, audited, and secure process.

Smart contracts enable shared workflows, real-time information flow, and transparency with the ability to extend into the value chain, bridging silos within and between enterprises and reducing risks. End-to-end traceability and tracking enables trust among all involved parties for product integrity, item level fidelity, prompt recalls, incident investigations, dispute resolutions, and compliances across complex pharma supply chains. 

An initiating counterparty specifies contractual conditions, such as a required label with the federally mandated 2D barcode, that must be adhered to by all custodians in the supply chain. If at any point a device takes a temperature or humidity measurement that is out of range, the smart contract state will update to indicate that it is out of compliance, recording a transaction on the blockchain or database and triggering remediating events downstream.
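As a sketch of this pattern, a Daml template could tie readings to contractual thresholds. This is a hypothetical illustration, not the actual model described here; all template, party, and field names are invented:

```daml
module Shipment where

template Shipment
  with
    manufacturer : Party
    carrier      : Party
    lotBarcode   : Text      -- the federally mandated 2D barcode payload
    maxTempC     : Decimal   -- contractual temperature ceiling
    compliant    : Bool
  where
    signatory manufacturer
    observer carrier

    -- The carrier records an IoT reading; an out-of-range value
    -- flips the contract into a non-compliant state, creating an
    -- auditable transaction that downstream parties can react to.
    choice RecordTemperature : ContractId Shipment
      with readingC : Decimal
      controller carrier
      do create this with compliant = compliant && readingC <= maxTempC
```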

Pharma Supply Chain Process

Using Microsoft Azure as the Underlying Persistence layer

Microsoft Azure combined with Daml offers multiple deployment topologies.  

While the centralized record can be stored on one AzureDB using the Daml for AzureDB on PostgreSQL Driver to streamline initial change management, the architecture allows for a future model where each party hosts its own “node” (either an AzureDB or a blockchain node) to maintain additional physical privacy and compliance with data domicile requirements. The Daml smart contracts platform automatically manages the multi-party workflow across these individual “nodes.” This is the interoperability property of Daml, which creates a network of individual networks, each using its own physical storage and application technology stack.

The Microsoft Confidential Consortium Framework (CCF), a multi-party compute framework that leverages secure enclaves, can also be used to deploy the Daml multi-party workflows. CCF enables private and highly performant transactions that execute with throughput and latency similar to a centralized database. The deployment topology operates like a blockchain system without the data privacy concerns.

From integration to monitoring, network configurations, smart contract development, privacy and high-performance computing capabilities, Microsoft Azure powers a data-driven approach in digital supply chains for tracing, tracking, and verifying goods.   

Brillio’s three-step approach using Microsoft AzureDB and Daml is to:

  • Build a multi-party network using a rights and obligations model.
  • Simplify governance and management respecting each party’s technology choices.
  • Integrate the solution with existing systems and tools to reduce IT roadmap complexity.

Significant features of such a network include support for more flexible confidentiality models, control over which authorized parties’ transactions can be revealed, and improved energy efficiency through simplified proof-of-work and proof-of-stake algorithms.


The solution architecture we outlined in this blog strengthens provenance in the pharmaceutical drug supply chain, reduces counterfeits, and ensures compliance with regulations. Leveraging the Daml integration with Microsoft AzureDB and Internet of Things (IoT) technology increases quality compliance and visibility for temperature-sensitive biologics logistics. The solution provides a ‘chain of custody’ for pharma supply chain lifecycle management.

The solution can integrate with external consuming applications for the extension of services that currently require intermediaries. These external processes include insurance, legal, brokerage, settlement services, delivery scheduling, fleet management, freight forwarding, and connectivity with business partners.

Over time, the architecture outlined above allows for the creation of a roadmap where business transactions (e.g., automating payments and transferring ownership between parties) can be onboarded to the core multi-party rights and obligations model. As this model evolves, it can also address the complexity of the CAR T-cell therapy supply chain, where a patient is also part of the chain and both information privacy and supply chain integrity need to be maintained.

Brillio’s Daml for Azure model provides a foundation to create such digitized workflows shared across supply chain business stakeholders, authorities, agencies, and ultimately consumers. Consortium-based applications ingest signals from relevant user interfaces and communicate with consuming apps of businesses across the consortium.

Please connect with me or join our upcoming webinar on March 31st to see how a combination of Daml, IoT, and cloud can overcome current supply chain challenges and power the future of supply chains. 

Join the Webinar

Release of Daml Connect 1.11.1

Daml Connect 1.11.1 has been released to fix a few bugs, namely:

  • An issue with the JSON API’s websocket heartbeating, which was causing trouble for some users of this functionality.
  • Ledger Pruning was supposed to become a generally available feature with 1.11.0, but we failed to update the documentation to that effect and didn’t include it in the release notes. With 1.11.1, we are now retroactively declaring Ledger Pruning stable on Ledger API 1.10.
  • There was a bug in CommandService and client bindings, which could cause problems in situations with a lot of timeouts.

You can install 1.11.1 using:

daml install 1.11.1

Complete release notes for the 1.11.x release can be found here.

Implementation of a Daml Solution for Optimizing Clinical Trials

Conducting clinical trials is a time-consuming and costly process that brings unique and intrinsic challenges. In a recent blog post, Lakshmi Shastry, Principal Solutions Architect at Brillio, discussed the challenges of the clinical trial process and highlighted the different ways multi-party workflow technology, including key components of Brillio’s solution with Daml, can address them. In this second post of a two-part series, Lakshmi demonstrates how pharmaceutical companies and other clinical trial participants (i.e. Sponsors, CROs, Regulatory Agencies) can use Daml to collect and store patient/participant data and analyze results in a private yet collaborative manner. The end result is an improved and vastly more automated clinical trials process.

Implementing our solution

Let’s consider a sample clinical trial network with the following participants:

  • Pharmaceutical Company (Sponsor)
  • Regulator
  • CROs
  • Hospital
  • Patient/Participant

Using Daml, we onboard the various parties onto the network while respecting each party’s consent and data sharing rules.

How does our solution solve the problems?

Improved protocol adherence and violation reporting

Our approach digitizes the protocol through multi-party workflows based on Daml smart contracts, improving compliance, enabling better management of protocol deviations, and speeding up audits.

For instance, smart contracts ensure that no data can be entered for the patient/participant unless the patient/participant grants consent (through digital signatures). If the patient/participant subsequently withdraws consent, no updates to the patient/participant data can be made unless reauthorized by the patient/participant.
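This consent gate can be expressed with Daml’s propose-accept pattern. The sketch below is illustrative only; the template, choice, and party names are assumptions, not Brillio’s actual code:

```daml
module Consent where

-- A provider proposes; only the patient can grant consent.
template ConsentRequest
  with
    provider : Party
    patient  : Party
  where
    signatory provider
    observer patient

    choice GrantConsent : ContractId Consent
      controller patient
      do create Consent with
           provider = provider
           patient = patient

-- Active consent; data-entry templates can require a reference to it.
template Consent
  with
    provider : Party
    patient  : Party
  where
    signatory provider, patient

    -- Withdrawing consent archives the contract, blocking further
    -- updates until the patient reauthorizes.
    choice WithdrawConsent : ()
      controller patient
      do pure ()
```

Because both parties are signatories of Consent, neither can alter or bypass it unilaterally.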

Improved data sharing and regulator collaboration

Regulators can run their own node and have real-time access to, and visibility into, the entire clinical trial data generation and sharing process. As regulators maintain their own copy of the distributed ledger, they do not have to rely on pharmaceutical companies or the healthcare provider for data, which improves flexibility and can reduce the compliance burden for pharmaceutical companies.

Daml has the ability to add parties as signatories or observers to provide data visibility. Please take a look at the below code snippet as an example.
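A minimal sketch of this pattern follows; the template and field names are illustrative, not Brillio’s actual code:

```daml
module TrialRecord where

template TrialData
  with
    provider    : Party
    participant : Party
    cro         : Party
    coordinator : Party
    results     : [Text]
  where
    -- Signatories authorize the contract and control its choices.
    signatory provider, participant
    -- Observers see the data but cannot act on it.
    observer cro, coordinator

    choice AddResult : ContractId TrialData
      with result : Text
      controller provider
      do create this with results = result :: results
```

Choices controlled by the participant would follow the same shape.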

In the above example, the provider and participant have the authority to exercise the choices. The CRO and coordinator can see the data but cannot take any action on it; observers get visibility only.

Patient centricity

Since permissioned blockchains enable the secure transfer of data, sets of contracts can be created to improve patient access and centricity. For instance, smart contract rules can ensure that patients receive immediate ownership of their data at the termination of the clinical trial.

In Daml, every participant visit is recorded on the ledger, and the provider can log symptoms and doses as part of the visit. See the code snippet below:
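A minimal sketch of such a visit record (the template and field names are illustrative assumptions):

```daml
module Visit where

template Visit
  with
    provider    : Party
    participant : Party
    visitDate   : Date
    symptoms    : [Text]
    doses       : [Text]
  where
    signatory provider, participant

    -- The provider logs observations made during the visit.
    choice LogSymptom : ContractId Visit
      with symptom : Text
      controller provider
      do create this with symptoms = symptom :: symptoms
```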


We believe such a solution that preserves privacy while allowing for seamless collaboration has the ability to transform clinical data collection, data sharing, and operations. Clinical Trial Processes based on blockchain technology can streamline data transfer, ensure real time data access across clinical trial participants, reduce data entry errors, and simplify the clinical trial audit and compliance process.

In our platform, Daml and Microsoft Azure work together to improve the clinical trial process. Beyond clinical trials, this open solution architecture offers new areas of collaboration and data sharing across health care systems. We can now provide patients ownership of their data, as well as create economic and healthcare incentives for data sharing across participants while automatically enforcing regulatory requirements, thus providing enormous savings and rapid innovation at the same time.

For more information, please download our latest eBook that discusses the challenges and solutions in greater detail. I’ll also be speaking with Guido Rijo, Vice President, Supply Chain Digital Transformation, Johnson & Johnson on a panel hosted by Digital Asset on March 31st at 11:00 AM ET. We’ll be discussing supply chain issues in Pharma. Click here to register.

Try Daml

Components of a Successful Blockchain Project Part II: Fabric, Daml, and Catalyst

In our last blog post, Components of a Successful Blockchain Project Part 1, we discussed how Daml, Corda, and Catalyst provide greater efficiency and technological innovation to financial services organizations. To continue the discussion, Robert van Donge from IntellectEU explains how Hyperledger Fabric, Daml, and Catalyst offer another route for businesses seeking a solution for complex multiparty workflows while maintaining integrity and privacy. 

Leveraging Hyperledger Fabric’s Modular Architecture for Maximum Privacy and Flexibility 

Fabric is a permissioned blockchain platform well-suited for a variety of use cases in which transaction data needs to stay private between network participants. It was built on a modular architecture where distributed logic processing and agreement, transaction ordering, and transaction validation are processed in three phases with data shared either directly on-chain or privately through private data collections. 

Fabric’s modular architecture also provides benefits for network designers as they can plug in various components, such as identity management solutions, rather than build new ones. This modularity also enables Fabric’s pluggable consensus model and creates a network of networks for its users. Fabric is most notable for its open source framework that provides the flexibility for any solution model that supports distributed applications implemented as chaincode. It also has the ability to store both structured and unstructured data at the storage level. Fabric offers low latency compared to public blockchains and many other benefits ideal for a variety of use cases, hence the wide adoption and recognition among enterprise users. 

IntellectEU is among the founding members of the Linux Foundation’s Hyperledger Project and an active member of the Hyperledger community: a subset of IntellectEU developers contribute code to the Hyperledger Fabric codebase and regularly publish joint webinars with the Hyperledger team. Aside from product development, IntellectEU also offers consultancy and training, diving deep into specific Fabric concepts to help grow awareness and expertise in markets globally.

Learn more about Hyperledger Fabric features here and reach out to an IntellectEU representative for more information on implementation and application development.

Building Daml-Driven Applications to Unlock the Power of Fabric

IntellectEU uses Daml to build multiparty solutions for a variety of different industries, including financial services, banking and telecommunications. Since Daml is a platform-agnostic smart contract development framework, businesses are able to freely design their applications using the open source Daml SDK, prove the value of the multiparty application by testing it on Project: DABL (the cloud-based ledger platform offered by Digital Asset), and then deploy to production with enterprise support across any supported blockchain or database platform. 

The Daml Driver for Fabric also provides additional privacy capabilities to Fabric, such as subtransaction privacy, to enable use cases that are simply not possible with Fabric alone. Through the Daml Driver for Fabric license, businesses receive enterprise support and production instances of their Daml-driven applications across any Daml supported platform, including Hyperledger Fabric. 

Since Daml is platform-agnostic and fully portable, businesses can decide to deploy on the underlying infrastructure that makes sense for the use case, including both traditional databases and distributed ledger platforms. Daml also enables true cross-platform interoperability, meaning businesses can form consortia across various databases and blockchains, run the Daml interoperability protocol on their node, and store their data on their choice of underlying infrastructure while still sharing data in real time, on a need-to-know basis, with participating business entities. 

Take a look at some of IntellectEU’s Daml-driven applications on the Daml marketplace. These solutions are ready for deployment and can be customized to meet specific business needs. Additionally, a complete end-to-end solution for Daml-driven securities services can be found here.

Implementing the Catalyst Blockchain Platform to Deploy, Manage, and Monitor Hyperledger Fabric Networks and Applications 

The Catalyst Blockchain Platform is a cloud-agnostic platform that simplifies deployment, maintenance and management of Hyperledger Fabric networks. The platform is designed so that businesses can focus on the added value that Distributed Ledger Technology provides (business case, consortium and chaincode development) rather than the hurdles and steep learning curve that often comes with the deployment, management and maintenance of blockchain networks.

The platform leverages all of the components of Hyperledger Fabric and combines these with an easy-to-use UI, on-chain governance capabilities, a customizable maintenance dashboard and other custom-built tools, all designed to integrate seamlessly with your current technology stack.

The Catalyst platform provides the tooling and automation necessary to deploy Daml-driven apps in production on Fabric. With Catalyst, developers and IT specialists are able to:

  1. Flexibly set up and configure networks and nodes.
  2. Perform all typical Fabric network and node management activities through the intuitive user interface.
  3. Leverage both the Fabric-level Certificate Authority as well as external certificate authorities on the node and network level.
  4. Monitor all Fabric nodes and networks through a customizable dashboard that integrates with existing monitoring software.
  5. Rely on a variety of disaster recovery, backup and failover mechanisms. 
  6. Deploy Daml workflows and smart contracts across Fabric, Corda, or both.

The entire network setup and maintenance is automated via Catalyst’s intuitive management console. Once this step is complete, the application will run and support more efficient multiparty workflows. 

Learn more about IntellectEU’s Catalyst platform here and start deploying/managing production environments today. 

IntellectEU is partnering with Digital Asset to revamp historically outdated business processes across the financial services industry. IntellectEU is a SWIFT service partner with a focus on digital finance and emerging technologies. IntellectEU is a founding member of the Linux Foundation’s Hyperledger Project and works with all leading blockchain providers in banking, insurance, capital markets, and the telecommunications space. 

Read more about Daml Drivers here.

Accepting Smart Contract Rollouts

Smart contracts offer a unique way to solve business problems but also present a unique challenge when it comes to managing the rollout of new and updated smart contract based applications. The ‘Accept-Then-Publish’ approach described below is a way to manage this complexity. The motivation for this approach is to allow smart contract application publishers and those managing distributed ledgers to address the CTO request below.

As the CTO of a trade association coordinating the exchange of smart contracts across many independent companies I want to be able to roll out new versions of smart contract applications in a controlled way.

Here, controlled is understood to mean coordinated but not synchronized. Coordinated in that it is reasonable to expect that companies will be able to upgrade their internal systems and roll out new software within a wide time window. Not synchronized in that companies need not upgrade at a fixed point in time in a ‘big bang’ way.


In this article, a smart contract refers to the rules that govern the creation and valid transitions of smart contract instances. These lifecycle events involving smart contract instances are bundled into smart contract transactions. An application that creates and consumes smart contract transactions is a smart contract application. When improvements are made to a smart contract, a new smart contract version is created.

How Are Smart Contract Applications Different?

In traditional applications, messages are exchanged between systems. These messages are typically text-based, contain data specific to the exchange, and are used to update a data store before being discarded.

Traditional Application

With this setup companies A and B could agree to update the Order message and it would not matter to company C.

With smart contract applications a transaction is shared. This will usually be as a binary serialization which has been digitally signed by one or more parties. Companies receiving smart contract transactions will verify them before recording them on their local ledger.

Smart Contract Applications

With smart contract transactions, a reference to the smart contract that governs which lifecycle transitions are valid is embedded inside the transaction itself. A consequence of this is that it is not possible to alter the smart contract version of a transaction once it has been created.


This example shows how a consortium of companies (A, B and C) will only be able to share transactions if all companies support the smart contract version used in the transaction. On the right-hand side, C does not support v3.0, causing the transaction to fail.


The approach makes the following assumptions:

  • Rolling out a new smart contract version will often require accompanying changes to other business processing software.
  • The main driver for taking the updated version is that it provides enhanced utility (as opposed to say patching an exploit).
  • The rollout should be performed with minimal down time.
  • It is possible for the application to support two smart contract versions in parallel.
  • Any migration of smart contract instances from earlier versions to more recent versions will be a follow-on exercise and is not considered here.


With the Accept-Then-Publish approach each company will upgrade their application to accept transactions based on the new smart contract version but for a period will continue to publish any freshly initiated transactions using the previous smart contract version. Any transition to a smart contract instance based on the new smart contract version will create a transaction also based on the new smart contract version.

On the left C can only accept v2.0 so A creates a v2.0 transaction which is accepted by C. On the right C has upgraded meaning that A can now safely start creating v3.0 transactions involving B and C.

This approach is not completely novel and is inspired by the HTTP Accept header used to advise supported media formats. Another inspiration is the TLS handshake used to determine which encryption algorithm to use when establishing a secure communication link. There are aspects of the version determination that are particular to smart contracts:

  • The chosen version must work for multiple parties not just two.
  • The chosen version may be based on a wider set of companies than those involved in the initial transaction. For example it may be that the transaction will be disclosed to a number of broker companies later in its lifecycle.
  • As many distributed ledgers are asynchronous the choice of which smart contract version to use may need to be made while one or more companies are unavailable or offline.


There must be a way for companies to discover which smart contract versions their partners accept.

On the left, A broadcasts to B and C that it accepts v3.0. On the right, A inquires whether B and C support v3.0; B does, C does not.

As we are working with a distributed ledger a smart contract, possibly in Daml, would be an obvious choice to achieve the above.
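For example, a hypothetical Daml template for broadcasting accepted versions (all names here are illustrative assumptions, not part of the article) might look like:

```daml
module VersionSupport where

-- Each company broadcasts the smart contract versions it accepts
-- to its partner companies.
template AcceptedVersions
  with
    company  : Party
    partners : [Party]
    versions : [Text]  -- e.g. ["v2.0", "v3.0"]
  where
    signatory company
    observer partners

    choice AddVersion : ContractId AcceptedVersions
      with newVersion : Text
      controller company
      do create this with versions = newVersion :: versions

    -- Withdrawing support notifies partners that a version is end-of-life.
    choice WithdrawVersion : ContractId AcceptedVersions
      with oldVersion : Text
      controller company
      do create this with
           versions = filter (/= oldVersion) versions
```

Because partners are observers, every exercise of these choices is visible to them on their own node, even if some were offline when the change was made.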

The corollary of notifying partner companies of newly supported versions is notifying them that support for an old version is being withdrawn. This could be achieved in a similar way by notifying partner companies of an end-of-life support date for a particular version.

Which version a company is publishing does not need to be shared, it is the business of the publishing company to make the decision about which version to publish.


The choice of which contract version to publish will depend on a number of factors, so different strategies could be used:

Message by Message
In the situation where the contract is not intended to be disclosed to any additional companies the most functional version could be automatically selected just prior to contract creation.
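The selection logic can be sketched in Daml as a pure helper function (hypothetical, not from the article):

```daml
module ChooseVersion where

-- Pick the first (most preferred) version that every partner accepts.
-- preferenceOrder lists versions from most to least functional;
-- partnerVersions holds each partner's accepted versions.
chooseVersion : [Text] -> [[Text]] -> Optional Text
chooseVersion preferenceOrder partnerVersions =
  case filter (\v -> all (elem v) partnerVersions) preferenceOrder of
    []       -> None
    (v :: _) -> Some v

-- With partners accepting ["v3.0", "v2.0"] and ["v2.0"], preference
-- order ["v3.0", "v2.0"] selects Some "v2.0": the most functional
-- version that all partners support.
```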

When the company owner determines that all partner companies now support the most functional version they switch to permanently publishing that version. How this is achieved is an implementation detail, it could be via configuration or web service.


Following the Accept-Then-Publish pattern proposed above, our hypothetical trade association CTO has a way to ensure the smooth running of their network across smart contract versions.

How to Streamline and Accelerate Clinical Trials

Conducting clinical trials is a time consuming, expensive affair that involves close collaboration between multiple stakeholders (parties), often geographically distributed, and one that needs a high level of monitoring, regulation, and precision to maintain patient privacy and auditability. In this two-part blog series, Lakshmi Shastry, Principal Solutions Architect at Brillio, explains the challenges around clinical trials and how emerging multi-party workflow technology can be leveraged to improve the clinical trial process.

Challenges of conducting clinical trials

The multi-party and distributed nature of the clinical trials process brings along unique, intrinsic challenges such as:

  • Multiple participating entities and workflows having disparate databases that must be constantly reconciled
  • Privacy and confidentiality considerations for each stakeholder and individual participant
  • Complex data collection, protection, sharing and reporting requirements across enterprise boundaries
  • Patient engagement over long periods of time
  • Data domicile and compliance needs
  • Multiple entities using different technologies for critical processing
  • Manual work that increases risk for data accuracy

Using emerging multi-party workflow technology such as Daml, we can solve some of the problems described above. In the subsequent sections, we will discuss Brillio’s blockchain-based clinical trial solution and how it can be put into action.

Current implementation of the clinical trial process

Nowadays, different steps of clinical trials are conducted independently of each other. This is partly because of legacy technology and legacy approaches to conducting clinical trials, and partly because no better alternatives existed.

The current process entails the following:

  • Data gets created for multiple patients/participants from multiple networks – hospitals, clinical trials, smart devices. 
  • Data is then collected and entered into a centralized database management system (DBMS) separately for each organization. 
  • Pharmaceutical companies, hospitals, CROs, biotech firms, and laboratories each have their own siloed databases. Each organization then collates and stores data in its own preferred way and format in its own IT environment. 
  • Data is then analysed separately within each organization and exchanged in different forms between various organizations for regulatory reporting, business analytics, research and other purposes.

In this model, not only is collaboration between participating organizations difficult, but internal communication is also complicated, resulting in more reconciliation and more human error. Additionally, the effort expended to resolve data consistency issues is large. The proverbial wheel is reinvented for each trial.

Brillio’s solution

Brillio’s solution is a permissioned multi-party network in which:

  • Various stakeholders, such as pharma companies (Sponsors), CROs, Regulators (FDA, EMA, etc.), Clinics, and Patients, participate within a network
  • Data sharing and clinical trial execution takes place in a distributed and secure manner with custom Daml workflows based on the Clinical Trial Protocol
  • All Stakeholders maintain their own local, private version of the data which is automatically synced under the hood by Daml based on workflows and privacy rules agreed between parties (i.e. no more reconciliation or checks and balances)
  • Each party remains in sync in real time with the latest state of the clinical trial business process, seeing only the data they are supposed to see. This avoids costly privacy gaps because no tacked-on authorization models are needed. Azure cloud is leveraged to provide high performance and scalability; however, the solution can meet each participant’s cloud preferences if needed.

Key parts of Brillio’s solution


Parties

Parties refer to the Pharmaceutical companies (Sponsors), Regulators, CROs, and Hospitals that participate in the network. For instance, each pharmaceutical company is a party. Similarly, each hospital and each regulator (e.g. FDA, EMA) is a party.


Networks

Networks represent the multiple organizations, different identities, and associated data visibility rules. Data is shared only between the participants within a network.

Transaction History

Transaction History is the historical log of all the transactions on the network. Since the underlying ledgers are append only, all the transactions can be replayed to arrive at the current state. Each participant in the network stores their own version of the Ledger which is automatically synced as required based on the privacy and data sharing rules. Transaction History is available to view for all allowed participants in the network in real time.

Smart Contracts

Daml smart contracts, or multi-party workflows, change and control the state of the ledger within the network. These rules also automatically enforce authorization as well as the physical domicile of the data, thus simplifying compliance.

Brillio uses Microsoft Azure, which offers frameworks and tools to simplify the development of this network in the cloud. From integration to monitoring, network configuration, multi-party workflow development, privacy, and distributed computing capabilities, Microsoft Azure powers a data-driven approach to clinical trials.

Brillio’s three-step approach using Microsoft Azure and Daml is to:

1. Build a multi-party network using a rights and obligations model

2. Simplify governance and management respecting each party’s technology choices

3. Integrate solution with existing systems and tools to reduce IT roadmap complexity

Daml is an open-source runtime designed specifically for building distributed, multi-party workflows, allowing applications to work across multiple underlying data storage platforms (databases or blockchains) with interoperability between different networks. Daml integrations, APIs, and runtime features are built-in safeguards that protect data integrity and privacy, and create an interoperable system to which multiple parties, including pharmaceutical companies (Sponsors), Regulators, CROs, and Hospitals, can connect with full trust in their clinical trial process. Daml thus provides significant benefits beyond traditional technology, positioning users as the provider of choice within their given market.

Microsoft Azure serves as the foundation to manage these networks for digitized workflows shared across clinical trial stakeholders.

We believe such a solution that preserves privacy while allowing for seamless collaboration has the ability to transform clinical data collection, data sharing, and operations. Clinical Trial Processes based on blockchain technology can streamline data transfer, ensure real time data access across clinical trial participants, reduce data entry errors, and simplify the clinical trial audit and compliance process.

Stay tuned for part two of our blog that will focus on how to use Daml to improve the clinical trial process.

Read more in our eBook, “Innovative Healthcare Systems for Clinical Trials and Drug Delivery in Pharmaceuticals”. Download it for free today.

Download the eBook

Release of Daml Connect 1.11.0

Daml Connect 1.11.0 was released on Wednesday, March 10th. You can install it using:

daml install latest

Want to know what’s happening in our developer community? Check out the latest update for this month.


  • Daml Ledger API 1.10 with Daml-LF 1.12 is now stable, significantly reducing the size of Daml transactions.
  • Daml-LF 1.11 is now the default. The previous default was Daml-LF 1.8.
  • daml test now includes coverage output so you can easily see which Templates and Choices are tested.
  • Daml Studio will now provide the last working state before a failed transaction, greatly streamlining debugging a broken transaction.
  • Daml Driver for PostgreSQL now has a separate migration mode and a configurable DB connection pool.

Impact and Migration

The only impact of this release is that with the switch from Daml-LF 1.8 to Daml-LF 1.11, the “no” contract-id-seeding mode in Sandbox Classic is no longer available by default. As already indicated in the Daml Connect 1.10 release notes, users must either pin the Daml-LF version to Daml-LF 1.8, or explicitly choose a new seeding mode. The recommended mode is “strong”.

What’s New

New Daml-LF and Ledger API Versions


Daml-LF is Daml’s intermediary language, analogous to Java bytecode. With Daml Connect 1.11, Daml-LF 1.12 is now stable and frozen, and Daml-LF 1.11 is the new default version. Daml-LF 1.12 is supported by Daml Ledger API 1.10 upwards. The changes in Daml-LF 1.12 are non-functional. Significantly smaller transaction sizes on the wire and in storage provide performance improvements.

Daml-LF 1.12 will become the new default version with the April 2021 Daml Connect release. Until then, you can enable Daml-LF 1.12 in your daml.yaml file by adding the stanza:

build-options:
  - --target=1.12

Specific Changes

  • Daml-LF 1.12 is now stable
  • Daml-LF 1.11 is now the default version (previously Daml-LF 1.8)
  • Daml Connect and integration kit now expose Daml Ledger API 1.10

Impact and Migration

Sandbox Classic’s default contract-id seeding mode, ‘no’, is not compatible with Daml-LF 1.11. If you use Sandbox Classic, you need to either pin the Daml-LF version to 1.8 or lower, or switch the contract-id seeding mode. If you do neither, Sandbox Classic will present you with an error informing you that you need to do one or the other. 

We recommend doing one of the following:

If you are relying on the human-readable, stable contract ids for demo purposes, pin the Daml-LF version by adding this stanza to your daml.yaml:

build-options:
  - --target=1.8

If you would like Sandbox Classic to resemble production ledgers more closely, switch the contract-id seeding mode by adding:

sandbox-options:
  - --contract-id-seeding=strong

Test Coverage in Command Line Test Tooling


The daml test command runs all Daml Scripts (and Scenarios) in a Daml project and reports their results. A common feature request has been to also report on test coverage and to allow the inclusion of tests from dependencies, offering better support for multi-package projects. The new flags provide both of these capabilities.

Specific Changes

  • The daml test command has gained a flag --all, which includes scripts and scenarios from dependencies in the list of tests being run.
  • The daml test command has gained a flag --show-coverage, which shows a test coverage report.

    For example:
daml/User.daml:test: ok, 2 active contracts, 3 transactions.
test coverage: templates 50%, choices 50%
templates never created:
choices never executed:

Impact and Migration

This is a purely additive change.

More information for failing Scripts in Daml Studio


Successful Daml Script executions in Daml Studio show a tabular view of the final ledger state by default. With this change, that view is now also available for failed executions, making debugging a failure easier.

Specific Changes

  • The view of a failed script in Daml Studio now allows switching to table view.

Impact and Migration

This is a purely additive change.

Improved database management in Daml Driver for PostgreSQL


The Daml Driver for PostgreSQL has gained two new features that make it easier and safer to operate in a production environment.

Specific Changes

  • All functions that require database schema changes have been separated into a special “migrate” mode. This way the driver can use a less privileged database user account in normal operation. The migrate mode is enabled via flag --sql-start-mode migrate-only. See the documentation for full details. 
  • The database connection pool size is now configurable to prevent the driver from establishing too many connections at once. This is done via the --database-connection-pool-size flag.

Minor Improvements

  • You can now disable linter (dlint) hints for a specific function by adding pragmas of the form {- DLINT ignore functionName "hintName" -} to your Daml source files. For example:

    {- DLINT ignore noHint "Use concatMap" -}
    noHint f xs = concat (map f xs)
  • More logging of Ledger API activity
    • Incoming requests to API services are now logged at INFO level. Note that this is the default log level for the Sandbox. To emit these from the Sandbox, add the sandbox option --log-level=INFO. A sample message is
17:06:27.099 [] INFO  c.d.p.a.s.ApiSubmissionService - Submitting transaction (context: {readAs=[], submittedAt=2021-03-03T16:06:27.093385Z, applicationId=myproject, deduplicateUntil=2021-03-04T16:06:27.093385Z, actAs=[Alice], commandId=91f433da-daae-49d3-9336-783b86956a60}) 
  • Transactions and transaction trees returned by the TransactionService are now logged at DEBUG level. To emit these from the Sandbox, add the sandbox option --log-level=DEBUG. A sample message is
17:11:35.771 [] DEBUG c.d.p.a.s.t.ApiTransactionService - Responding with transactions: List(Map(commandId -> 8863aaac-3597-4001-b839-e1813f4576ba, transactionId -> 0A2439343435613763382D316665352D333635382D383933612D393033373966313763353231, workflowId -> , offset -> 00000000000000070000000000000000)) (context: {startExclusive=00000000000000050000000000000000, endInclusive=, parties=[Alice]})
  • Scala Ledger API Bindings and Codegen output are now also available for Scala 2.13. Note that the Scala Ledger API Bindings and Codegen are deprecated.


  • A bug preventing the Scala Codegen from working when there are any Daml packages called Set was fixed. See #8854 for details.
  • A JWT token is no longer required to call methods of Health and Reflection services.

Integration Kit

  • Upgrade dependency io.opentelemetry:opentelemetry-api to 0.16.0
  • New dependency: io.opentelemetry:opentelemetry-context:0.16.0
  • The various uses of ByteString in kvutils are now strongly typed. Users of kvutils will need to use the Raw.Bytes subtypes to represent the various keys and values that flow through a kvutils-based driver. ByteStrings representing keys need to be wrapped in either Raw.LogEntryId or Raw.StateKey, and ByteStrings representing value envelopes need to be wrapped in Raw.Envelope. This prevents accidentally confusing one for the other. Implicit conversions have been provided for Scala users, which should ease the transition. They are marked @deprecated and will be removed before the release of DAML SDK v1.12.
  • Added the CLI options api-server-connection-pool-size and indexer-server-connection-pool-size to configure the database connection pool size for the Ledger API Server and the indexer respectively.
  • Fix potential out of memory issue when running migrations on an existing index DB with many events/contracts/transactions.

What’s Next

  • Performance: despite the order-of-magnitude improvements we have already accomplished, this continues to be one of our top priorities. 
  • Improved exception handling in Daml is progressing well and expected to land in one of the next Daml-LF versions.
  • We are continuing work on several features for the Enterprise Edition of Daml Connect:
    • A profiler for Daml, helping developers write highly performant Daml code.
    • A mechanism to verify cryptographic signatures on commands submitted to the Ledger API, improving security in deployment topologies where the Ledger API is provided as a service.
    • Oracle DB support throughout the Daml Connect stack in addition to the current PostgreSQL support.
  • A new primitive data type in Daml that allows infinite precision arithmetic. This will make it much easier to perform accurate numeric tasks in Daml.