How to Optimize the Complex Workflows Across the Insurance Value Chain with Daml and Blockchain

As the insurance industry continues through a period of radical change, the path towards digitization can be challenging, with numerous options spanning AI, blockchain, IoT, smart contracts and more. In this guest blog, Rajesh Dhuddu, Head of Blockchain Competency, Tech Mahindra, explains these challenges and how blockchain with Daml smart contracts can alleviate them for insurers, creating an optimal environment across the insurance value chain and ultimately enhancing the customer experience.

It all starts with a complex network of intermediaries 

Insurance companies take products and services to market through a complex network of intermediaries, i.e. brokers, agents, affinity partners, clearing houses and several others. The intermediaries sell policies, collect or facilitate insurance premiums and provide various services to the end customer on behalf of the insurance companies. The management of business placed by intermediaries is very complex, involving multiple systems and workflows across the functions of pricing and underwriting, claims processing, reinsurance, investigation units, and vendor management. 

Followed by numerous workflows across multiple disparate systems

These workflows and associated systems provide limited visibility and inaccurate data, involve extensive manual processing, incur high reconciliation costs between the various parties, and often lead to lengthy settlement times. While these inefficiencies cause delays in managing the policy life cycle, they also present an opportunity to improve efficiency by adopting new technologies such as Daml, a multi-party application platform, and blockchain.

Leading us to a solution built around blockchain and Daml

With blockchain and Daml, insurers can distribute new insurance products across intermediary networks more efficiently, manage the intermediary book of business, and streamline services such as claims processing and settlement.

Tech Mahindra has partnered with Digital Asset to create a broker insurance solution and platform that can be easily integrated into the existing IT ecosystem of an insurance company. 

The platform addresses the following challenges for broker-placed insurance:

  • Inefficient and costly new policyholder onboarding processes
  • Manual data sharing with compliance risk between parties
  • Limited insight into real-time data for pricing and policy quotes 

This Daml-driven solution will help insurers manage broker-placed insurance efficiently, automate the onboarding experience, and manage complex pricing approval workflows for standard and non-standard insurance products.

Continue the discussion with us

On May 20th at 10 am EST, blockchain and insurance experts from Digital Asset and Tech Mahindra will continue this discussion in a webinar. Join us to learn more about how Tech Mahindra is leveraging Daml to enable more efficient multiparty processes for broker-placed insurance.

Click to Register for the Webinar

Release of Daml Connect 1.13.0

Note: There is a performance regression in the Daml Driver for Postgres in this release. We are working on a patch and will release it soon. Please hold off upgrading until then.

Daml Connect 1.13.0 was marked stable on Wednesday, May 12th. You can install it using:

daml install latest

Want to know what’s happening more broadly among Daml developers? Check out the latest Daml Developer Monthly.

Summary

Daml Connect 1.13 is primarily a usability and bugfix release, but also introduces several features in Early Access for the first time:

  • Record Dot Updates in Daml provide a much easier way to manipulate deeply nested data structures.
  • BigNumeric, a new Daml data type for arbitrary precision arithmetic, is now available in Beta.
  • Daml Script Export is now in Alpha. It’s a tool that allows you to extract data from a live ledger into a Daml Script. Think “SQL Script Exports” for Daml!
  • The Enterprise Edition of Daml Connect now has Alpha Oracle Database support for all runtime components. This goes hand-in-hand with a recent Alpha release for the Daml Driver for Oracle Database.

Impact and Migration

This release is almost entirely additive. Its main purpose is to allow you to benefit from recent fixes and try out upcoming features in Early Access.

The only change for which we recommend action is that the InsertOrdMap type in the Scala bindings is now deprecated. This used to be how the Scala Codegen represented the Daml GenMap type. We recommend using the provided Primitive.GenMap type instead.

What’s New

Record Dot Updates

Background

Daml is designed with safety and usability in mind, trying to make it as easy as possible to build confidence in written code. One property of the language that is firmly on the safety side is that all data is immutable. With Daml’s record dot syntax, a feature that was ultimately upstreamed into the Glasgow Haskell Compiler, we invested heavily in also providing usability via familiar access patterns. With record dot updates, the same usability is now coming to data manipulation.

Specific Changes

Nested record fields can now be updated in an intuitive way:

data X = X with
  a : Y
data Y = Y with
  b : Int

beforeRecordDotUpdates : X -> X
beforeRecordDotUpdates x = x with
  a = x.a with
    b = 1

withRecordDotUpdates : X -> X
withRecordDotUpdates x = x with a.b = 1

Impact and Migration

This is primarily a usability improvement, but if you are using advanced programming techniques like lenses to accomplish the same thing, we recommend switching. Record dot updates have significantly better performance characteristics.

Early Access Features

BigNumeric and Daml-LF 1.13 in Beta

Background

Daml’s inbuilt Decimal and Numeric data types are fixed-point numbers with 38 digits of precision. In some numeric applications, they can require careful management of scale, or pre-condition checks, to preserve the required precision in calculations.

The new BigNumeric type removes this burden from the developer by allowing them to specify an arbitrary precision that is used for any intermediate results.

Specific Changes

  • Daml-LF 1.13 is available in the compiler, IDE and Sandbox and can be activated using the build option --target=1.13.
  • Daml has a new data type BigNumeric. BigNumeric represents any positive or negative decimal number with up to 2^15 digits before the decimal point, and up to 2^15 digits after the decimal point.
  • BigNumeric is not serializable; it is intended only for intermediate computation. You must round and convert a BigNumeric to a fixed-width 128-bit decimal Numeric n in order to store it in a template.
  • Conversion between BigNumeric and Numeric n (including the alias Decimal = Numeric 10) is done via the functions fromNumeric and fromBigNumeric.
  • The Standard Library module DA.BigNumeric provides functions for division and rounding.
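
As a minimal sketch of the intended workflow, assume the Beta API shapes described above: DA.BigNumeric.div taking a result scale and rounding mode, fromBigNumeric returning an Optional when the value does not fit, and the RoundingMode constructors available from the Prelude. Check the module documentation for the exact signatures.

import qualified DA.BigNumeric as BN

-- Divide two Decimals at 30 decimal places of intermediate precision,
-- then round back to a storable Decimal (Numeric 10). The signatures
-- used here are assumptions based on the Beta documentation.
preciseRatio : Decimal -> Decimal -> Optional Decimal
preciseRatio x y =
  let q = BN.div 30 RoundingHalfEven (fromNumeric x) (fromNumeric y)
  in fromBigNumeric (BN.round 10 RoundingHalfEven q)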

Impact and Migration

If you have calculations with critical and hard to manage precision, we recommend trying out BigNumeric and preparing your project for the stable release of this feature in an upcoming release.

Daml Script Exports in Alpha

Background

A common task, both during development and in production debugging, support, or maintenance, is to extract part of the state or history of a system for further processing or analysis.

With Daml Script, we already have a single format that can be run against the IDE, Sandbox, and production ledgers. It is Daml’s equivalent of SQL scripts.
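
For a sense of what that means, here is a minimal Daml Script that can run unchanged in the IDE, against a local Sandbox, or against a production ledger. Asset is a hypothetical template used only for illustration.

import Daml.Script

template Asset
  with
    issuer : Party
    name : Text
  where
    signatory issuer

-- Allocate a party and create a contract: the "SQL script" of Daml.
setup : Script ()
setup = do
  alice <- allocateParty "Alice"
  _ <- submit alice do
    createCmd Asset with
      issuer = alice
      name = "Example"
  pure ()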

Daml Script Exports provide the counterpart: a single, flexible way to move ledger data between different production and development environments.

Specific Changes

  • A first version of the daml ledger export script command, which allows you to extract ledger history for a set of parties into a Daml Script. See the documentation for full usage and caveats. Sample use:
daml ledger export script --host localhost --port 6865 --party Alice --party Bob --output ../out --sdk-version 1.13.0

Impact and Migration

This is the first alpha version of a new, entirely additive feature. If you try it out, we’d love to hear your feedback and ideas on how to make it as useful as possible, for example via the Daml forum.

Enterprise Edition support for Oracle DB in Alpha

Background

We are working to give users of the Enterprise Editions of Daml Connect and Drivers more choice in the underlying databases they use for local persistence. The addition of Oracle Database support in Daml Connect goes hand in hand with the Daml Driver for Oracle Database, which was also released in Alpha recently.

Specific Changes

  • In Daml Connect Enterprise Edition, the JSON API and Trigger Host now accept Oracle Database JDBC strings.

Impact and Migration

This feature is purely additive and can be tried out. It is not yet ready for production use and should not be relied upon at this point.

Minor Improvements

  • daml build accepts a new flag --access-token-file. It accepts a path to an access token used to authenticate against the Ledger API. This is needed if the project depends on a remote Daml package hosted on a ledger with authentication. The path to the token can also be specified in the daml.yaml project file under the ledger.access-token-file field (see the sketch after this list).
  • A new daml packages list command has been added to the Daml Assistant. It lists packages deployed on a remote Daml ledger.
  • The already deprecated Scala Bindings and Codegen have been tidied up to remove the need for the InsertOrdMap type. The type continues working, but we recommend switching to the Primitive.GenMap type instead. 
  • Ledger API client read requests are now logged at the INFO level. This affects the following services:
    • Active Contracts Service
    • Command Completion Service
    • Ledger Configuration Service
    • Ledger Identity Service
    • Package Service
    • Time Service
    • Transaction Service
  • The useStreamFetchByKey and useStreamFetchByKeys functions in the JS @daml/react library now correctly expose the closeHandler of the underlying @daml/ledger library. Their docstrings have also been corrected. Thanks to Huw for reporting this issue on the forum.
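
For the --access-token-file improvement above, the corresponding daml.yaml entry might look like the following sketch (host, port, and token path are illustrative):

ledger:
  host: localhost
  port: 6865
  access-token-file: tokens/ledger.token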

Bugfixes

  • Fixed a bug in the Daml Engine where it was possible to fetch/lookup/exercise a local contract by key even if the reading parties are only witnesses, not stakeholders. See issue #9454 for details.
  • Fixed a bug where transient duplicate keys did not result in an error. See #9478. Thanks to Liam for reporting this issue on the forum.
  • Fixed a bug where the ledger returned InconsistentKeys instead of DuplicateKeys. See #9457.
  • Fixed a bug that was preventing submissions with interdependent commands from succeeding. See #9370 for details. Thanks to Huw for reporting this issue on the forum.
  • Fixed a bug in the JSON API with a PostgreSQL backend where range queries within variant data would not return matching data. See #9321.

Integration Kit

  • TelemetryContext has been introduced to the WriteService.submitTransaction method to support distributed tracing.

What’s Next

  • Work on the Early Access features introduced in this and previous releases will continue to bring them to general availability.
    • Daml Profiler (Enterprise Edition only)
    • BigNumeric
    • Daml Script Export
    • Oracle DB Support
  • In addition, we are expecting to land two new language features in Beta with one of the next releases:
    • Exception handling, which provides try/catch functionality with subtransaction rollbacks.
    • Nested record updates, making it easier to change fields in deeply nested records.
  • Work on improving the performance of all Daml integrations continues under the hood.

Daml Developer Monthly – May 2021

What’s New

We raised $120m in Series D funding to further our goal of turning disparate silos into synchronized systems, and Xpansiv has chosen Daml to scale its new ESG (Environmental, Social, and corporate Governance) platform for global commodity markets!

We want feedback on our new Daml Script Exports feature. Check out the docs here, and if you have thoughts on what works and what should be improved, hop over to the Features and Bugs section of our forum and make a post.

Jobs

We’re still hiring for many positions including Engineering, Client Experience, Business Development, and Sales. If you have even so much as an inkling that a job is for you, make sure to visit digitalasset.com/careers to apply, and share with your network.

Also, if you are a member of an underrepresented minority group working in the tech field (or trying to get into it) and want to practice your interviewing skills in a mock interview, please reach out to us.

We were recently at Women Hack NYC. Women Hack holds these events frequently (and virtually) around the world, so if you are, or know, any talented women developers, designers, or product managers, join the event to talk to some of the top tech companies in NYC!

What We’re Reading, Watching, and Attending!

We’ll be (virtually) at Hyperledger Global Forum on June 8-10th. HGF has a jam-packed schedule that you can see here, and tickets are available for $50. You might also want to attend our interactive session comparing UTXO and Account models; make sure to bring your own questions and thoughts!

György recently showed us how Daml can be used as a replacement for Fabric Composer.

As always, Richard’s weekly privacy and security news posts are jam-packed with interesting stories from the always interesting world of cyber security. The latest ones cover everything from beavers knocking out the internet to the latest battles over privacy.

Shaul has been working on a new video series for conceptualizing Daml. If you’re having trouble wrapping your head around Daml, be sure to bookmark it and check back for new videos!

Quid Agis has an excellent tutorial on testing Daml templates using Daml Script; check it out!

We recently presented “When Daml? Do all smart contracts need to be permissionless and decentralized?” at Hyperledger NYC, you can watch the recording on YouTube.

Community Feature and Bug Reports

Below are the bugs and feature requests that our community members reported and that the wonderful teams at DA have since fixed!

Huw found two bugs: one where contract IDs were going stale, and another where useStreamFetchByKeys was missing the closeHandler argument.

Liam found a bug where an InconsistentKeys error was returned instead of DuplicateKeys.

Quid Agis found a missing instruction in one of our reference apps.

Kirk found an outdated interactive tutorial that we’ve now updated. Whoops!

Michal pointed out some missing code snippets in one of our blog posts.

Daml Connect 1.13 is out!

Daml Connect 1.13 is primarily a usability and bugfix release, but also introduces several features in Early Access for the first time:

  • Record Dot Updates in Daml provide a much easier way to manipulate deeply nested data structures.
  • BigNumeric, a new Daml data type for arbitrary precision arithmetic, is now available in Beta.
  • Daml Script Export is now in Alpha. It’s a tool that allows you to extract data from a live ledger into a Daml Script. Think “SQL Script Exports” for Daml!
  • The Enterprise Edition of Daml Connect now has Alpha Oracle Database support for all runtime components. This goes hand-in-hand with a recent Alpha release for the Daml Driver for Oracle Database.

Impact and Migration

This release is almost entirely additive. Its main purpose is to allow you to benefit from recent fixes and try out upcoming features in Early Access.

The only change for which we recommend action is that the InsertOrdMap type in the Scala bindings is now deprecated. This used to be how the Scala Codegen represented the Daml GenMap type. We recommend using the provided Primitive.GenMap type instead.

The full release notes and installation instructions for Daml Connect 1.13.0 can be found here.

What’s Next

  • Work on the Early Access features introduced in this and previous releases will continue to bring them to general availability.
    • Daml Profiler (Enterprise Edition only)
    • BigNumeric
    • Daml Script Export
    • Oracle DB Support
  • In addition, we are expecting to land two new language features in Beta with one of the next releases:
    • Exception handling, which provides try/catch functionality with subtransaction rollbacks.
    • Nested record updates, making it easier to change fields in deeply nested records.
  • Work on improving the performance of all Daml integrations continues under the hood.

Digital Asset Open Sources Daml Code for CBDC Interoperability

Over the past year, CBDCs have emerged as the holy grail of cross-border payments, promising to make payments faster and more cost-effective for wholesale and retail markets. While CBDCs promise to deliver significant value, there are several hurdles central banks need to clear. Top of mind is which technology to use – DLT, a centralized database, or existing rails – and with that comes the question of interoperability across platforms. Last November, Digital Asset announced its plans to open source an example CBDC implementation, built using Daml, highlighting potential features around programmable money and interoperability. Today, Darko Pilav, Head of Switzerland and of the Customer Experience Engineering Team at Digital Asset, shares details about the work, which is now open source and available for public consumption.

Let’s talk about CBDCs – what are they, and how do they differ from the cash and bank deposits that we use today?

CBDCs combine properties of the bank deposits you hold at your commercial bank and of physical cash. With CBDC you get the ease of use of deposits for any kind of payment, with the additional benefit of physical cash: being backed directly by the central bank instead of by an intermediary (i.e. a commercial bank). Effectively, you can see a CBDC implementation as the extension of the central bank ledger to a broader audience. Compared to physical cash, you gain the flexibility of making instantaneous payments to any eligible recipient – you don’t have to physically move bills and coins.

The difference from bank deposits is in the risk profile. Given that CBDC is issued and backed by the central bank itself, there is no default risk. Today, the money in your bank account is not directly backed by the central bank. It is rather a liability of your commercial bank, which also means that if the commercial bank goes bankrupt, your assets are at risk. To combat that, a number of countries provide government guarantees up to a certain amount, i.e. the government will step in and ensure owners of the deposits do not lose their money. But this is a complicated process, the covered amount is typically limited, and it effectively tries to emulate properties that you would get with CBDC out of the box.

Can you tell us about the CBDC implementation work you have been doing?

The CBDC implementation is an example of interoperability, the possible capabilities of CBDC, and how central banks can use Daml, an application platform created by Digital Asset, to integrate CBDCs across different blockchain and database platforms. It is a simplified design, purposely so, which lets us focus on the important properties of interoperability without having to cover all the intricate details of a production solution. We built this example using Daml to showcase all of the above, and have made it open source and available for public use.

What does a successful CBDC implementation look like? 

From our perspective, we believe seamless, built-in interoperability is the only way for CBDCs to reach their full potential. Of course, a great follow-on question is: what do we mean by interoperability? For Digital Asset, true interoperability includes four key components: multi-ledger technology, cross-ledger atomicity, data privacy, and composable extensibility.

Multi-ledger technology: The ability to deploy and connect digital currency systems across disparate networks regardless of the underlying IT infrastructure. Top among the challenges is deciding which technology to use – DLT, a centralized database, or existing payment rails. On the back of that comes compatibility with other CBDCs. There is not going to be one ledger to rule them all, and some might not even use DLT. Ensuring that CBDCs are compatible with other CBDCs is an imperative first step; otherwise, a CBDC runs a great risk of hitting a dead end at the start.

Cross-ledger atomicity: If one leg of a transaction fails, all legs fail. By ensuring atomicity, systems can achieve payment versus payment and delivery versus payment without the risk of handing over the goods when the payment leg fails, and without the need for a central authority acting as an escrow. Counterparty and delivery risk are eliminated.

Data privacy: Almost all non-Daml blockchains lack crucial privacy properties, leaking significant transaction information to the world. Some infrastructures have addressed some of the privacy concerns, but lack the ability to guarantee their privacy mechanisms when transacting across chains. Any CBDC solution must feature the highest, need-to-know level of privacy, independent of the infrastructure on which it is transacted.

Yuval Rooz Interoperability Demo at OECD Global Blockchain Policy Forum

Composable extensibility: This means the ability to dynamically add new applications and connect to other networks on the fly. Imagine that after a government introduces CBDC, an e-commerce company wants to tie it into their existing workflows by allowing payments to be made with CBDC. In order to maintain all the crucial properties mentioned above (atomicity, privacy, etc.), the e-commerce company cannot rely on standard APIs. It must rather compose its payment process with the CBDC transfer functionality. With Daml, this is trivial to do. Furthermore, using non-Daml technology, the only way to achieve this would be to run both the CBDC solution and the e-commerce solution on the same platform. This is of course not practical, as a central bank would not and should not agree to run a number of third-party solutions on its platform. Daml’s interoperability eliminates this problem as well. The e-commerce company operates its solution on its own infrastructure, and it simply interoperates with the CBDC solution of the central bank.

One additional remark I would like to add is that, historically, the entity managing an account is also the entity that provides services for that account. Hence the question comes up whether this means that in a retail CBDC scenario the central bank would have to provide all the services of a bank account to the broad population. The answer is that if you use the right technology and design the CBDC solution correctly, account management and the provision of services can be disentangled. This gives commercial banks, which already have the customer relationship and a lot of knowledge and experience in providing these services, the opportunity to continue to do so, while the central bank is simply the maintainer of the account.

How does Daml fit into the CBDC ecosystem?

Daml provides a number of crucial properties right out of the box: true cross-technology interoperability, extensibility, and incredibly high levels of privacy, to name a few. To the best of our knowledge, no other technology provides all of the above to such a high level as Daml. Our technology stack works with the major enterprise blockchains as well as centralized databases and other systems. In our opinion these are crucial properties for a large number of use cases, and particularly for CBDC.

Why did Digital Asset open source the code?

CBDCs have great potential to transform cross-border and domestic payments in both wholesale and retail markets. Given their importance to markets worldwide, we want to make sure everyone involved in building a CBDC solution has access to the components that will make it successful. It’s a great starting point for anyone building applications that will eventually need to integrate with CBDCs.

Secondly, we also want to show how straightforward it is to implement various features of CBDC and make it interoperable across technologies when using Daml. Our technology stack is best known in the blockchain space, but it goes beyond blockchain and can be used across database and cloud technologies. The Daml code used for the demo runs across blockchain and database platforms, which is a very realistic scenario given central banks will not use the same platform to build and deploy their digital currencies.

Finally, we would like to make sure that when people are thinking about what a CBDC solution should look like, its functionality, and how it should behave, they have the best technology in mind. Our technology stack is solving a few very hard problems, and this can influence the design and requirements of the CBDC solution.

What other technology can achieve this interoperable state for CBDCs? 

APIs could be used to connect the various CBDCs using existing message-based technology. This approach will get the job done to some extent, but it lacks two key components: atomicity and extensibility. In this scenario, how would you connect central bank digital currency into capital markets, insurance, or supply chains? If you don’t tie in the system that handles the delivery part of the delivery-versus-payment process, then just having the payment part on the system doesn’t give you all that much. Let’s look at an equity transaction using CBDC. In this example, neither the seller nor the buyer wants to start the settlement process first. The seller doesn’t want to hand over the equity and wait for you to send the cash, nor do you want to front the cash and receive the equity later. You both want the transaction to settle at the same time. To do this, you have to be able to connect the system that manages the equity positions, e.g. a central securities depository, with the central bank system that manages the CBDC, and do this atomically across two different systems.
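
In Daml, this kind of atomic delivery versus payment settles inside a single choice. The following is a minimal, hypothetical sketch – Cash, Equity, and DvP are illustrative templates, not the open-sourced ex-cbdc code, and the propose/accept workflow for creating the DvP contract is omitted:

template Cash
  with
    issuer : Party    -- e.g. the central bank
    owner : Party
    amount : Decimal
  where
    signatory issuer
    observer owner
    choice TransferCash : ContractId Cash
      with newOwner : Party
      controller owner
      do create this with owner = newOwner

template Equity
  with
    registry : Party  -- e.g. a central securities depository
    owner : Party
    quantity : Decimal
  where
    signatory registry
    observer owner
    choice TransferEquity : ContractId Equity
      with newOwner : Party
      controller owner
      do create this with owner = newOwner

-- Both legs settle inside one choice, i.e. one atomic transaction:
-- if either transfer fails, the whole settlement fails and nothing moves.
template DvP
  with
    buyer : Party     -- current owner of the cash
    seller : Party    -- current owner of the equity
    cashCid : ContractId Cash
    equityCid : ContractId Equity
  where
    signatory buyer, seller
    choice Settle : (ContractId Cash, ContractId Equity)
      controller buyer
      do
        newCash <- exercise cashCid TransferCash with newOwner = seller
        newEquity <- exercise equityCid TransferEquity with newOwner = buyer
        pure (newCash, newEquity)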

Is this interoperability example only applicable to CBDCs? If not, what other use cases could benefit?

This particular use case is specific to CBDCs, but it is only one example that showcases the power of Daml. The purpose of the demo and our example implementation is to show how an external application can interface with a CBDC. And, to clarify, we have not implemented any new Daml functionality to create the CBDC use case. This is just one use case where we are applying our technology as it stands today. The demo functionality can apply to any scenario where multiple parties and complex workflows across disparate systems exist. We picked CBDC because it’s a relevant topic and very timely, as central banks are evaluating their technology options and building roadmaps for their potential CBDC launches.

On the topic of technology evaluations, there are a number of key points central banks need to consider when choosing a technology to support CBDCs. We recently published a whitepaper explaining these key points in greater detail. 

What do central banks or enterprises need to build this functionality in-house? 

To get the most out of the open source code, we recommend using developers who have some experience writing applications in Daml or who are willing to learn the language. We offer tutorials and tips in our forum, Daml Discuss, and as part of the Daml Developer Certification exams.

When and where will the code be available? 

It’s available today. Visit GitHub – https://github.com/digital-asset/ex-cbdc – to download an example of a Daml-powered CBDC network.

What do you hope market participants will take away from this use case?

My hope is that anyone working with CBDCs, or building applications that need to interact with CBDCs, can use this functionality as a starting point for their solutions. For Digital Asset, we also wanted to clearly demonstrate what a CBDC implementation requires to be successful. We believe that with the interoperability properties we identified, CBDCs will realize their full potential; without them, they could hit a dead end at the start. For us, CBDCs are a great use case to show the power of the technology and how it can streamline complex multi-party workflows across disparate systems.

For more information about the technical ingredients behind a successful CBDC implementation, please download a copy of our latest whitepaper “Central Bank Digital Currency: Principles for Technical Implementation”, published in collaboration with Darrell Duffie, Graduate School of Business, Stanford University.

Download the Whitepaper

We Have Raised $120m to Tackle the Heart of the Problem

Today, we announced that we have raised a $120 million Series D. If you’re already familiar with Digital Asset, you probably know us as one of the leading companies in the category known as “enterprise blockchain”, particularly for high-profile projects with major stock exchanges. If you have been following us over the last 6 years, you may think we take a side in the permissioned-versus-permissionless and public-versus-private false dichotomies that make for a good story. However, what most readers probably don’t know is that our vision has far more in common with public chains than it has differences. But how we believe the world will get there is very, very different.

Across every aspect of our lives, our experiences are inhibited by inefficiencies. Some of these inefficiencies are immediately obvious; some we don’t notice until we have to experience them as they were before they were solved; some are buried so deep we can’t even identify them. But they are absolutely everywhere. You’ve experienced them first hand visiting the doctor, renting an apartment, or doing something as simple as transferring money. The same information is entered multiple times and kept up to date after the fact, and you have to sign up for different services to transact with different people. Even in the era of mobile-first digital experiences, silos are everywhere.

These are possibly minor inconveniences – first world problems of modern life. But the same problems exist on a macro scale, a scale that can quite literally cause global financial crises. Now we’re not talking about individuals but institutions. One company doesn’t know what its obligations are; another doesn’t know what its risk exposure is. Compound that, and we face a scenario where incomplete information and fractured processes translate into major issues – where millions can lose jobs and homes.

The root causes of both are the same. The underlying systems that the global economy is built on were designed in isolation, with technologies that have no inbuilt notion of connectivity. Access control is spread throughout the stack – from the database through the middleware to the APIs – left to developers to implement correctly. Connectivity is bolted on after the fact, as wrappers around antiquated systems or as predefined interfaces at the edges of applications, with no consistency guarantees in between. Even when data needs to be leveraged in other parts of the same organization, it often has to be replicated with no link back to the source.

And yet these are the foundations on which your entire business is built. Building new products and services, improving your customer experience, and managing your risk all rely on the data residing in these ultimate systems of record. You can’t be expected to innovate on a fragmented foundation, but your customers still demand seamless experiences, and regulators require changes to processes or disclosures that can have knock-on effects throughout the data in your entire organization.

So what should you do about it?

So what do we replace these systems with? And more importantly, how do we go about doing it? Do we rely on the same outdated approaches that got us to where we are today? Do we start from scratch, relearn the painfully learned lessons, and wait for everything to migrate? Do we create new, albeit slightly less inefficient, silos around asset classes or groups of companies? Do we force everything onto a single ledger, whether that’s a single SaaS app, legal entity, or blockchain? Do we all have to hand over all our data to a third party to get a single source of truth? Could we have built the internet this way?

These are obviously rhetorical questions, and yet so many approaches are doing exactly this. We can solve consumer payments if everyone uses Venmo – until someone is on CashApp. We can solve counterparty risk if everyone agrees to the same intermediary – until we need to use those funds to settle against a different asset class. We can make all assets interoperable – if only they all use the same blockchain. The world is far more complex than this. There is no one-size-fits-all solution to combine everything under a single ledger.

So what is the way forward?

Our vision is for a world of countless systems, each powered by infrastructure that suits its own unique requirements. It may be something as simple as cash on a censorship-resistant ledger. It may be an internal business process that doesn’t require decentralization and can be a collection of synchronized databases instead. It may be anything in between. These systems may need to connect for parts of a process that involve the others. The properties each requires are not the same, and so they shouldn’t be forced to inherit one another’s properties just to interact. As long as they are truly interoperable – meaning it doesn’t matter what technology they are running on, and no new single point of failure is introduced – the differences shouldn’t matter. This is how the internet, a network of networks by name, works, and so should the future of commerce.

We call this vision The Global Economic Network. ‘Global’ because it has no boundaries, geographic or logical: the edges of applications start to blur as they are composed into more complex systems. ‘Economic’ because it deals not just in replicating information but in value and scarcity; assets exist natively on the system and aren’t just moved around by it. ‘Network’ because it will not, and cannot, be one instance or instantiation of a singular technology, but must be an interconnected group of heterogeneous technologies tied together with a common protocol.

But getting there is not simple. A vision is nothing without a plan. ‘Build it and they will come’ is not enough when dealing with the entirety of global commerce. Getting there requires a deep understanding of both where, and why, we are where we are today combined with a practical, incrementally valuable transition to get there. And the resolve not to let pragmatism distract us from where we need to go.

In part 2 we will explain how we believe we can get there, what we have already built to enable this vision, where we are going, and how you can contribute to, and benefit from, making it a reality.

Click here to read the press release.

Identifying the Right Technology for Your Multiparty Business Processes

Blockchain Technology Partners offers infrastructure choices for distributed, multiparty workflows. In this guest blog, Csilla Zsigri, VP, Marketing and Strategy at Blockchain Technology Partners, explains the suitability of the various technology options.

Identifying the right technology for digitizing processes that involve multiple parties, within and across organizations, has plagued businesses for decades. Information technology and operations executives are looking for the right technology to use for business-critical applications involving both trusted and untrusted parties.

To help companies select a suitable technology for their multiparty workflows and distributed applications, we have created a simple decision tree, with three key questions to consider:

  1. Do you know and trust the participants in your business network?
  2. Does your business network have or need a trusted operator? 
  3. Do you need an immutable audit trail for your business process?

Let’s go through these questions, one by one.

Do you know and trust the participants in your business network?

If the answer to this question is ‘No,’ and your company seeks to interact and transact more efficiently – by eliminating friction – with multiple untrusted organizations, a permissionless blockchain implementation will enable you and the other members of your business network to share information and collaborate securely, without a central authority, and with none of the individual parties having the ability to unilaterally enforce decisions, whether accidentally or in bad faith.

Blockchain Technology Partners’ infrastructure management offering supports Hyperledger Besu, a core ledger protocol that combines both permissionless and permissioned features. Hyperledger Besu is essentially an Ethereum client that supports various consensus mechanisms, and its permissioning schemes were designed to be used in consortium environments. 

If the answer to this question is ‘Yes,’ then you should move on to the second question in the decision tree.

Does your business network have or need a trusted operator?

If the answer to this question is ‘No,’ you may consider the implementation of a permissioned blockchain network for running your multiparty business process. Hyperledger Sawtooth, in particular, offers a flexible and modular architecture, and supports various consensus mechanisms and smart-contract languages. Blockchain Technology Partners has released a freely available, enterprise-grade distribution of Hyperledger Sawtooth – dubbed BTP Paralos – ideal for production and business-critical environments. 

If the answer to this question is ‘Yes,’ then you should move along to the third and final question in the decision tree. 

Do you need an immutable audit trail for your business process?

If the answer to this question is ‘Yes,’ you may consider using a blockchain-powered distributed database technology that provides data integrity alongside privacy. Amazon QLDB, in particular, was designed to provide the transaction immutability offered by blockchain technology, yet with a centralized model that ensures data privacy.

For use cases where immutability is not required but the ability to automate multiparty workflows is desired, a relational database such as Amazon Aurora, coupled with a smart-contract capability such as Daml at the application layer, may be a suitable choice.

When we are done with the decision tree…what then?

Overcoming shortages in IT skills and resources is one of the key challenges associated with digital transformation overall, and distributed ledger technology is no exception. With Sextant, Blockchain Technology Partners frees organizations from the infrastructure pain involved in setting up and running a blockchain network, ultimately enabling them to build distributed applications and multiparty systems with ease. Distributed ledgers currently supported by Sextant include Hyperledger Besu and Hyperledger Sawtooth.

BTP’s Sextant also supports Daml, an open-source smart-contract programming language created by Digital Asset. Smart contracts have become known to the world as transaction protocols running on a blockchain or distributed ledger, embodying the self-enforcing business logic of a multiparty application or business process. Daml was purpose-built for coding complex multiparty business processes, and designed to work with different blockchains and distributed ledgers, as well as databases.

Blockchain Technology Partners and Digital Asset have teamed up to launch ‘Sextant for Daml,’ a joint, commercial offering that enables organizations to build and deploy smart contracts with little effort and no special expertise, on a variety of persistence layers. Sextant for Daml supports Hyperledger Besu, Hyperledger Sawtooth, as well as Amazon QLDB, Amazon Aurora and PostgreSQL.

For more information on Sextant, click here. Get in touch: daml@blockchaintp.com.

For more information on Daml, click here. Get in touch: sales@digitalasset.com.

Release of Daml Connect 1.12.0

Daml Connect 1.12.0 was released on Wednesday, April 14th. You can install it using:

daml install latest

Want to know what’s happening more broadly among Daml developers? Check out the latest Daml Developer Monthly.

Summary

  • Daml projects can now depend on packages deployed to the target ledger.
  • The Daml StdLib contains a new module DA.List.BuiltinOrder, which provides higher-performance versions of several functions in DA.List by using a built-in order rather than user-defined ordering. We observed up to five-fold speedups in benchmarks.
  • The Daml assistant now supports the Daml Connect Enterprise Edition (DCEE), which comes with additional features:
    • The DCEE has an Alpha feature to validate and store signatures on command submissions. This provides non-repudiation in cases where the Ledger API is exposed as a service.
    • The DCEE contains a Profiler in Alpha which allows you to extract performance information for Daml interpretation. Profiler output is easily visualized in tools like Speedscope.

Impact and Migration

Some of the minor changes and bugfixes may require small actions to migrate:

  • Daml Triggers now use DA.Map instead of DA.Next.Map in their API. 
  • If you were directly targeting .dalf packages using data-dependencies, you now need to add --package flags to make the package contents available.
  • The Scala bindings now depend on Scala 2.12.13 instead of 2.12.12.
  • The compiler now produces Daml-LF 1.12 by default. If you want to stick to the old default, Daml-LF 1.11, please add the build-option --target=1.11.
  • Some gRPC status codes on rejected commands were changed from INVALID_ARGUMENT to ABORTED.

What’s New

Remote Dependencies

Background

The data-dependencies stanza in daml.yaml project files lists the binary dependencies which the project relies on. Until now, these dependencies were file-based, requiring developers to specify a path to .dar files. Quite often the dependencies are already running on a ledger, in which case the developer first had to get hold of a .dar file from the original source or by calling daml ledger fetch-dar. This new feature removes that extra step, allowing a Daml project to depend directly on a package already running on a ledger.

Specific Changes

Package names and versions, as well as package IDs, are allowed in the data-dependencies list of daml.yaml. These packages are fetched from the project ledger. The auto-generated daml.lock file keeps track of the resolution from package name/version to package ID and should be checked into the project’s version control. For example, to depend on the package foo-1.0.0 running on a ledger on localhost:6865, your daml.yaml should contain:

ledger:
  host: localhost
  port: 6865
dependencies:
- daml-prim
- daml-stdlib
data-dependencies:
- foo:1.0.0

Impact and Migration

This is a purely additive change.

High-Performance List Operations

Background

A number of functions in the standard library for lists, DA.List, rely on the elements of the list being orderable. They use orderings defined in Daml through Ord instances and thus need Daml interpretation for every comparison. Since Daml-LF 1.11, Daml has a canonical inbuilt ordering on all values, which is considerably more performant than comparison using Daml-defined orderings. This allows higher-performance versions of all ordering-based algorithms. The new, high-performance implementations are contained in a new module DA.List.BuiltinOrder.

As long as all orderings are derived automatically using deriving Ord, the results of the old and new implementations will agree, but the new version is substantially faster. If any values have user-defined Ord instances, these will be ignored by the new versions, hence the name BuiltinOrder.

Specific Changes

The Daml Standard Library has a new module DA.List.BuiltinOrder containing more efficient implementations of the sort*, unique*, and dedup* functions based on the builtin order. We saw up to five-fold speedups in our benchmarks.
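
As a sketch of the intended usage, assuming the new module mirrors the DA.List function names as described:

import DA.List (sort)
import qualified DA.List.BuiltinOrder as BO

-- Both yield [1, 2, 3]; the BuiltinOrder version uses the canonical
-- builtin ordering and avoids Daml interpretation for each comparison.
sortedClassic : [Int]
sortedClassic = sort [3, 1, 2]

sortedBuiltin : [Int]
sortedBuiltin = BO.sort [3, 1, 2]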

Impact and Migration

This is a purely additive change.

If you don’t care about the actual ordering of values, but only that values are orderable for algorithmic use, we recommend switching to the new versions of the algorithms for performance reasons. 

Daml Assistant support for the Daml Connect Enterprise Edition

Background

Daml Connect Enterprise Edition (DCEE) is a commercial distribution of Daml Connect containing additional features particularly useful in large or complex projects. DCEE components have been distributed via Artifactory for several releases now. With this release, the Daml Assistant also becomes aware of the Enterprise Edition and is able to manage additional developer tools and components not available in the Community Edition.

This release also contains two new features in Early Access for the DCEE, described in more detail below.

Specific Changes

The assistant now supports an artifactory-api-key field in daml-config.yaml. If you have a license for the DCEE, you can specify your key there, and the assistant will automatically fetch the Enterprise Edition components, which provide the additional functionality. See the installation documentation for more detail.

Impact and Migration

If you are an Enterprise Edition user, we recommend adding the artifactory-api-key field to your daml-config.yaml to benefit from the new features. If you already have the Community Edition installed, run daml install --force VERSION after setting the API key to replace it with the Enterprise Edition.

[Enterprise Edition] Daml Profiler in Alpha

Background

For large and complex Daml solutions, the speed of Daml interpretation can have a significant impact on overall system performance. The Daml Profiler gives developers of Daml applications a tool to extract timing information for real-world workflows in the well-established Speedscope file format and analyse the performance characteristics using standard tools.

Specific changes

The Daml Sandbox distributed as part of the Enterprise Edition of Daml Connect has a new command line flag --profile-dir. If set, the timings of the interpretation of every submitted command will be stored as a JSON file in the provided directory. These profiles are compatible with standard analysis tools like Speedscope. Please refer to the documentation for a complete usage example.
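
An illustrative invocation, with a hypothetical profile directory and DAR path:

daml sandbox --profile-dir .profiles .daml/dist/my-app-1.0.0.dar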

Impact and Migration

This is a purely additive change.

[Enterprise Edition] Non-Repudiation in Alpha

Background

There are many scenarios in which the Daml Ledger API is offered as a service, and Daml fully supports such deployment topologies through its multi-tenancy participant node design. With this new feature, the operator of a participant node that offers the Ledger API as a service to third parties gains additional security. The non-repudiation middleware captures, validates, and stores cryptographic signatures on command submissions, providing the operator with a strong audit trail to evidence that Ledger API clients did indeed submit commands matching the recorded transactions.

Specific Changes

Enterprise Edition users have access to a new runtime component on Artifactory. The component proxies the Daml Ledger API and validates and stores cryptographic signatures on any calls to the command submission services.

Please refer to the documentation for more details.

Impact and Migration

This is a purely additive change.

Minor Improvements

  • When running Daml Script on the command line, you will now see a Daml stack trace on failures to interact with the ledger, which makes it significantly easier to track down which of the calls fails. By default you will only get the callsite of functions like submit; to extend the stack trace, add HasCallStack constraints to your own functions and those will also be included (see the sketch after this list).
  • The Daml Assistant now also allows the sandbox port to be configured via --sandbox-option=--port=12345 instead of --sandbox-port. Other tools like Navigator, the JSON API and Daml Script will pick up the modified port automatically.
  • The trigger library now uses DA.Map instead of the deprecated DA.Next.Map if the targeted Daml-LF version supports it. This is a breaking change: Code that interfaced with the triggers library using DA.Next.Map, e.g. with Daml.Trigger.getCommandsInFlight or Daml.Trigger.Assert.testRule, will need to be changed to use DA.Map instead.
  • The Scala 2.12 version of the Scala Ledger API Bindings now depends on Scala 2.12.13 instead of Scala 2.12.12.
  • The compiler produces Daml-LF 1.12 by default. LF 1.12 significantly reduces transaction size.
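
As an illustration of the stack-trace improvement above, a minimal sketch (Account is a hypothetical template, and we assume HasCallStack is imported from DA.Stack):

import Daml.Script
import DA.Stack (HasCallStack)

template Account
  with
    owner : Party
  where
    signatory owner

-- With the HasCallStack constraint, this helper is included in the
-- stack trace printed when the submission inside it fails.
openAccount : HasCallStack => Party -> Script (ContractId Account)
openAccount owner = submit owner do
  createCmd Account with owner = owner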

Bugfixes

  • gRPC status codes for inconsistency rejections and Daml-LF errors (ContractNotFound, ReplayMismatch) were brought in line with their intended meaning by changing them from INVALID_ARGUMENT to ABORTED.
  • DALFs in data-dependencies that are imported directly now require corresponding --package flags to make them visible. If you specify DALFs instead of DARs you also have to list all transitive dependencies, but typically you only want to expose your direct dependencies. Previously this was impossible. With this change, DALFs that are data-dependencies are no longer treated as main DALFs so you have more control over which packages get exposed.
  • The Scala Codegen now supports the generic Map type added in Daml-LF 1.11 properly. Previously there were some bugs around the variance of the key type parameter which resulted in Scala compile errors in some cases. See #8879.
  • A bug in the Daml compiler was fixed where passing --ghc-option=-Werror also produced errors for warnings produced by -Wmissing-signatures even if the user did not explicitly enable this.

Integration Kit

  • The implicit conversions between Raw types (and the conversions to and from ByteString) have been removed. You will need to explicitly convert if necessary. This should not be necessary for most use cases.
  • Added a test suite for race conditions to the ledger-api-test-tool.

What’s Next

  • Performance: despite the order-of-magnitude improvements we have already accomplished, this continues to be one of our top priorities.
  • Improved exception handling in Daml is progressing well and expected to land in one of the next Daml-LF versions.
  • We are continuing work on several features for the Enterprise Edition of Daml Connect:
    • Working towards a stable release of the profiler for Daml. The profiler helps developers write highly performant Daml code.
    • Oracle DB support throughout the Daml Connect stack in addition to the current PostgreSQL support.
  • A new primitive data type in Daml that allows arbitrary precision arithmetic. This will make it much easier to perform accurate numeric tasks in Daml.

Daml Developer Monthly – April 2021

What’s New

The anniversary of Daml’s open sourcing (“Daml Day”) was just a few days ago so happy Daml Day to our programmers, users, engineers, and all the wonderful folk that make Daml great!

Every quarter we recognize users who went above and beyond in making Daml great, and we’ve just wrapped up our 4th community recognition ceremony – check out the winners and their contributions here!

Want to skip reading and instead listen to this and earlier editions? Check out Richard’s podcast here.

Jobs

We’re still hiring for many positions including Engineering, Client Experience, Business Development, and Sales. If you have even so much as an inkling that a job is for you, make sure to visit digitalasset.com/careers to apply, and share with your network.

We spotted a new Daml programming job in the wild from Plexus.

What We’re Reading and Watching

Some of DA’s most successful women shared insights on the triumphs and challenges of their careers at our latest DA-Versity webinar.

Ed released two top-notch posts showing us how to manage certificate revocation and harden our PostgreSQL for Daml deployments.

Lakshmi Shastry, Principal Solutions Architect at Brillio, walked us through how they are using Daml to optimize clinical trials.

Simon showed us that upgrading smart contracts need not be daunting. His latest blog post demonstrates the Accept-Then-Publish pattern as one solution to this problem.

I started giving “When Daml?” talks, the spiritual follow-up to “Why Daml?”, where I dive deeper into the pros and cons of using private smart contracts (as opposed to those running on public permissionless blockchains). Unfortunately we didn’t get a recording of this talk, but keep an eye out for future “When Daml?” events.

As always, Richard’s weekly privacy and security news posts are jam-packed with interesting stories from the always interesting world of cyber security.

Community Feature and Bug Reports

György got us to add more useful error messages for duplicate record fields.

Quid Agis caught a bug in the CSS on our Daml Cheat Sheet (it was missing), so we added it back; hopefully it stays this time.

Amiracam spotted a deprecated method being used in our quickstart-java template.

Joel found that some of our intro templates weren’t compiling, and now they are 🙂

Daml Connect 1.12 is out!

  • Daml projects can now depend on packages deployed to the target ledger.
  • The Daml StdLib contains a new module DA.List.BuiltinOrder, which provides higher-performance versions of several functions in DA.List by using a built-in order rather than user-defined ordering. We observed up to five-fold speedups in benchmarks.
  • The Daml assistant now supports the Daml Connect Enterprise Edition (DCEE), which comes with additional features:
    • The DCEE has an Alpha feature to validate and store signatures on command submissions. This provides non-repudiation in cases where the Ledger API is exposed as a service.
    • The DCEE contains a Profiler in Alpha which allows you to extract performance information for Daml interpretation.

The full release notes and installation instructions for Daml Connect 1.12.0 can be found here.

What’s Next

  • Performance: despite the order-of-magnitude improvements we have already accomplished, this continues to be one of our top priorities.
  • Improved exception handling in Daml is progressing well and expected to land in one of the next Daml-LF versions.
  • We are continuing work on several features for the Enterprise Edition of Daml Connect:
    • Working towards a stable release of the profiler for Daml. The profiler helps developers write highly performant Daml code.
    • Oracle DB support throughout the Daml Connect stack in addition to the current PostgreSQL support.
  • A new primitive data type in Daml that allows arbitrary precision arithmetic. This will make it much easier to perform accurate numeric tasks in Daml.

Tackling Counterfeit Drugs in the Global Pharma Supply Chain

Global sales of counterfeit drugs cost businesses billions of dollars per year. Drug counterfeiting affects human lives, business reputation, and return on investment across the entire pharmaceutical industry. The World Health Organization estimates that up to 30% of pharmaceutical products sold in emerging markets are counterfeit, and that about 1 million people lose their lives each year due to counterfeit medication.

Lakshmi Shastry, Blockchain Architect at Brillio, explains the impact this issue has on the pharmaceutical supply chain, including recent regulations and what’s needed to address these industry challenges head-on. Lakshmi is also participating in a webinar hosted by Digital Asset on March 31st to discuss this topic with Guido Rijo, Vice President, Supply Chain Digital Transformation at Johnson & Johnson. Click here for more information and to register. 

The challenges

The pharmaceutical supply chain faces several challenges: numerous stakeholders with complex demands, a lack of end-to-end process transparency, and time-sensitive, unorganized data. Data quality only becomes harder to maintain as internal and external stakeholders add and change data while drugs are researched, developed, and produced. It is difficult to monitor and validate the correctness of information while securing it against human error and missing documentation. There is also the concern of opaque transactions: to date, manufacturers, logistics companies, wholesalers, and pharmacists have little to no visibility into the authenticity and quality of a drug in transit.

Recent Regulations

The US Drug Supply Chain Security Act (DSCSA) and the international Global Traceability Standard for Healthcare (GTSH) regulations are intended to protect consumers from counterfeit drugs. The DSCSA requires pharma supply chain vendors to collaborate through an electronic, interoperable system that verifies a returned product's authenticity before resale and tracks and traces all prescription drugs.

One of the core requirements of DSCSA is that every prescription medication must have a unique product identifier which takes the form of a 2D barcode. These federally mandated barcodes serve as foundational building blocks for a common data model.
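
As an illustration only, the product identifier carried in the 2D barcode could be modeled as a plain Daml record; the field names below are our assumption, not taken from the regulation:

    -- Hypothetical sketch of the DSCSA product identifier fields.
    -- Field names are illustrative.
    module ProductIdentifier where

    data ProductIdentifier = ProductIdentifier
      with
        gtin : Text            -- Global Trade Item Number
        serialNumber : Text    -- unique per saleable unit
        lotNumber : Text
        expirationDate : Date
      deriving (Eq, Show)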

Following the requirements set forth by the DSCSA, the FDA openly called for pilots to address three main challenges of the legislation:

  1. Establishing a product identifier
  2. Ensuring barcode quality
  3. Achieving interoperability

High Level Architectural Considerations

A digital system is needed to securely record transactions across this complex multi-party supply chain network. Such a system should:

  • Define the rights and obligations of each actor so that the data is tamper-proof, near-real-time, and auditable.
  • Preserve the privacy and confidentiality of each party's data.
  • Maintain visibility into a single version of the truth.
  • Allow interoperability across the diverse technology stacks of the parties engaged in the supply chain (a minimal Daml sketch of the first two properties follows this list).
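
As a hypothetical sketch of how the first two properties map onto Daml (party names and fields are ours, not from the article): signatories encode each actor's rights and obligations, and only signatories and named observers ever see a contract, which gives per-party confidentiality by construction:

    -- Minimal sketch: rights and obligations via signatories,
    -- privacy via observers. All names are illustrative.
    module CustodySketch where

    template DrugCustody
      with
        manufacturer : Party
        custodian    : Party
        regulator    : Party   -- may see the contract, but cannot modify it
        barcode      : Text
      where
        signatory manufacturer, custodian
        observer regulator     -- visibility without authority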

This single version of the truth, with data integrity across parties, will eliminate double counting and reveal possible instances of counterfeiting, diversion, spoofing, or man-in-the-middle attacks. From a physical deployment perspective, such a solution can be realized in two ways.

  1. Hosted centrally by a trusted third party, with every other party connecting to this centralized system via APIs. While this model has been used for decades and is not new to us, it has a major drawback: every party still has to maintain its own copy of the data (or a subset) for its operational processes. Over time, the cost of constant reconciliation with the centralized system takes us back to the very problem we set out to solve. Such a solution also runs into data domicile regulations, necessitating multiple data stores and the associated reconciliations.
  2. The alternative is to leverage emerging technology such as Daml smart contracts, which allow the creation of mutualized multi-party workflows by modeling each party's rights and obligations. The resulting smart contracts can reside on various data storage mechanisms, ranging from a decentralized blockchain to traditional databases. Every party in the multi-party workflow accesses the same real-time information even though the physical data may sit in multiple locations to meet data domicile regulations. In this model, enabled by Daml, individual parties do not need to maintain separate offline copies for their operational processes.

Given the obvious benefits of the second approach above, we will outline it in more detail below. 

Business Process Overview

For those unfamiliar with Daml, it is an open-source, cross-platform smart contract runtime designed specifically for building distributed, multi-party workflows; applications built with it can run across multiple platforms and interoperate across ledgers. The Daml integrations, APIs, and runtime feature built-in safeguards that protect data integrity and privacy, helping create an interoperable system in which multiple parties can connect, verify, and transfer pharmaceutical products with full confidence in their authenticity, provenance, and financial transactions. Daml acts as a single reference store of smart contracts: each party sees a logical view of the same golden source, scoped by confidentiality and access controls, while sharing a common business process with complete privacy between applications. These benefits go beyond what traditional technology offers, positioning users as the provider of choice in their market.

Using Daml, all authorized stakeholders have transparency over the end-to-end drug delivery process. 

At each stage, a barcode is scanned and recorded onto a smart contract, which rests either on a blockchain or on a series of connected databases managed through Daml interoperability. These records create the audit trail for the drug's journey. The system can track every delivery, with the delivery driver identified through biometric measures, and every checkpoint the drug passes through can be measured and recorded. It can also incorporate sensors into the supply chain, with temperature or humidity readings recorded onto the ledger. With a drug fully tracked from creation to patient, the supply chain becomes a holistic, accurate, audited, and secure process.

Smart contracts enable shared workflows, real-time information flow, and transparency with the ability to extend into the value chain, bridging silos within and between enterprises and reducing risks. End-to-end traceability and tracking enables trust among all involved parties for product integrity, item level fidelity, prompt recalls, incident investigations, dispute resolutions, and compliances across complex pharma supply chains. 

An initiating counterparty specifies contractual conditions, such as a required label with the federally mandated 2D barcode, that must be adhered to by all custodians in the supply chain. If at any point a device takes a temperature or humidity measurement that is out of range, the smart contract state updates to indicate that the shipment is out of compliance, recording a transaction on the blockchain/database and triggering remediating events downstream.
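
A hypothetical Daml sketch of that behavior (thresholds, parties, and fields are ours): a choice records each sensor reading on the ledger and flips the contract into an out-of-compliance state when a reading is out of range, leaving an auditable transaction that downstream automation can react to:

    -- Hypothetical sketch of the compliance check described above.
    module ShipmentSketch where

    template Shipment
      with
        carrier      : Party
        insurer      : Party
        barcode      : Text
        maxTempC     : Decimal  -- contractual temperature ceiling
        inCompliance : Bool
      where
        signatory carrier
        observer insurer

        -- Recording a reading archives this contract and creates an
        -- updated one, so every state change is a ledger transaction.
        choice RecordTemperature : ContractId Shipment
          with
            reading : Decimal
          controller carrier
          do
            create this with inCompliance = inCompliance && reading <= maxTempC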

Pharma Supply Chain Process

Using Microsoft Azure as the Underlying Persistence Layer

Microsoft Azure combined with Daml offers multiple deployment topologies.  

While the centralized record can be stored in a single Azure database, using the Daml Driver for PostgreSQL on Azure Database for PostgreSQL to streamline initial change management, the architecture allows for a future model in which each party hosts its own “node” (either an Azure database or a blockchain node) for additional physical privacy and compliance with data domicile requirements. The Daml smart contract platform automatically manages the multi-party workflow across these individual “nodes”. This is the interoperability property of Daml: it creates a network of individual networks, each using its own physical storage and application technology stack.

The Microsoft Confidential Consortium Framework (CCF), a multi-party compute framework that leverages secure enclaves, can also be used to deploy Daml multi-party workflows. Using CCF enables private and highly performant transactions, with throughput and latency comparable to a centralized database. This deployment topology operates like a blockchain system without the data privacy concerns.

From integration and monitoring to network configuration, smart contract development, privacy, and high-performance computing, Microsoft Azure powers a data-driven approach to digital supply chains for tracing, tracking, and verifying goods.

Brillio’s three-step approach using Microsoft AzureDB and Daml is to:

  • Build a multi-party network using a rights and obligations model.
  • Simplify governance and management respecting each party’s technology choices.
  • Integrate the solution with existing systems and tools to reduce IT roadmap complexity.

Significantly, such a network can support more flexible confidentiality models, give control over which authorized parties' transactions are revealed, and improve energy efficiency relative to proof-of-work and proof-of-stake algorithms.

Conclusion

The solution architecture outlined in this blog strengthens provenance in the pharmaceutical drug supply chain, reduces counterfeits, and ensures compliance with regulations. Leveraging the Daml integration with Microsoft AzureDB and Internet of Things (IoT) technology increases quality compliance and visibility for temperature-sensitive biologics logistics. The solution provides a ‘chain of custody’ for pharma supply chain lifecycle management.

The solution can integrate with external consuming applications for the extension of services that currently require intermediaries. These external processes include insurance, legal, brokerage, settlement services, delivery scheduling, fleet management, freight forwarding, and connectivity with business partners.

Over time, the architecture outlined above allows for the creation of a roadmap where business transactions (e.g., automating payments and transferring ownership between parties) can be onboarded onto the core multi-party rights and obligations model. As this model evolves, it can also address the complexity of the CAR T-cell therapy supply chain, where the patient is also part of the chain and both information privacy and supply chain integrity need to be maintained.

Brillio’s Daml for Azure model provides a foundation to create such digitized workflows shared across supply chain business stakeholders, authorities, agencies, and ultimately consumers. Consortium-based applications ingest signals from relevant user interfaces and communicate with consuming apps of businesses across the consortium.

Please connect with me or join our upcoming webinar on March 31st to see how a combination of Daml, IoT, and cloud can overcome current supply chain challenges and power the future of supply chains. 

Join the Webinar

Release of Daml Connect 1.11.1

Daml Connect 1.11.1 has been released to fix a few bugs, namely:

  • An issue with the JSON API’s websocket heartbeating, which was causing trouble for some using this functionality.
  • Ledger Pruning was supposed to become a generally available feature with 1.11.0, but we failed to update the documentation to that effect and didn't include it in the release notes. With 1.11.1, we are now retroactively declaring Ledger Pruning stable on Ledger API 1.10.
  • A bug in the CommandService and client bindings could cause problems in situations with many timeouts.

You can install 1.11.1 using:

daml install 1.11.1

Complete release notes for the 1.11.x release can be found here.