Interconnected / September

The not-so-secret weapon

We’ve been quietly working with companies across the world to help them leapfrog their competition with a tool designed for the job. This month, we welcome SGX, BNP, Temasek, HSBC, oh, and the official blockchain network of China to the DAML ecosystem.

There’s only one way to build distributed applications without worrying about the underlying infrastructure and, with the help of our partners, now you don’t have to worry about managing Corda Flows either. Focus on your business logic, pick the best infrastructure for your needs, and get to market first with DAML.

Read the Full Newsletter

Running a DAML App on Multi-Participant Ledgers

What is a “multi-participant” ledger?

DAML ledgers can have various topologies. For this blog post, we are interested in ledgers with a multi-participant topology. A participant is a node that serves the Ledger API to a set of system parties, and hence a multi-participant topology is a topology with multiple participant nodes. The different participants will typically serve different sets of a system’s parties. You can read more detail about DAML ledger topologies and participants in the documentation.

Why multi-participant ledgers?

Multi-participant ledgers are quite common, for example, DAML on Hyperledger Sawtooth and the Canton ledger interoperability protocol both use multi-participant topologies. The diagram below shows an example multi-participant topology on Canton:

example multi-participant topology on Canton

In this diagram, system parties connect to participant nodes that serve the Ledger API. The participant nodes connect to domains. Each Canton domain is a DAML ledger instance that facilitates Canton’s synchronization protocol between participants.

Different implementations vary in their reasons for using multiple participants, but in general there are several potential advantages to such a topology. Some of these advantages are:

  • Privacy: Typically, participants only know about the portion of the ledger relevant to their parties. A party can prevent its data from being stored by an untrusted entity by running its own participant node, or by picking a trusted participant node.
  • Scalability: Having multiple participants can improve the throughput of a ledger under certain workloads, as work can be shared between participants.
  • Reducing trust assumptions: The participant(s) hosting a party act on its behalf; other participants do not. Under some implementations, participants do not trust each other, so a party only needs to trust its hosting participant.

Example of running `create-daml-app` on Canton

Running applications using multiple participants can add complexity, particularly during development when you may want to host multiple participants from the same machine. In the rest of this post I’ll describe how to run the create-daml-app example application on two participants simultaneously, using Canton as our example multi-participant ledger. Before trying this out, I recommend that you follow the DAML getting started guide.

For this tutorial we will use the following dependencies:

  • Canton 0.18.1
  • DAML 1.5.0
  • The dependencies specified here

The following diagram shows the overall topology we’re going to set up in the remainder of this post.

Overall topology we’re going to set-up in the remainder of this post

Start by installing Canton. Unpack the archive to a location of your choice, for example `~/canton-cda`, and `cd` into the unpacked archive.

Next, fetch the `create-daml-app` example. Use the exact command given below.

Build the DAR and install the dependencies for the UI with the following set of commands:

Return to the directory where you unpacked Canton, and start Canton.

This will start two participant nodes, giving us a multi-participant topology. It will also allocate two parties: `Participant1` will host the party `Alice`, and `Participant2` will host the party `Bob`. Leave the Canton console running; we’ll need it later.
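For context, Canton’s bootstrap script for this example does something along these lines. The Canton console is Scala-based; the calls and names below are indicative of the 0.18-era console API and are a sketch, not the exact script shipped with the example (the domain name and port are assumptions):

```scala
// Connect both participants to the synchronization domain
participant1.domains.connect("mydomain", "http://localhost:7018")
participant2.domains.connect("mydomain", "http://localhost:7018")

// Allocate one party on each participant
val alice = participant1.parties.enable("Alice")
val bob   = participant2.parties.enable("Bob")
```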

We’ll now start two different instances of the UI that will run concurrently, one for `Alice` and one for `Bob`. To connect the UI to a participant’s Ledger API we need to use the HTTP JSON API, so we need an instance of this for each participant.

In a new terminal window, start the HTTP JSON API for Participant1:

And also start the HTTP JSON API for `Participant2` in another terminal window:
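For reference, the two invocations look roughly like this. The Ledger API ports here are placeholders; use the ports exposed by your Canton configuration:

```sh
# JSON API for Participant1 (port numbers are assumptions)
daml json-api --ledger-host localhost --ledger-port 5011 --http-port 7575

# JSON API for Participant2, in a second terminal
daml json-api --ledger-host localhost --ledger-port 5021 --http-port 7576
```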

We can now start the two UI instances, again in two new windows. For `Participant1`:

And for `Participant2`:

This will open a UI talking to `Participant1` (`Alice`’s participant) on http://localhost:3000/ and a UI talking to `Participant2` on http://localhost:3001/. Unfortunately, we can’t just log in using the strings `“Alice”` or `“Bob”`, because Canton uses a more complicated representation of party IDs. We can return to the Canton console to look up Alice and Bob’s party IDs:
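In the Canton console, something like the following lists the allocated parties along with their full IDs (the command name is indicative of the 0.18-era console API and may differ by Canton version):

```scala
participant1.parties.list()   // shows Alice's full party ID
participant2.parties.list()   // shows Bob's full party ID
```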

So, in this example, we can log in as Alice with the ID `Alice::01691ae95af923195517af5de7cd551d8a52015943d2640b98290af5cb8c5d33fb` and as Bob with `Bob::01dede5ea97789e9965eb94532b12fe4b83cf06c5571c20787d794e936c0b2df0a` (your IDs will differ).

Use these to log in to the two UI instances. It should look something like this:

UI instances

Admittedly the IDs are a bit long, but it’s otherwise fully functional 😀. Alice and Bob should be able to follow each other: the different participants should be able to interact with each other using the Canton interoperability protocol. You can find a more detailed version of this tutorial in the Canton documentation.

Also, if you are interested in interoperability, Anthony Lusardi presented a cross-platform interoperability demo at the Hyperledger Global Forum 2020 in Phoenix:


Streamline proxy voting and regulatory reporting using DLT

Since the very beginning of IntellectEU’s partnership with Digital Asset, corporate actions have been a consistent theme (DAML Driven Development: IntellectEU) for using DAML as a tool to express and tackle multiparty workflows. DAML’s capabilities have helped IntellectEU rethink business approaches and reshape technical implementations, delivering processing efficiencies and value-added solutions on these use cases to a broad set of clients.

As part of IntellectEU’s product research and development strategy, the company decided to revisit corporate actions. This time, IntellectEU focused specifically on the existing challenges in proxy voting with respect to the “Shareholder Rights Directive II”, also known as “SRD II”, and corporate governance events.

Background

Shares of listed companies are often held through a complex chain of intermediaries (the “holding chain”), which complicates the exercise of shareholder governance rights and may act as an obstacle to shareholder engagement. 

Message-based approach for Shareholder Identification Request

Companies need better ways to communicate with shareholders. The message-based approach shown above creates multiple challenges in the exercise of shareholder rights. Take, for example, the general meeting broadcast and the voting process. Because meeting notifications and proxy votes are relayed through the holding chain, information can be distorted at multiple points, voting deadlines are more conservative than they need to be, multiple controls must be put in place to verify votes cast against positions held, and shareholders have almost no way to ensure their votes have been properly registered and counted during the general meeting. These challenges are described in further detail below.

Challenges

Existing workflows dealing with these topics present various challenges to the parties involved. The items below provide a more detailed overview on the effects of the prevalent message-oriented process. 

  • Risk of information being lost or distorted, as meeting notifications and subsequent modifications take time to cascade up and down the chain from the issuer to shareholders and may require translation to and from local languages.
  • Conservative deadlines for proxy voting notification, especially from intermediaries down the holding chain, even after regulatory changes that no longer require shares to be blocked from trading.
  • Shareholders have no control over the effective counting and direction of their vote, due to the lack of communication and the fact that many intermediaries still opt for omnibus accounts and cast their votes in bulk, hiding the shareholder and their respective vote from the company.
  • Cost multiplication due to the number of parties involved in the cascade model that follows a message-based approach (using SWIFT), as well as the need to set up numerous SLAs between all parties, including within the same institutions that play different roles during the voting process (they can be both global and local custodians).
  • Either no reconciliation takes place, or there is a cumbersome and costly reconciliation of shareholder entitlements at the global level. This results in an increased risk of “over voting” or “under voting.” When discrepancies are found, resolving them requires multiple exchanges of communication (emails and SWIFT messages), cancellation of voting instructions, and subsequent casting of new voting instructions between account holders and account servicers.

The Exercise

This exercise implements a single place of “communication” between the Issuer and Investor (beneficial owners), built on top of a distributed ledger. This approach eliminates much of the processing complexity that results from the holding chain and message-based communication. As an example, under the proposed solution, the issuer may approach the Central Securities Depository (CSD) with shareholder identification requests; thanks to the relationship disclosure contracts, the CSD no longer needs to cascade through the custody chain to identify shareholders.

DLT-enabled Shareholder Identification Request

A first workflow is implemented to set up the governance event (meeting event or information event) as a shared record (“the golden record”) providing all the information to be distributed to beneficial owners. It involves the Issuer and the Issuer’s agent, who gather and structure the information and define the agenda on which voting might be required. Once confirmed by the Issuer, the shared record is made accessible to the whole network, including dedicated media providers (such as Bloomberg, Thomson Reuters, etc.).

If a vote is required, a second workflow drives the voting process. It would be valuable here to combine it with a system capable of maintaining a registry of the beneficial ownership of the shares, which would make it possible to automate the calculation of voting rights at the beneficial-ownership level.

For simplicity, the exercise assumes that the CSD knows the custody chain for a voting party and can input its entitlement position directly into the system. In DAML, this was achieved by requiring the voting party to disclose the custody chain upfront. Before that disclosure can happen, however, the voting party must first execute a set of straightforward initiate-accept DAML patterns to onboard each of the entities, referred to in the code as partners, involved in the custody chain.

Accepting the “PartnerProposal” leads to the creation of the “Partner” contract, where the CSD is given the right to approve the partner into the system. This emulates the CSD performing due diligence on that entity.

Lastly, as part of the “ApprovePartner” choice, each entity or partner is associated, according to its role, with the “CustodyChain” contract. With this contract set up, the CSD can leverage this information on demand, which means it can communicate entitlement positions throughout the custody chain more effectively.
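A minimal sketch of this initiate-accept onboarding flow in DAML might look as follows. The template and field names here are illustrative, not IntellectEU’s actual model, and the CustodyChain bookkeeping is elided:

```daml
module CustodySketch where

template PartnerProposal
  with
    votingParty : Party   -- the party onboarding its custody chain
    partner : Party       -- an entity in the custody chain
    csd : Party
  where
    signatory votingParty
    observer partner
    choice AcceptProposal : ContractId Partner
      controller partner
      do create Partner with
           votingParty = votingParty
           partner = partner
           csd = csd

template Partner
  with
    votingParty : Party
    partner : Party
    csd : Party
  where
    signatory votingParty, partner
    observer csd
    -- The CSD exercising this choice emulates its due diligence
    -- on the partner; the real model would then associate the
    -- partner with a CustodyChain contract here.
    choice ApprovePartner : ()
      controller csd
      do pure ()
```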

Beyond the simplicity of modelling such workflows with DAML, it is worth noting that DAML was also leveraged to provide an immutable audit trail of the workflow’s processing steps, irrespective of the ledger of choice. Moreover, it guarantees greater transparency of the voting process and provides sufficient evidence of vote counting. Voting deadlines can be improved, since entitlements are available as part of the golden record kept on the ledger. Smart contract validation can automatically prevent shareholders from “over voting”, making vote reconciliation easier and less costly for everyone involved in the process.

Governance event and proxy voting workflows

By improving and simplifying these workflows, and reducing time and friction, the solution gives shareholders more time to analyse the impact of their vote and cast a more considered vote on each item of the general meeting’s agenda.

Furthermore, through the use of DAML, stakeholders in this process gain greater flexibility in governance models. A central trusted party could push for an implementation on a centralized ledger under its control, while a consortium could just as well follow a more decentralized solution using the same workflows. Under either option, the synchronization and control of the workflow steps is guaranteed by DAML, and data segregation is achievable through DAML’s Ledger API.

From both perspectives, the current holding chain is enhanced by such a system, in which beneficial owners have direct and instant access to the governance event’s shared record and, through it, are empowered to vote on their position.

Benefits

The solution described above delivers a range of positive impacts for entities affected by current practices in governance events and proxy voting. The proposed platform constitutes a competitive, cost-effective proxy-voting solution that excels on the points below. The advantages include:

  • Direct communication is established between the issuer and shareholders. Shareholders have access to the detailed meeting information and the required company information, in line with SRD II. No more information distortion, no more information delays.
  • The deadline for voting instructions is both standardized and improved for all shareholders, and no longer depends on point-to-point communication that worsens SLAs.
  • Transparency: access to voting results is immediate, and shareholders can check the system to confirm that their vote has been counted as per the given voting instructions.
  • Operational risk is reduced: global entitlement reconciliation and control can be performed in one place before authorizing the vote, avoiding over- or under-voting.

Conclusion

As DAML’s value proposition has evolved, IntellectEU has kept pace by delivering applications that leverage its multiparty workflow modelling features. Over this time, IntellectEU has also advanced its capabilities to support a continuous stream of new DAML ledger integrations, such as R3 Corda, Hyperledger Fabric, and SQL databases.

In closing, the partnership between IntellectEU and Digital Asset continues to move forward in enabling clients to surmount the information silos of an ever-connecting world. Learn more about our partnership here and check out IntellectEU’s solution on the DAML marketplace:

Proxy Voting Solution

How to Monetize your Data at Scale

Guest post from Zohar Hod, CEO of One Creation


Throughout my long career, I have always been able to hone in on the one thing I knew I could do well, maybe better than others (so humble), and that is turning data into money. My experience working for large and small data vendors helped me achieve and fine-tune this skill. Data vendors rely on their clients’ data contributions and constituents, making deals and exchanging money on the back of them. It is a basic notion centered around two key points – a contract, and trust between parties that they will uphold the terms of the agreement.

The ability to share data peer-to-peer (P2P) has always been the holy grail of business. This ability could provide amazing efficiencies in terms of cost and quality of data sets, as well as new data sets and business insights that overlap across sector ecosystems. Data vendors might be afraid of this changing paradigm, but there is nothing to fear, only opportunity. New technologies are creating numerous opportunities to enable better data ecosystems. When combined with major data vendors’ trusted distribution environments, we will have platforms that can take data-driven business models to the next level.

It’s no secret that new technologies offer a mechanism to share and sell data both P2P and at scale. Let’s face it, data is “the new Oil or Gold or….” and there is so much of it. Companies are sitting on petabytes of business insights that are worth a fortune. So why aren’t companies monetizing data for their organizations?

The main reason is that most companies don’t know where to start. Even more problematic is the primary issue around sharing data at scale and its security. Without the ability to control your data, the process of monetization cannot begin. What does data control mean?

For us at One Creation, data control is about

1) Data Mobility – The ability to move your data from one vendor or client to another with ease. Similar to moving your phone number from one mobile provider to another, this has massive implications that are beneficial both to individuals and enterprises.

2) Usage Control – The ability to track and protect your data even after it has reached your clients. Can a chain of trust be established through the data supply chain? Yes: using a unique combination of access control tools and new automated bot infrastructure, data lineage can be achieved.

3) Data Leakage – Data vendors spend millions of dollars yearly to audit your usage of their data, even if you’re contributing data to the same vendor. It’s an uncomfortable part of the relationship and costs the vendors and the owners of the data billions in annual losses. Most data leakage is not malicious and is usually just part of a legacy workflow of distributing data that was not intended for that type of use. Hence, if we could control all the honest actors, data owners would benefit tremendously.

4) Data Abuse – This is the last pillar of how we define data control. One does not need to look far to see the daily abuse we are subjected to when it comes to sensitive data: Google abuses our data even when we use Incognito mode, and other platforms sell our data with or without our explicit consent. In fact (a slightly self-promotional moment), we created and named our company One Creation because we are committed to the motto that there can be only one original creator of your data, and that is you!

Let’s continue the discussion
Are you ready to start monetizing your business insights and creating new and exciting revenue streams while maintaining full control of your data? We can help!

Join One Creation and Digital Asset on September 22nd at 10 AM ET for a live panel discussion about controlling, tracking and monetizing data. In this session, we will outline what you need to know about data control and what’s necessary for companies to achieve data monetization at scale.

Click here to register for the webinar.

Not available on September 22nd? Get in touch for a one-on-one discussion and demo.

GBBC Davos 2020 – Blockchain, smart contracts, and sustainability

“Outcompeting Destructive Systems” In Partnership With Odyssey

“Driving sustainable growth in the 21st century will require new solutions to replace systems that have contributed to so many contemporary civic and environmental challenges.”

In January 2020, Digital Asset Co-Founder and CEO Yuval Rooz spoke at the World Economic Forum in Davos about how new technologies and DLT can combat pollutive and destructive practices in key sectors and help realize the UN Sustainable Development Goals.

Conversation Lead:
Rutger van Zuidam, Founder & CEO, Odyssey.org

Panelists:
Kavita Gupta, Lecturer, Stanford University; Managing Partner, Katapult.

Sanjay Poonen, Chief Operating Officer, Customer Operations, VMware

Rod Beckstrom, Former President & CEO, ICANN; Founder & CEO, BECKSTROM

Yuval Rooz, Co-Founder & CEO, Digital Asset


DAML Script – Scenarios 2.0

Developers benefit greatly from a fast feedback loop. The less time you have to wait between writing code and seeing whether it does what you expect, the better. DAML Studio provides immediate feedback for compiler errors and warnings in your code. Scenarios allow you to test your templates against an emulated ledger and can be run directly from within DAML Studio, providing you with live updates as you change your code.

Scenarios provide a great way to test and develop your DAML models. However, they have important shortcomings: they do not interact with an actual ledger. This means that you cannot use scenario code to initialize arbitrary ledgers, build automation, or test other ledger clients, such as your UI or DAML triggers. Scenarios also allow you to do things that are not possible on an actual ledger while, on the other hand, missing functionality that ledgers do provide (e.g., querying all active contracts). This means it is easy to design DAML models that are usable in scenarios but do not expose the APIs necessary to use them via the Ledger API.

Because Scenarios allowed operations that are simply not possible via the Ledger API, there was no way forward for them that didn’t involve fundamentally breaking existing Scenarios. So a more powerful replacement was needed.

DAML Script – One API to rule them all

DAML Script addresses the shortcomings of scenarios by providing an API that offers more features, but also imposes the same restrictions as the Ledger API and is fundamentally designed to work against it. Unlike scenarios, we designed DAML Script to be a universal scripting language for DAML, so we started with the most restrictive use-case: since SDK 0.13.55 you have been able to use DAML Script to run a script in a DAR against an actual ledger, and lots of people have used it successfully for ledger initialization, automation and testing.

However, you still had to rely on scenarios to get the interactive experience in DAML Studio. Since the APIs for scenarios and DAML Script are different (for good reasons since as mentioned before, scenarios provide things that cannot be executed against actual ledgers), it was not easily possible to share code between your scenario test cases and DAML Scripts which resulted in a lot of code duplication and made it harder to develop DAML Scripts due to the lack of interactive feedback.

Now in SDK 1.5.0 this is changing! In this post we are happy to announce two recent additions that improve the interactive development experience even further, and complete our quest to offer a more universal alternative to Scenarios: DAML Script in DAML Studio, and the DAML REPL. With these new features, DAML Script now not only allows you to interact with a real ledger but it also provides you with the interactive development experience you know and love from scenarios all within a single language.

DAML Script in the IDE

DAML Scripts are now executed in DAML Studio, providing you with the same functionality that you know and love from scenarios. You don’t have to enable anything for this. Write a script and you will see the “Script results” button that you can click on to see the ledger state as well as the transaction view. The functionality for inspecting the results is identical to scenarios. With this change, you can now rely on DAML Script exclusively: for interactive feedback in DAML Studio, for unit tests executed via `daml test`, for interacting with an actual ledger via the `daml script` command, as well as for interactive use via `daml repl`.

If you have used scenarios this will look very familiar but you might notice a few differences:

  • `script` instead of `scenario`.
  • `allocateParty` instead of `getParty`.
  • `createCmd` and `exerciseCmd` instead of `create` and `exercise`.
  • `queryContractId` instead of `fetch`.

The last item in particular highlights the difference between scenarios and DAML Script. The `fetch` function is only available within `Update`s and is not available on the Ledger API. When you are converting scenarios that contain more complex `Update`s to DAML Script, you may need to define new templates and choices that allow you to execute this code on the ledger.
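Putting those renamings together, a scenario-style test rewritten as a DAML Script looks roughly like this. The `Asset` template is assumed here purely for illustration:

```daml
module ScriptExample where

import Daml.Script

template Asset
  with
    issuer : Party
    name : String
  where
    signatory issuer

test : Script ()
test = do
  -- allocateParty replaces getParty
  alice <- allocateParty "Alice"
  -- createCmd replaces create
  cid <- submit alice do
    createCmd Asset with issuer = alice, name = "TV"
  -- queryContractId replaces fetch and returns an Optional
  Some asset <- queryContractId alice cid
  assert (asset.name == "TV")
```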

You can also use the exact same DAML Script and run it against an actual ledger with the `daml script` command. You can find the full example at

DAML REPL

In SDK 1.4.0, we introduced the new DAML REPL, which allows you to use the familiar DAML Script API interactively. This is great for one-off tasks and interactive exploration of a ledger, but also for exploring pure code without any ledger, e.g., to better understand the DAML standard library. You can start `daml repl` anywhere and start typing! If you are interacting with the JSON API, you might also enjoy the new functionality in SDK 1.5.0 to display the JSON representation of a DAML value.

If you would like to interact with your contracts in DAML REPL, you can start it in your project, connecting to a ledger using the `--ledger-host` and `--ledger-port` flags. Following the example from above:
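Assuming the project’s DAR and a default sandbox-style port (the DAR path and port below are assumptions; adjust them to your ledger), the invocation looks roughly like:

```sh
daml repl --ledger-host localhost --ledger-port 6865 .daml/dist/create-daml-app-0.1.0.dar
```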

Try it out and if you have any feedback or questions ask them on our forum!

Ask questions in the forum

Release of DAML SDK 1.5.0

DAML SDK 1.5.0 was released on September 16th, 2020. You can install it using:

daml install latest

There are no changes requiring migration. However, we do recommend using DAML Script in favor of Scenarios going forward. Learn more about Script in the notes below.

Interested in what’s happening in the DAML community and broader ecosystem? If so, we’ve got a jam-packed summary for you in our latest community update.

Highlights

  • DAML Script is now fully integrated into the IDE and will supersede Scenarios.

Impact and Migration

  • There are no changes requiring migration.
  • We recommend using DAML Script in favor of Scenarios for any new testing and scripting needs. It is more versatile and will supersede Scenarios.

What’s New

DAML Script in the IDE

Background

DAML Script was designed from the ground up to be “Scenarios 2.0”: A single scripting language for DAML for all use-cases. To make sure Scripts could be used truly universally, the first covered use-case was scripted interaction with live ledgers, introduced with SDK 0.13.55 in March 2020. DAML Repl and Triggers added interactive tooling and reactive automation. This latest release now completes the journey by providing the same real-time IDE integration that Scenarios have always offered, as well as adding a few more Script features that ease migration.

Scenarios offer functionality that cannot be offered via the Ledger API, which made it impossible to extend them into what DAML Script now is without breaking backwards compatibility. They are also in such widespread use that breaking backwards compatibility on Scenarios would have been costly. The two features will therefore live in parallel for a while, but Scenarios are going to be deprecated in an upcoming release, meaning they may be removed in a major SDK version no earlier than 12 months after the date of deprecation and will not receive new features.

For more details on DAML Script and its role as “Scenarios 2.0”, please refer to our latest blog post.

Specific Changes

  • DAML Scripts are now run in DAML Studio just like scenarios. The functionality for inspecting the results is identical.
  • Add a `script` function to aid type inference. This is the equivalent of `scenario`.
  • In DAML Studio, you can now set the ledger time to the past. This is not supported when running against a ledger.
  • Add a `queryContractId` function for querying for a contract with the given identifier. This offers the Scenario functionality of `submit p do fetch cid` in a way that’s consistent with Ledger API use, but also allows for negative results by returning an `Optional`.
  • Add a `passTime` helper to advance the time by the given interval.
  • Add `archiveCmd cid` as a convenience wrapper around `exerciseCmd cid Archive`.
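A quick sketch of the new helpers in use (the `Asset` template is assumed for illustration):

```daml
module NewHelpers where

import Daml.Script
import DA.Time (days)

template Asset
  with
    issuer : Party
  where
    signatory issuer

demo : Script ()
demo = do
  alice <- allocateParty "Alice"
  cid <- submit alice do createCmd Asset with issuer = alice
  -- passTime advances the (static) ledger time by the given interval
  passTime (days 1)
  -- archiveCmd cid is shorthand for exerciseCmd cid Archive
  submit alice do archiveCmd cid
  -- queryContractId returns None once the contract is archived
  None <- queryContractId alice cid
  pure ()
```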

Impact and Migration

Given Scenarios’ widespread use and the fact that Script does not have complete feature parity with Scenarios, Scenarios will go through the proper deprecation cycle:

  1. They will be deprecated in an upcoming SDK version, and from SDK 2.0, the SDK will emit warnings on Scenario use.
  2. They may be removed from the next major SDK version 12 months from the time of deprecation.

Given DAML Script’s additional capabilities and the fact that Scenarios will not receive new features after deprecation, we recommend developing any new tests or initialization scripts using DAML Script instead of Scenarios.

If you would like to migrate existing Scenarios, please refer to the migration guide. If you are not sure how to migrate your Scenario to DAML Script, please get in touch with us.

Minor Improvements

  • You can now configure the application id submitted by DAML Script, REPL, and Triggers to the Ledger API using the  --application-id or --participant-config command line flags. This is needed if you are working against a ledger with authentication and need to match the application id in your token.
  • You can now convert DAML expressions to JSON in the DAML REPL using the meta-command :json. For example: :json [1, 2, 3]. This allows you to test how the JSON API would convert to JSON.
  • The maximum size for packages can now be configured independently in the JSON API. The  optional --package-max-inbound-message-size command line option sets the maximum inbound message size in bytes used for uploading and downloading package updates. It defaults to the max-inbound-message-size setting.
  • The DAML Standard Library has gained
    • a new function visibleByKey, together with improved  documentation on the authorization rules of the various byKey functions.
    • a Show instance for Ordering.
  • The @daml/react can now fetch a contractId using the useFetch hook.
  • The DAML Engine has had significant performance improvements:
    • foldl and foldr are each four times as fast as before
    • Record update expressions of the form R with f_1 = E_1; ...; f_n = E_n are much more efficient.
  • New docs pages giving further detail on the ordering guarantees DAML Ledgers give on events, and how the theory can be extended to span multiple infrastructures.
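The record-update improvement mentioned above can be illustrated with a small sketch (the Account type and its fields are our own, hypothetical example):

```daml
module RecordUpdateSketch where

-- Illustrative record type; the names are ours.
data Account = Account with
    owner : Text
    balance : Decimal
    updated : Bool
  deriving (Eq, Show)

-- A single multi-field update of the form `r with f_1 = e_1; ...; f_n = e_n`
-- now compiles to considerably more efficient code than before.
credit : Decimal -> Account -> Account
credit amount acc = acc with
  balance = acc.balance + amount
  updated = True
```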

Early Access Development

  • DAML Trigger Service
    • Parties are now specified in request bodies as opposed to via HTTP Basic auth. This is done in preparation for running the trigger service against authenticated ledgers.
    • The database format has changed to allow migrations in future releases. Databases are always initialized or migrated to the current version on start, so use of --init-db is no longer required. See issue #7092.
    • The --address option can be used to listen for HTTP connections on interfaces other than localhost, such as 0.0.0.0 for all addresses. See issue #7090.

Bugfixes

  • The fetch by key streaming endpoint (/v1/stream/fetch) of the JSON API had a bug where streaming multiple keys could silently ignore some of the given keys. This feature was not used by the @daml/* client libraries so you were only affected if you subscribed to the websocket endpoint directly.
  • The JSON API no longer returns offsets before the initial active contract set block. This matches the documented behavior that the first message containing an offset indicates the end of the ACS block.
  • The --application-id command-line option on the JSON API is now hidden and deprecated. The JSON API never used it, as it uses the application ID specified in the JWT.
  • A bug in the Extractor that could cause transient contracts (created and archived within the same transaction) to be shown as active was fixed. See issue #7201 for details.
  • Calling the off method of the @daml/* client libraries’ event streams returned by streamQuery and streamFetchByKey now correctly removes the given listener.
  • The Scala bindings’ maxInboundMessageSize option in LedgerClientConfiguration was fixed. It previously only set the maximum size for the metadata. maxInboundMessageSize now does the correct thing, and a new option maxInboundMetadataSize was added to set the maximum metadata size. These names match the ones used by the Netty channel builder.
  • The Scala bindings’ client was sometimes not closed properly on application shutdown. You may have seen some RejectedExecutionException errors in your logs if this has affected you. This is now fixed, but we encourage all users of the LedgerClient to call the close method explicitly to ensure it is closed at an appropriate time in your application lifecycle.
  • withTimeProvider was removed from CommandClient in the Scala bindings. This method has done nothing since the new ledger time model was introduced in 1.0.0. See issue #6985.

Integration Kit

  • When running the Ledger API Test Tool, the required DAR files are now uploaded to the ledger automatically before running the tests. You no longer need to upload these DARs before running the test tool.
  • kvutils now expects execution contexts to be passed into the various SubmissionValidator, LedgerStateAccess, and LedgerStateOperations methods. This is a source-breaking change. Instead of providing an execution context implicitly to your ledger implementation, you are encouraged to construct the relevant contexts for different operations explicitly. Please refer to the open-source implementations as a reference.
  • We have enriched the contextual information exposed by the Ledger API server. You will notice richer logging information, which can be read via either unstructured or structured logging frameworks. A paragraph on how to configure structured logging has been added to the docs. For more details, see issue #6837.

What’s Coming

We are continuing to work on performance of the DAML integration components and improving production readiness of DAML Ledgers, but there are exciting features and improvements in the pipeline for the next few releases as well.

  • The Trigger Service will reach feature completion and move into Beta
  • The authentication framework for DAML client applications (like the Trigger Service) is being revisited to make it more flexible (and secure!)
  • The build process for DAML client applications using the JavaScript/TypeScript tooling is being improved to remove the most common error scenarios
  • DAML’s error and execution semantics are being tidied up with a view towards improving exception handling in DAML
  • DAML will get a generic Map type as part of DAML-LF 1.9

Community Update – September 2020

As mentioned last month, we’re turning our former announcement post into a community update. Below you’ll find everything our community has been up to over the past month.

Update: 1.5.0 has been released and you can read the full release notes here.

What’s New in the Ecosystem

Firstly, we will be holding two community open door sessions for the 1.5.0 RC, one for US-based timezones and one for APAC. Register here for APAC/Europe morning timezones and here for US/Europe evening timezones. Both will be on September 14th, so go sign up! 📝

We might be a lot about DAML but we’re not *just* about DAML. Our biggest piece of news this month is that @cocreature just handed over maintenance of ghcide to the Haskell community, and they’re super-excited. This has major benefits for both Haskell and DAML as more maintainers will lead to more useful features in Haskell and DAML IDEs. Check out the rest of the details here. 👷‍♀️👷‍♂️

Our second Community Recognition Ceremony has kicked off. We do these quarterly to make sure our community members get the recognition they deserve for all of their wonderful contributions. Nominate who you think deserves to win here.

György showing off the hoodie he won in the first Community Rewards Ceremony

Block8 just started a 4-part series comparing the upsides and downsides of DAML vs. Java on Corda. It’s a seriously explosive 🧨 review covering state, transactions, ease of development, testing, and functionality.

@bartcant shows us how to use DAML to manage medical testing data while guarding patients’ privacy. 👩‍⚕️👨‍⚕️

@Shaul, our mostly fearless CTO, broke down how scaling works on DAML ledgers. 🚀

@gyorgybalazsi took a deep dive 🤿 into the DAML Finance library in the second post of his DAML master class, covering big ideas like semantically rich identifiers, modularity, and multistep settlement chains. Few have explored the depths of the finance library like György has. György also found a bug 🐜 where Ordering didn’t have a Show instance, so now it does. Thanks!

We learned Exberry is using DAML and project:DABL to power ⚙️ the backend for their exchange infrastructure. Pretty cool.

@Amy_Ahmed, @andreolf, @talia.klein, @Felix_Kam, @ManishGrover, and myself (@anthony) spent the last month mentoring students and participating in the hackbfs.com ideation-a-thon. It’s been a great experience being able to share our knowledge with the next generation of builders. 👨‍🏫👩‍🏫

@ManishGrover showed us how to improve customer experiences with smart contracts. 😁

And in case you missed it @dliakakos and @cocreature gave a webinar on what DAML Triggers are and how to write them. The full video 📹 is here.

Release Candidate for DAML SDK 1.5.0

The preliminary release notes and installation instructions for DAML SDK 1.5.0 RC can be found here.

1.5.0 RC Highlights

  • DAML Script is now fully integrated into the IDE and will supersede Scenarios.
  • Turn DAML expressions into JSON using the REPL (e.g. :json [1, 2, 3]), which is useful for talking to the JSON API.
  • DAML on SQL now has much richer logging which means error messages will be a lot more transparent, allowing you to see exactly what call data caused an error. We’re also adding support for structured logs via Logstash Logback Encoder.
  • foldl and foldr performance has improved by 4x! 4x faster folds IN. ANY. DIRECTION. YOU. WANT.
  • Application IDs now work in DAML Script, Triggers, and REPL. Useful if you’re working with ledgers with authentication.
  • The Trigger Service can now bind to addresses other than localhost.
Script working in the DAML IDE just like Scenarios



DAML Entitlements – Are they really needed or one more access approval request to raise?

If you are reading this article, you have probably come across terms such as entitlements, access management, or authorized access. Did you ever wonder why you can access some information but get Access Denied for other information? The answer lies within your entitlement: simply put, your ability to access privileged information. You may be privy to sensitive information for a variety of reasons, such as your role in the organization, your involvement in a process, or simply being part of the larger organization. Each and every access you have contributes to your entitlements, and vice versa.

However, there are times when these entitlements are not enforced correctly and a person gains access to information they should not be privy to. This can be the result of poor entitlement management systems, improper approval processes, or a lack of governance around access management. In such cases, the company’s organizational risk increases, and if the necessary actions are not taken in time, its reputation is put at risk.

To combat the various problems around privileged access and entitlements as a whole, an organization should adopt a holistic approach that encompasses privileged access management, segregation of duties, and periodic access reviews. These measures, along with the principle of least privilege, provide a stronger foundation for Identity and Access Management and help maintain appropriate entitlements organization-wide while safeguarding against external threats.

The principle of least privilege provides a stronger foundation for Identity and Access Management

As technology evolves and our reliance on it continues to increase, there is a need for an overall solution to the problems with existing Identity and Access Management offerings. More specifically, a solution that covers user access requests under an organization’s entitlement strategy, supports a tiered approval process, reconciles entitlements, and reviews and recertifies existing access. Together these criteria form the three pillars of a successful IAM solution and offer benefits such as strong technology adoption, a centralized repository for access, and reduced risk from emerging technology.

Distributed Ledger Technology (DLT) presents a potential solution, but it doesn’t provide the required verifiability and semantics on its own. However, with DAML, the open source smart contract language for rights and obligations, entitlements are handled right out of the box. DAML offers immutable smart contracts in which user roles such as signatories, observers, and controllers, and their respective actions, are defined as part of the contract. This sets the precedent for segregation of duties and access management, and also takes care of the approval process. Moreover, any change to an existing contract or role definition inherently requires approval from all the participants.

Here is an example of entitlements using DAML, where we have defined three users: a developer, a lead, and a manager. We have also defined roles for each of these users following segregation-of-duties guidelines: if a developer has started writing code, any review must be done by a manager or by someone other than the developer who wrote it. Because DAML is very easy to follow and understand, people from across an organization can participate in creating DAML-driven entitlements.
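Purely as an illustration of the segregation-of-duties rule described above, such an entitlement could be sketched in DAML along these lines (all template, party, and field names are our own, not the original example):

```daml
module EntitlementSketch where

-- The developer who authors a change can never approve it.
template CodeChange
  with
    developer : Party
    reviewer  : Party   -- a lead or a manager
    description : Text
  where
    signatory developer
    observer reviewer
    -- Segregation of duties: the author cannot review their own work.
    ensure developer /= reviewer

    choice Approve : ContractId ApprovedChange
      controller reviewer   -- only the designated reviewer may approve
      do create ApprovedChange with ..

template ApprovedChange
  with
    developer : Party
    reviewer  : Party
    description : Text
  where
    signatory developer, reviewer
```

Because the reviewer is a signatory of ApprovedChange, the approval cannot be created without their authority, and any later change to these roles would require agreement from all parties.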

At Digital Asset, we take the security and access controls of sensitive information very seriously, and we therefore incorporate best-in-class data and access control principles when designing all our contracts. Moreover, DAML offers full extensibility of these features and makes it easy to adopt access control principles while developing an application.

DAML offers increased transparency and accuracy because it works off a single, real-time source of truth. This removes ambiguity and eliminates the need for costly, duplicative reconciliations between systems. DAML-driven solutions are flexible enough to support regulatory change and to drive industry standardization with enterprise-grade solutions that reimagine or improve complex, multi-tiered entitlement processes. All in all, clearly defined roles and obligations, combined with fine-grained permissions, ensure that information is shared with those who need to know it, when they need it, and how they can act on it. DAML also has a new learn section where you can begin to code online:

Learn DAML online

Handing over ghcide to the Haskell community


We started building ghcide over 2 years ago as part of our efforts at Digital Asset to provide a great IDE experience to all DAML users. As a lot of people in the Haskell community know, the DAML compiler is based on a modified version of GHC, so naturally the people working on it were Haskell programmers. As ghcide got better and better, and we got to experience it whenever we were working on DAML codebases, we started missing the same features in our Haskell development. Over time, we started abstracting over DAML specifics and eventually managed to get it working on some hand-crafted Haskell projects, but we couldn’t integrate it with Cabal, Stack, or other build tools. Around that time, we also open sourced the whole DAML codebase, which included ghcide – but we didn’t announce ghcide anywhere. Luckily, at this time Matthew Pickering released hie-bios, which made it trivial to integrate ghcide with all of the various build tools in the Haskell ecosystem while still using the same codebase as the basis for the DAML IDE.

External interest in ghcide was very limited until Neil gave a keynote at MuniHac 2019.

Interest and external contributions skyrocketed over a few days, and we split ghcide into a separate repository to make it easier for contributors and users, who no longer had to deal with the DAML monorepo. Since then, there have been a huge number of contributions to ghcide that have benefited not only Haskell users but also DAML users. It is impossible to name all the amazing features that have been contributed to ghcide, but some of our favourites include:

  • Support for code completions
  • Many significant performance improvements
  • Multi-component support
  • A wide variety of code actions

In January 2020, a lot of ghcide contributors and contributors from haskell-ide-engine got together and agreed to unify their efforts as part of haskell-language-server – which builds on top of ghcide while carrying over all the features and plugins that came from haskell-ide-engine. Half a year later, it’s not a stretch to say that this endeavour has been an enormous success and enthusiastic statements from users can attest to this. Finally, most efforts on Haskell tooling are focused on a single approach and everyone can benefit from improvements.

As the importance of ghcide to Haskell tooling has grown, keeping such an important repository in the Digital Asset organisation with only Digital Asset employees as maintainers has started to hold back the flood of contributions. Therefore, in consultation with many of the contributors, Digital Asset has made the decision to turn ghcide into a proper community project under the haskell github organization. In making this transition, ghcide gains a much wider team of maintainers, sourced from the current active ghcide contributors. The DAML IDE will switch to an open source fork of ghcide and cherry-pick changes from upstream as needed and we will continue to upstream bugfixes and improvements into ghcide.

We are very excited to see in which directions ghcide will grow in the future. Thanks to all ghcide contributors and in particular to Pepe Iborra, Matthew Pickering, Alan Zimmermann, wz1000, Luke Lau, Alejandro Serrano, fendor and Jacek Generowicz. We’ve enjoyed being part of the Haskell IDE tooling community, and look forward to working with both current and new contributors in the future.

Moritz and Neil

Bristol Haskell Hackathon 2020