
Programming Smart Contracts – A look into Python & Daml

As smart contracts become increasingly mainstream, we wanted to discuss how Python and Daml can be used together to develop multi-party and private, yet distributed applications of the future. 

We will briefly cover the background of both languages, put them in the context of the smart-contract-driven distributed ledger technology (DLT) paradigm, and then present symbiotic usage scenarios. Despite the title, this is not a Python vs. Daml post; rather, it shows where each language shines and how you can use them together to create enterprise applications.

Python and Daml
We discuss how Python and Daml can be used together to develop multi-party and private, yet distributed applications

What are smart contracts?

Smart contracts are digitally executable agreements that encapsulate, express, and respect the privacy, rights, and obligations of their stakeholders. They can be composed into business workflows, allowing all participating parties a shared, verifiable version of the truth. The term was originally coined by Nick Szabo in 1997. Daml is one such language; it runs on distributed ledgers and databases alike, and is currently used by both leading enterprises and startups to create complete, end-to-end applications that bridge data and process silos.

A background of Python and Daml

Python is an interpreted, object-oriented, high-level programming language suited for Rapid Application Development. We can consider Python a general purpose language with a wide variety of libraries and modules to accomplish a myriad of tasks. Most recently, Python has become popular for AI and machine learning due to the abundance of ready-made modules that make a data scientist’s job easier.

Daml on the other hand is not a general purpose language. Rather, it is a strongly typed language purpose-built for smart contracts and end-to-end business workflows. Daml includes native constructs to express authorization and disclosure. Thus every entity in the workflow is guaranteed to see only data that they’re authorized to see, while at the same time being assured that their view of the world is consistent and coherent with everybody else’s. 

This segregation of data is enforced at the level of the persistence layer. In cases of relatively low trust, such as between potentially competing enterprises with no trusted intermediary, Daml, combined with a ledger that supports privacy, will enforce privacy via its ledger model. On the other hand, when trust is relatively high, such as between different desks or departments within an enterprise that are supported by a single trusted operator, a database can sufficiently enforce differentiated disclosure.

Where do Daml and Python fit in?

A few weeks ago, Harvey West from Everis published a blog post evaluating Daml and Kotlin. We’ll use that as a base and build on it as we review Python and Daml. You may also want to review this comprehensive blog post by Manish Tomer comparing Daml and Solidity, both smart contract languages.

Most enterprise-grade business applications use a 3-tier architecture to support an end-to-end business process. We observe that, typically:

  1. A business process that spans multiple business units means that no one unit completely knows the state of the business flow. This requires reconciliation between each business unit (aka “data silos”).
  2. Such a business process is governed jointly by multiple applications, and as such, there is no single codified expression of the entire business process; really, it is just an emergent behavior of the system. At best, the whole process is described in supplementary prose documents of unknown fidelity to the actual process (aka the “business process management problem”).
Daml smart contracts enable these silos to be bridged

As a smart contracts language, Daml is well suited to solving these two problems by (1) providing a coherent state for the workflow that respects the privacy requirements of the different business units, and (2) explicitly defining the business process by precisely enumerating all of the legal state changes that make up the workflow.

Python, on the other hand, is known to be good at expressing business and computational logic from the point of view of a single actor or process, making it a perfect complement to a Daml ledger.

In other words, write the rules, constraints, protocols, rights, and obligations of the business process in Daml, and use Python to write applications that make decisions and take actions within that workflow.

How Python and Daml together enable the enterprise of the future

Can Python be used to write smart contracts?

That is, why not use Python in place of a dedicated smart-contract language?

Firstly, we must concede that since Python is a general-purpose language with a comprehensive set of libraries, in principle it could do all of the things that a smart-contract language can do. But is it really suited to the job? 

Let’s take an example of a mortgage Contract between a Borrower, an Issuer, and a Lender. In this simplified example, the Issuer is responsible for verifying the borrower’s identity and creditworthiness and underwriting the contract; the borrower receives the money and is responsible for paying it back; and the lender has the right to collect:
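In Daml, such an agreement can be sketched as a template along the following lines (a minimal, illustrative sketch; the field and choice names are assumptions, not the exact snippet from the post):

```daml
template Mortgage
  with
    issuer    : Party
    borrower  : Party
    lender    : Party
    principal : Decimal
  where
    signatory issuer, borrower
    observer lender

    nonconsuming choice Collect : ()
      with amount : Decimal
      controller lender
      do pure ()   -- settlement logic elided in this sketch

    nonconsuming choice MakePayment : ()
      with amount : Decimal
      controller borrower
      do pure ()   -- payment logic elided in this sketch
```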

This short snippet of Daml accomplishes all of the following functions: 

  1. Requires the explicit consent of the borrower and the issuer;
  2. Restricts visibility to only the three parties listed on the deal;
  3. Grants the right to collect to the lender (and only the lender!), and allows the borrower to make a payment.

Additionally, it can do all of this, as written, on a number of blockchains and databases, while serving an API to applications. 

As we conceded above, yes, it would be possible to write a Python module that accomplished all this (save, perhaps, for platform independence). But let us speculate on what such a Python module would look like. Its first notable feature would be that it would need to duct-tape together multiple libraries and paradigms to:

  • Manage the persistence layer, e.g. using SQL or a blockchain protocol
  • Authorize and identify parties
  • Ensure and prove that consent was granted by signatories
  • Ensure that only authorized parties can change the state
  • Ensure that mutually contradictory attempts to change the state fail elegantly (e.g. prevent double-spends)
  • Serve an API to an application.

…in other words, it would take a lot of lines and would likely work only on a single persistence layer.

A second consequence of this is that the developer would spend the majority of their time writing not business logic but non-differentiating systems code.

What’s Python good for, then? 

Daml is good at defining and enforcing the rules of a business process, as above. But it was deliberately designed to not be an actor in such a process. For example, in the snippet above, Daml does not describe how a mortgage should be created, or when, or why, only that it needs to be signed by the parties specified. And it certainly does not specify what the user interface for entering a mortgage is, nor the integration with e.g. the underwriting system. 

Luckily, the Daml runtime serves an API based on the business logic defined via Daml (called the “Ledger API”). In the example above, the Ledger API will: 

  1. Notify the lender, borrower, and issuer (and only them!) whenever a Mortgage Contract is created;
  2. Notify these stakeholders in case the contract is archived; 
  3. Allow the lender (and only the lender!) the ability to collect. 

Python is an ideal language for consuming this API, and for connecting to other upstream and downstream systems that offer their own connectors. Python features a wide range of transformation and serialization tools that can be used to translate into and out of Daml contracts, which makes Python very appealing for writing integrations to other systems within an enterprise.

For example, the following automation will:

  1. Listen for the creation of contracts of type ‘DamlTemplateName’ to which the bot is a stakeholder; 
  2. Issue a REST request;
  3. And exercise a choice called ‘DamlContractChoice’ on that contract based on the REST response. 
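The three steps above can be sketched against Daml’s HTTP JSON API using only the Python standard library. The endpoint URL, the `Main:` module prefix, the decision-service URL, and the shape of the choice argument are all illustrative assumptions; in practice, a client library such as dazl offers a higher-level interface:

```python
import json
import urllib.request

JSON_API = "http://localhost:7575"               # assumed local Daml JSON API endpoint
DECISION_SERVICE = "https://example.com/decide"  # hypothetical external REST service

def exercise_payload(contract_id, decision):
    """Build the request body for the JSON API's /v1/exercise endpoint."""
    return {
        "templateId": "Main:DamlTemplateName",   # illustrative module:template name
        "contractId": contract_id,
        "choice": "DamlContractChoice",
        "argument": {"approved": decision.get("status") == "approved"},
    }

def post(path, body, token):
    """POST a JSON body to the ledger's JSON API with a bearer token."""
    req = urllib.request.Request(
        JSON_API + path,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    return json.load(urllib.request.urlopen(req))

def run_once(token):
    """One pass: query visible contracts, consult the REST service, exercise the choice."""
    contracts = post("/v1/query",
                     {"templateIds": ["Main:DamlTemplateName"]}, token)["result"]
    for contract in contracts:
        with urllib.request.urlopen(DECISION_SERVICE) as resp:
            decision = json.load(resp)
        post("/v1/exercise",
             exercise_payload(contract["contractId"], decision), token)
```

A production bot would subscribe to a streaming endpoint rather than polling in a loop; this sketch keeps to a single pass for clarity.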

So Daml and Python work hand in hand to execute a complete business process while ensuring that the complete state of the business process is in a single smart contract store.

In addition, event-driven architectures, as outlined in this blog by Eric Saraniecki, are well supported by Python for Daml.

Some important properties of Daml not addressed by Python

The communities behind Daml and Python support different objectives that are relevant to the paradigm they are intended for. 

The Daml community focuses on ensuring that Daml can run without changes across any persistence layer, including distributed ledgers or databases. That’s called Daml portability, and ensures that developers looking to write smart contracts need not worry about which ledger or database their distributed applications will use to create a single version of the truth across parties. Portability is enabled by the Daml runtime that abstracts the underlying data storage layer from the smart contracts layer. 

Portability is enabled by the Daml runtime that abstracts the underlying data storage layer from the smart contracts layer.

The Daml community also focuses on ensuring that Daml applications can work across ledgers. So an application deployed on Hyperledger Fabric may transact atomically with a Daml application that is hosted on a database such as PostgreSQL. This is called Daml interoperability.

Daml applications can work across ledgers

In contrast, the Python community focuses on making sure Python provides a comprehensive array of libraries that allow it to be used for almost any application purpose from data science to desktop GUIs.

Python provides a comprehensive array of libraries that allow it to be used for almost any application purpose from data science to desktop GUIs.


As should hopefully be evident now, Python and Daml complement each other well.

Daml is a domain-specific language (DSL) for writing the smart contract layer, while Python is well suited for applications which need to enact changes of state on the ledger, including integrations or analytics. Daml provides important built-in constructs for privacy, rights and obligations, ledger portability, and also ledger interoperability, all of which ensure that those changes of state are coherent and safe. Daml also has a new learn section where you can begin to code online:

Learn Daml online

Enhancing Corda with Daml Smart Contracts

IntellectEU and DataArt now support Daml smart contracts on Corda Blockchain

We’re really excited that two of our partners, IntellectEU and DataArt, just announced they’re offering commercial support for Daml integration with R3’s Corda! Now, in addition to supported platforms like VMware Blockchain, Hyperledger Fabric, and AWS Aurora, you can deploy your Daml applications (unchanged!) to the open source distribution of Corda and Corda Enterprise, including deploying to existing Corda networks.

Learn More about Daml and Corda Blockchain

Corda is a widely used, enterprise-grade distributed ledger with adoption in a whole host of vertical markets. Corda comes in two flavors, commercially supported Corda Enterprise from its creators, R3, and Corda Open Source, the free community edition. Daml is a popular and flexible framework for building connected applications for all kinds of use cases. It hides all the messy platform details so developers only have to think about the business solution.

Why Daml and Corda?

Daml and Corda enable customers to develop applications that span markets both securely and privately. Both Daml and Corda have been designed from the ground up around the principles of only sharing information with those with a need to know, making them a natural pairing that promises to turn into a long and happy marriage.

The Daml on Corda integration is ready for customers today and we’ve chosen to be thoughtful about how we release it to the market. To this end, we’ve partnered with a couple of innovative systems integrators with world class expertise in both platforms – IntellectEU and DataArt.

IntellectEU have been long-time friends of Digital Asset. They were early adopters of Daml and we’ve worked together on a number of projects over the years. They’re also close partners with R3 and have a tremendous amount of Corda expertise. They were an obvious choice for a launch partner for Daml on Corda.

DataArt came to Daml later, but has embraced the technology enthusiastically. They were early DLT proponents and have years of experience in trying numerous ways to write distributed applications! DataArt brings a wealth of cross industry expertise to the partnership, covering healthcare, retail, media, logistics, and our home base of capital markets.

Today, the story we want to tell is about our customers – companies who have been clamoring for a way to develop applications in Daml and run them on the Corda platform. This is now a reality: anyone with a Corda Enterprise or Corda Open Source network can install Daml on Corda, deploy their Daml application, and get support under tight SLAs from expert vendors.

We’re really excited about the possibilities Daml on Corda can bring to the market. Anyone with an existing Daml application can deploy it to a Corda network without rewriting a single line of Daml code. One of Daml’s core promises is the ability for developers to write their apps in Daml without having to worry about what ledger to use until later. Now that Corda is a first-class option that decision might be a bit easier.

Find out more about Daml and Corda

A loan application that must be approved by a lender looks like this using Daml

How to Streamline Lending and Credit Management using Smart Contracts

Lending processes are a complex mesh of customer management, risk assessment, collateral management, payments, and default management functions. Additional complexities also arise from the fact that multiple institutional parties are almost always involved, and that loans and their servicing can be sold or transferred. 

Although solutions have been proposed using blockchain, complexities within the enterprise have largely been unaddressed, viz. streamlining intra-enterprise processes and data islands. 

In this blog, we would like to propose a roadmap that helps with the immediate problems of constant reconciliation, transparency, and turnaround time within an enterprise, while also paving the way to achieve the goals of collaboration across multiple organizations while preserving privacy and eliminating reconciliation. We will also present DeCredit, a solution that embodies these principles and provides a useful accelerator for enterprises looking to undertake this transformation. A special thanks to Manish Grover from Digital Asset for his valuable contributions and expertise.

The Framework 

Presently, participants collaborate and manage data and processes in silos, often duplicating business rules and data stores. For example, within an organization, the collections function must be provided with loan information to kick things off and then updated periodically. Similarly, in larger matrixed organizations, underwriting and risk departments maintain their own data islands, which must be constantly synced. Digital interactions are often stored in islands of their own.

Syncing these multiple islands requires data aggregation efforts and process orchestration which in turn often leads to costly operational delays, process breaks, and disrupted customer experiences. 

When we consider interactions and data across multiple organizational parties, all of these challenges are magnified. 

Note that “how” we exchange data between applications and organizations does not matter. For example, APIs and file transfers are common methods. Both of these on their own do not solve the problem of multiple data islands and the need for reconciliation.

For smaller organizations or those with a focused set of offerings, the issues with data duplication and processes can be mitigated to a certain degree by using packaged Lending Management Software. However, no business operates in a silo. Growth also necessitates constant change. So much of the investment that can be used for innovation is spent trying to manage basic data and process resiliency. In addition, there is decreased transparency and visibility into the end-to-end process for both institutional participants and the users themselves. For example, customers must create and manage multiple identities, lenders must manage KYC processes individually, credit agencies must ensure they receive the right data at the right time, and so on. 

We think it’s useful to think of streamlining processes and driving innovation in terms of the three-step model below:

An illustrative Roadmap for adoption of smart contracts to streamline lending

A smart contract-based approach using Daml (which can run on both blockchains and databases) solves these immediate challenges and makes the lending process more efficient. The privacy-aware, multi-party collaboration within and across enterprises that Daml enables reduces costs, paves the way for better analytics and AI, and provides the foundation for rapid business innovation. Daml also has a new learn section where you can begin to code online:

Learn Daml online

Our goal is to make the lending process easier and more secure through a platform where legal agreements can be established, physical touchpoints can be integrated, and relevant transactions can be updated to a shared ledger in real-time while maintaining confidentiality as needed. 

This will be true for stakeholders within a single business entity, and also for multiple stakeholders from different organizations who need to collaborate on the underlying customer account or lending asset. For example, third-party organizations such as those that maintain and provide credit scores, manage collateral, provide legal services, and those that bridge the physical-virtual divide. 

While such a smart contracts platform itself can be operated in a direct model where no intermediaries are needed, we note that many implementations may indeed designate a centralized and trusted operator of the network for management and administration – such as a governing organization within an enterprise. In addition, parties who provide services will need to be onboarded in the right way to promote confidence, regulatory compliance, and support to customers. 

A few principles we started out with can be summarized as below. They apply to both decentralized lending as well as streamlining lending processes within an enterprise.

  1. Participating institutional parties and business divisions will keep their individual data confidential
  2. Parties will not have to execute duplicate business processes and reconcile data
  3. Regulatory reporting and compliance will be baked in
  4. Operational rules and data semantics can be changed only by mutual agreement between parties, thus addressing reconciliation challenges
  5. Parties will be able to separate their competitive advantage (pricing, products, analytics, customer experience, etc.) from the underlying plumbing
  6. Consumers will be in control of their identity and data (optional, but this trend is growing stronger and being made possible every day)

These principles allow us to solve for the immediate pain areas in the industry first, while also paving the way for a fully decentralized model.  

Key Benefits of a Smart Contracts based Approach

By using Daml, the open-source smart contracts language from Digital Asset, we were able to achieve adherence to most of the above principles out of the box. For example, only the parties who have been defined as signatories and who have rights on the contracts have access to the confidential data. We did not need to layer on any additional plumbing. Multiple business process areas such as origination, repayments, collections, etc., can be parties to the process, each having access to only the data that is relevant to them, while an audit trail is automatically maintained.

Duplicate business processes arise in cases such as KYC where every institution must perform the validation for every new customer relationship. These costs can be collectively reduced (subject to the regulatory environment of course) using smart contracts. A KYC record can be made available upon request to a new entity without divulging the competitive nature of the previous customer relationship. (e.g. Lender A can request KYC but should not know about a customer relationship with Lender B). In fact, this is an area that can very well be positioned as the first milestone in the roadmap of decentralized lending that involves multiple organizations.

By onboarding parties such as credit agencies on to this network, significant overhead and errors with data reporting and privacy breaches can immediately be resolved. In addition, the costs of making this credit data available to consumers and other institutions upon request can also be much simplified. These agencies can simply be made observers on Daml smart contracts designed specifically to divulge specific data required for that purpose. Given that the same agency may support many organizations, synergies in data integration and technology plumbing can be realized quickly by using a smart contracts platform such as Daml. No additional actions need to be taken for reporting a new loan, or repayment status.

The connection from the virtual to the physical world is an important and practical part of a solution for the foreseeable future. A purist approach that only deals with digital transactions will not take us far. We can do this by onboarding trusted parties that hold the system of record or even intermediaries who act on their behalf to provide such services. Designing the smart contracts platform in this manner provides scalability of adoption, operational savings through automation as well as much needed accountability and transparency. For example, an inspection of collateral may need to be done offline, or a physical document may need to be brought into the system.

Finally, we acknowledge that not all systems can be integrated with such a credit platform. This could be because a system also performs other functionality, technological challenges, geographical constraints, or simply because of complex business change management considerations. We can observe this by looking at most enterprise technology landscapes today which are far from monolithic but have evolved through application integration. Fortunately, Daml allows for interoperability between systems. Such a decentralized credit application can execute transactions atomically with Daml models deployed by any of these non-participating entities. So while the primary platform may run on a smart contracts platform these external parties can continue to rely on traditional databases so long as they build interfaces using Daml smart contracts that speak to their legacy worlds. 

DeCredit – Decentralized lending platform

Let us now talk about a platform that Knoldus built on Daml that embodies the above principles. In addition, we took the approach of making this solution ready to try and adopt. Our aim behind creating this application was to provide users with a way to carry out day-to-day transactions not only easily but also with a sense of security, by recording the transactions on a decentralized ledger.

DeCredit is a Daml-powered decentralized loan lending application backed by digital collateral in a peer-to-peer network. A borrower in need of funds can create a profile on the platform and initiate a loan request by pledging one of their cryptocurrency holdings as collateral. DeCredit also supports other types of collateral that reside offline. Lenders can review existing loan requests and, based on their risk assessment, propose an amount and a desired rate of interest. The borrower can then choose from among the various proposals received and select the one that suits their needs, at which point the disbursement process starts.

DeCredit allows the use of cryptocurrency as collateral because it is the easiest to validate and secure digitally without requiring intermediaries. However, this collateral model can be extended as outlined in the previous section. The use of Daml allows for end-to-end flow transparency, and easier regulatory reporting and compliance.

We also made use of project:DABL to deploy DeCredit. Using project:DABL allowed us to deploy a public-facing app quickly without having to worry about authentication, performance, and security. project:DABL completely abstracts away the underlying persistence layer so we could focus on the Daml ledger model and the rights and obligations of the various parties.

Simplified representation of origination workflow

For example, a loan application that must be approved by a lender looks like this using Daml. As you can see, using Daml simplifies the development dramatically while also allowing the business users to participate actively in the development process.  
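A minimal, illustrative sketch of that workflow (the template, field, and choice names are assumptions, not the exact code from the application):

```daml
template Loan
  with
    borrower : Party
    lender   : Party
    amount   : Decimal
  where
    signatory borrower, lender

template LoanApplication
  with
    borrower : Party
    lender   : Party
    amount   : Decimal
  where
    signatory borrower
    observer lender

    -- only the lender may approve; approval creates the jointly signed Loan
    choice Approve : ContractId Loan
      controller lender
      do create Loan with borrower = borrower; lender = lender; amount = amount
```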

We were also able to add dashboards such as the below by pulling data from smart contracts. For more complex requirements, the data can be retrieved into an offline system (subject to Daml privacy and disclosure rules) where visualization and advanced analytics can be performed. 

Quick view of the Borrower Dashboard


Lending processes that integrate smart contracts technology can realize tremendous operational and business benefits such as straight-through processing, easier regulatory reporting, simplified credit data management, removal of duplicate processes, and excellent user experience. 

But we must consider the practical challenges of business change management and offer an achievable, incremental roadmap to adoption. We hope we have been able to demonstrate that in this blog. Starting with addressing intra-enterprise data islands, we moved to a trusted network where multiple organizations can participate and eliminate reconciliation, and finally we showed how to make this process completely decentralized with the right governance and regulatory framework built in. Using Daml allows us to achieve this roadmap rapidly while providing flexibility of deployment layer – blockchain for inter-enterprise, and DB for intra-enterprise. In addition, current applications can be integrated, not replaced, further reducing the complexity of the IT roadmap.

If you would like to discuss how to drive efficiencies in your lending process and technology portfolio using smart contracts, please get in touch with us at Knoldus. Our reference application DeCredit is available on the Daml marketplace and can be demoed upon request. Daml also has a new learn section where you can begin to code online:

Learn Daml online

Join us for the webinar on August 4th, where we discuss lending. Register here

Lending Innovations using smart contracts Webinar. Register here

Decentralized Honesty – Habit Tracking with Daml

Against the backdrop of The Situation, many of us have been trying to turn it into opportunities when we can. For me, this meant ramping up my fitness levels. We at Digital Asset went fully remote in March 2020, and at the start of April, I committed myself to fostering good habits and dropping bad ones, starting by working out every single day and eating healthily.

That lasted about two weeks.

Since then, I’ve been off and on the wagon, managing to keep myself running and lifting weights for a week or three, only to lapse for a few days. Sticking to a habit has always been difficult for me (and everyone else), so my partner and I have been throwing around ideas for apps to keep us motivated and focused.

Of course, being a Daml junkie, once the ideas had taken root, my next thought was, “How do I model this with smart contracts?”

Over the course of the exercise, I learned about implicit trust assumptions in my code and how Daml makes it convenient to describe workflows that keep things honest. But I’m getting ahead of myself. Let’s start from the beginning.

Modelling habits

On paper, a habit-tracking system might look like this:

A habit-tracking system on paper

The simplest design I could think of was to model the recording:
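A minimal sketch of that template (the field names here are illustrative):

```daml
template Recording
  with
    habitant : Party   -- the person tracking the habit
    habit    : Text    -- e.g. "Push-ups"
    date     : Date    -- the day being recorded
  where
    signatory habitant
```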

This grew a little once I realized I needed to be able to query for the habits themselves in the UI. They became their own contract template:
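A sketch of that template (names illustrative):

```daml
template Habit
  with
    habitant : Party   -- the person tracking the habit
    name     : Text    -- e.g. "Daily exercise"
  where
    signatory habitant
```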

Now, when I record my exercise, I do my best to track it every day. I don’t want to try and remember whether I did some push-ups on Tuesday; chances are I’ve forgotten and will just have to make something up. By limiting myself to only tracking today (or yesterday, I guess), I keep myself honest.

The obvious place to put this logic was a choice:
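Such a choice, attached to the Habit template, might look like this (a sketch; names are illustrative):

```daml
    -- inside the Habit template's `where` block; nonconsuming so the
    -- Habit contract survives each recording
    nonconsuming choice RecordToday : ContractId Recording
      with today : Date
      controller habitant
      do create Recording with habitant = habitant; habit = name; date = today
```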

But I had a problem. I started asking myself: “what’s stopping me from just creating a Recording with create, rather than exercise?” And the answer, “nothing”, wasn’t one I was happy about.

Back to pen and paper

When stumped with a problem in Daml, it’s often helpful to go back to the real world, and think about how we’d solve the same problem there. (This is why we often use paper money to explain the concepts behind Daml and smart contracts.)

I want Daml to help me keep myself honest. But just like on paper, Daml assumes we can trust ourselves. So, if I wanted some help making sure I do the right thing, how would I do this with a calendar on my wall?

After phrasing the question like that, the answer was obvious. I can’t cheat at tracking my exercise, because my partner would notice and call me on it. That logic currently lives outside the system, but with a small tweak, I can re-architect my calendar to represent this:

Re-architecture of my calendar

Modelling honesty with multi-party agreements

Once I had it on paper, it was obvious how the model needed to change. I need to represent the party keeping me honest in the system, and make them a counter-signatory on every recording. This changed my workflow from an agreement with myself to a multi-party agreement, where Daml shines.

I needed a two-phase flow for creating the habit, and I’d also need to create the same flow for a recording.
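The habit side of that propose-accept flow can be sketched as follows (illustrative names; the recording flow follows the same pattern):

```daml
-- the revised Habit now carries both signatures
template Habit
  with
    habitant : Party
    partner  : Party
    name     : Text
  where
    signatory habitant, partner

template HabitProposal
  with
    habitant : Party
    partner  : Party
    name     : Text
  where
    signatory habitant
    observer partner

    -- the partner's acceptance creates the jointly signed Habit
    choice AcceptHabit : ContractId Habit
      controller partner
      do create Habit with habitant = habitant; partner = partner; name = name
```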

As you can see, Alice now needs a signature from Bob to create a recording, fulfilling our goal of keeping ourselves honest (with help from a friend).

This is under development, and I’m now working on the UI. You’re welcome to check out my progress on GitHub.

What I learned: trust in real-world distributed systems

Daml makes these trust relationships explicit. This may be a simple example, but it forced me to encode the approval process in my household that I had taken for granted. By doing so, I gained some power over the process, and I’m now able to make changes to it to better fit my needs in the future.

The concept of trust isn’t one we think about with simple applications such as this one, but it’s there nonetheless. In a conventional web app, the job of accepting or rejecting my recordings would fall to a centralized server (or cluster of servers, running identical code), which would have similar logic to the above. 

However, there are plenty of use cases for blockchain or distributed ledger technology, where relying on a central authority is dangerous. For example, if the server goes down, I can’t track anything at all, potentially losing my 100-day streak. (This almost happened to me last week with Duolingo.) Or perhaps I don’t agree with the rigid requirements set by one authority, and I’d rather work with a different one, with a different set of constraints.

Daml offers us a set of tools to solve problems like this explicitly and conveniently. While there would be nothing wrong with using a single “operator” party for all users of my habit-tracking application, I know (and my users know) that we have the option to switch to a distributed model in the future. 

We can foresee a future where I’m recording my runs on two ledgers at once: one controlled by Strava, which automatically verifies that I ran, and one which is manually checked by my partner, for redundancy. By distributing and replicating this data across ledgers with the Canton protocol, I could make sure that even if my running tracker of choice goes down, my exercise is still recorded.

Daml doesn’t stop you from lying to yourself, because that doesn’t matter. It just stops you from deceiving anyone else. The ecosystem really shines when we recognize that almost all situations involve more than one party, and model not just the mechanics, but the network. Daml also has a new learn section where you can begin to code online:

Learn Daml online

Release of Daml SDK 1.3.0

Daml SDK 1.3.0 was released on 16th July 2020. You can install it using

daml install latest

This is a purely additive upgrade which comes with new features, functionality, and bug fixes. Upgrading should not require any intervention.


  • The Websocket query and fetch APIs are now stable.
  • The JSON API Server is now released as a standalone JAR file to GitHub Releases.
  • Daml Script and REPL now work in Static Time mode and can query parties.
  • Daml Studio exposes more details on how contracts were disclosed.
  • Trigger Service, a solution to host and manage Daml Triggers, is now available in Early Access.

Known Issues

The Daml Studio VSCode extension is affected by a known bug in recent VSCode versions, which has since been fixed upstream.

For some users this may lead to the Scenario View in VSCode not rendering correctly. If you are affected by this issue, upgrading to VSCode 1.47 should resolve it.

What’s New

Websocket API is stable


The JSON API Server exposes several Websocket endpoints which allow clients to maintain a live view of contract data without polling. These endpoints have been available since before SDK 1.0 in early access, and are now considered stable.

Specific Changes

  • The API specifications for the /v1/stream/query and /v1/stream/fetch endpoints are finalized and fully implemented.
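As an illustration, a client subscription message for the query endpoint can be built like this in Python (the template ID `Iou:Iou` and the query filter are hypothetical; the authoritative request schema is in the JSON API documentation):

```python
import json

def build_stream_query(template_ids, query=None):
    """Build the JSON message a client sends after opening a
    websocket connection to /v1/stream/query."""
    payload = {"templateIds": list(template_ids)}
    if query is not None:
        payload["query"] = query  # optional server-side filter on payloads
    return json.dumps(payload)

# e.g. subscribe to Iou contracts with amount <= 50:
msg = build_stream_query(["Iou:Iou"], {"amount": {"%lte": 50}})
```

Once subscribed, the server pushes contract creations and archivals over the same connection, removing the need to poll.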

Impact and Migration

The final version of these endpoints is backwards compatible with SDK 1.0 in the sense that clients of these endpoints from SDK 1.0 work with SDK 1.3. Thus no action needs to be taken.

Standalone JSON API Server


The JSON API Server is a component intended to be run in production environments to supplement the lower level Ledger API with an easy-to-use queryable ledger state consumable by any HTTP 1.1 client, including web browsers. Despite this intended use case, the JSON API Server was only distributed as part of the SDK, which meant that the Daml SDK had to be installed on production servers in order to run the JSON API Server. Providing a stand-alone JAR distribution gives application operators a much leaner deployment option.
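As a sketch, launching the standalone JAR could look like the following (the JAR file name is a placeholder for the artifact published on GitHub Releases, and the flags shown assume a local ledger on its default port; consult the documentation for the authoritative options):

```shell
# Hypothetical invocation; substitute the actual JAR name from GitHub Releases.
java -jar http-json.jar \
  --ledger-host localhost \
  --ledger-port 6865 \
  --http-port 7575
```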

Specific Changes

Impact and Migration

This is purely additive to the distribution via the SDK so no action is needed. However, if you do run the JSON API Server in a test or production environment, this gives you a leaner and more portable way of doing so.

More functionality in Daml Script and REPL


Daml Script and REPL had some limitations in key test and production use cases. Firstly, neither exposed the Time Service, which made them hard to use in static time mode. Secondly, they only exposed functions to allocate parties, not to query existing parties, which required existing parties to be passed in via a file, or to be obtained using unsafe functions like partyFromText. By exposing the relevant functions of the Ledger API in Daml Script and REPL, Ledger Time can now be queried and set in Static Time mode, and existing parties can be queried.

In addition, it is now possible to use Daml Script and REPL with multiple JWTs, which, in particular, means they can be used with multiple parties on DABL.

Specific Changes

  • Daml Script and REPL’s getTime now correctly handles time changes in static time mode and returns the current time by querying the time service rather than defaulting to the Unix epoch.
    This only works in static time mode and via gRPC. In wallclock mode, getTime continues to return the system time in UTC. When run against the JSON API in static time mode, it continues to return the Unix epoch.
  • Add setTime to Daml Script and REPL which sets the ledger time via the Ledger API time service.
    This only works in static time mode and via gRPC.
  • Add listKnownParties and listKnownPartiesOn to query the corresponding ListKnownParties endpoint in the Party Management service.
  • The time mode for Daml REPL can now be specified using the --static-time and --wall-clock-time flags.
  • You can now use Daml Script with multiple auth tokens. This is particularly useful if you are working with the JSON API, where you can only have one party per token, or with an IAM that only provides single-party tokens. The tokens are specified in the participant configuration passed via --participant-config in a new access_token field. The existing --access-token-file flag is still supported if you want to use the same token for all connections. See the documentation for more details.
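As a sketch, a participant configuration with a per-participant token might look like the following (host, port, participant names, and the token placeholder are illustrative; see the Daml Script documentation for the authoritative format):

```json
{
  "default_participant": {"host": "localhost", "port": 6865},
  "participants": {
    "alice_participant": {
      "host": "localhost",
      "port": 6866,
      "access_token": "<JWT issued for Alice>"
    }
  },
  "party_participants": {"Alice": "alice_participant"}
}
```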

Impact and Migration

This functionality is purely additive so no action needs to be taken.

More Privacy information in Daml Studio


Daml Studio’s Scenario view allows developers to explore the transactions resulting from their Daml models in real time. One of the main uses of doing so is to verify that privacy is preserved as expected. Until now, the available views only gave information on who got to see a contract and through which transaction. SDK 1.3 adds information on the mechanism through which a party learned about a contract. This saves the developer the work of inferring this from the detailed transaction view.

Specific Changes

  • When displaying scenario results in table view in Daml Studio, there’s now a new checkbox “Show Detailed Disclosure” which shows indications _why_ a party knows about the existence of a contract:
    • S means the party is a signatory.
    • O means the party is an observer.
    • W means the party has witnessed the creation of the contract.
    • D means the party has learned about the contract via divulgence.

Impact and Migration

This functionality is purely additive so no action needs to be taken.

Early Access Trigger Service


Daml Triggers give developers the ability to write automation of Daml applications in the style of database triggers using the Daml language itself, aiding code reuse and allowing contract definitions and basic automation to be packaged and shipped together. These triggers need to be managed at runtime, which until now required developers to manage individual JVM processes, raising the bar to actually deploying Daml Triggers in production. The Trigger Service provides a way to manage Daml Triggers via a simple REST API.

The Trigger Service is currently in Alpha, meaning API changes are still likely, and it notably doesn’t support authentication yet.

Specific Changes

  • Added the daml trigger-service command to the SDK to start the Trigger Service. More information in the documentation.

Impact and Migration

This functionality is purely additive so no action needs to be taken. If you are already evaluating Triggers for your application, we highly recommend trying out the Trigger Service as it should ease their use considerably. We welcome your feedback.

Minor Improvements

  • The Java Binding’s Bot.wire and Bot.wireSimple now return a Disposable, which can be used to shut down the flows. You are encouraged to call .dispose() before terminating the client.
  • Added a CLI option for specifying the initial skew parameter for the time model. You can control the allowed difference between the Ledger Effective Time and the Record time using the --max-ledger-time-skew flag.
  • When run with persistence, the Sandbox used to crash if the database wasn’t running during startup. It now instead waits for the database to start up.
  • Additional CLI options to configure the write pipeline in Sandbox, allowing operators to determine at what point back pressure is applied. See daml sandbox --help for details.
  • Initialize the loading indicators in @daml/react of useQuery, useFetchByKey and their streaming variants with true. This removes a glitch where the loading indicator was false for a very brief moment when components using these hooks were mounted although no data had been loaded yet. Code using these hooks does not need to be adapted in response to this change.
  • The create-daml-app example can now be run against an HTTP JSON API port specified in the environment variable REACT_APP_HTTP_JSON_PORT.
  • Improved error messages on unsuccessful key lookups.

Bug and Security fixes

  • damlc test --project-root now works with relative paths as well.
  • The Party Management Service’s ListKnownParties response’s PartyDetails now properly reflects whether a party is non-local on distributed, multi-participant ledgers that expose parties to remote participants.
  • The application identifier in a command submission request is now checked against the authorization token.
  • In scenarios, fetches and exercises of contract keys associated with contracts not visible to the submitter are now handled properly instead of showing a low-level error.
  • Some libraries in the Daml Studio VS Code Extension were updated to fix security issues. Daml Studio now requires VSCode 1.39 or newer.
  • Fix an issue in Daml Script where the port was ignored for non-empty paths in the URL when running Daml Script over the JSON API.
  • Fix an issue in the Ledger API indexer that could have caused a crash in the presence of divulged contracts. This exclusively affects Daml ledger implementations where distributed participants each only see a portion of the ledger. The sandbox is not affected.

Ledger Integration Kit

  • The Ledger API Test Tool --exclude and --include flags now match the full test name as a prefix, rather than just suite names. The test name is built by combining the suite name with a test identifier, so this change should be fully backwards compatible. Run with --list-all to list all tests (as opposed to just the test suites with --list).
  • LfValueTranslation.Cache now requires separate configuration of lfValueTranslationEventCache and lfValueTranslationContractCache.
  • Upgraded the auth0 jwks-rsa version to 0.11.0.
  • KVUtils no longer commits output keys whose value is identical to the input.
  • The Ledger API Server and Sandbox now accept a new time model if none is set. Previously, it would erroneously be rejected because the generation number submitted was incorrectly set to 2 rather than 1. This does not affect most users of Sandbox or other kvutils-based ledgers, as a configuration is set automatically on startup when creating a new ledger. It affects users who explicitly override the initial ledger configuration submit delay to something longer than a few milliseconds.
  • Added 8 new timer metrics to track database performance when storing transactions. The overall time is measured by daml.index.db.store_ledger_entry.
    • Timer daml.index.db.store_ledger_entry.prepare_batches: measures the time for preparing batch insert/delete statements
    • Timer daml.index.db.store_ledger_entry.events_batch: measures the time for inserting events
    • Timer daml.index.db.store_ledger_entry.delete_contract_witnesses_batch: measures the time for deleting contract witnesses
    • Timer daml.index.db.store_ledger_entry.delete_contracts_batch: measures the time for deleting contracts
    • Timer daml.index.db.store_ledger_entry.insert_contracts_batch: measures the time for inserting contracts
    • Timer daml.index.db.store_ledger_entry.insert_contract_witnesses_batch: measures the time for inserting contract witnesses
    • Timer daml.index.db.store_ledger_entry.insert_completion: measures the time for inserting the completion
    • Timer daml.index.db.store_ledger_entry.update_ledger_end: measures the time for updating the ledger end
  • Added 4 new metrics to track Daml execution performance:
    • Timer daml.execution.lookup_active_contract_per_execution: measures the accumulated time spent for looking up active contracts per execution
    • Histogram daml.execution.lookup_active_contract_count_per_execution: measures the number of active contract lookups per execution
    • Timer daml.execution.lookup_contract_key_per_execution: measures the accumulated time spent for looking up contract keys per execution
    • Histogram daml.execution.lookup_contract_key_count_per_execution: measures the number of contract key lookups per execution

What’s Coming

Most of our current work is going into performance of the Daml integration components and improving production readiness of Daml Ledgers. In parallel, we are putting finishing touches on the feature work we’ve started on:

  • The Trigger Service is expected to reach feature completion and move into Beta stage in one of the next releases
  • Daml REPL is expected to become stable in one of the next releases
  • Daml will get a generic Map type as part of Daml-LF 1.9.

Announcing Daml SDK 1.3.0

Minor Delay

This RC was expected to be marked stable on Wednesday 15th of July. However, we will need to delay the release by one day, to Thursday 16th of July. During RC testing, a regression was discovered that caused the ledger offset in transaction stream requests not to be observed properly in some corner cases. See #6698 for more details. We are presently backporting a fix to the SDK 1.3 RC and running it through our testing processes.

What’s New in the Ecosystem

Daml SDK 1.3.0 brings many new features, functionality, and more stable APIs to Daml.

In parallel to the SDK release we are also conducting a Daml User survey. If you use Daml in any way please take 2-3 minutes to complete it. Your feedback is vital to consistent improvement of Daml.

We’ve also recently launched our forum. It is chock-full of questions, technical discussions, news updates, T-Shirts, and Capybaras. If you haven’t joined yet, come check it out. If you join and complete the survey above, we’ll give you a very exclusive Hero badge.

This last month we added several online and interactive learning opportunities, so if you haven’t tried them yet, please do. This month Robin has added the Propose-and-Accept pattern, the Choices-and-Role pattern, and how Daml contract permissions correspond to UNIX’s rwx permissions on files.

Release Candidate for Daml SDK 1.3.0

The preliminary release notes for Daml SDK 1.3.0 can be found here. A community open door session will be held Monday 13th July at 2.30pm-3.00pm CET / 8:30-9am EST on Zoom. Participants must register before the meeting starts in order to join, so please use this link to register in advance.


  • The Websocket query and fetch APIs are now stable.
  • The JSON API Server is now released as a standalone JAR file to GitHub Releases.
  • Daml Script and REPL now work in Static Time mode and can query parties.
  • Daml Studio exposes more details on how contracts were disclosed. Big thank you to Alex Mason for suggesting this very useful feature.
  • Trigger Service, a solution to host and manage Daml Triggers, is now available in Early Access.

What’s Coming

Most of our current work is going into performance of the Daml integration components and improving production readiness of Daml Ledgers. In parallel, we are putting finishing touches on the feature work we’ve started on:

  • The Trigger Service is expected to reach feature completion and move into Beta stage in one of the next releases
  • Daml REPL is expected to become stable in one of the next releases
  • Daml will get a generic Map type as part of Daml-LF 1.9.

“Central Bank Digital Currencies” Technology Properties: We need Interoperability and More

In our previous articles in this series on “Central Bank Digital Currencies”, Darko has written about the benefits and difficulties of CBDC, and Richard introduced what such a currency would look like in Daml. In this article, I want to add a few thoughts on the technological properties a sound CBDC solution should provide.

There is one question: on what kind of platform should CBDC run? Naturally, it makes sense to use a sufficiently standardized approach, and here Daml fits perfectly. First, Daml is a smart contract language that can run on many systems. Second, because we designed Daml to model complex rights and obligations, it naturally allows us to express concepts such as CBDC. As an anecdote, we created Daml using a physical British pound note as a design inspiration. We picked the British pound specifically because the original legal promise of the Bank of England is still printed on every note (I promise to pay the bearer on demand the sum). Instead of revolutionizing how legal papers work, we thought it smarter to work off existing legal concepts and revolutionize their form.

Moreover, the key to capturing the benefits of CBDC is to make the usage and integration of CBDC in day-to-day business convenient and safe. But how should one do that, when integrating applications across different companies in distributed setups remains a very cumbersome process? Such systems need to provide atomic transactions, such that a change across multiple systems is either applied entirely or not at all. Otherwise, you quickly end up with consistency problems, such as canceling some invoices without having received the money.

Even more, especially when dealing with financial data, privacy and security are an absolute must. Few companies are willing to share their inventories and cash balances openly with everybody. Because of that, most companies end up with expensive, tightly locked-down systems we often refer to as silos. The result is that cross-system synchronization requires significant effort to build and continuously reconcile. Therefore, just creating a CBDC system is not going to be enough. We need to ensure that it becomes attractive, convenient to use, and able to integrate safely.

What is required is what we refer to as application composability: the ability to individually extend the entire system with one’s own workflows without requiring any additional collaboration or approval from a central operator. It should be possible for a supply chain, an insurance consortium, or an exchange to use CBDC money for atomic settlement without the central bank being aware of the context in which the money is being transferred. The central bank shouldn’t even be able to discern who the other participating parties are, beyond the ones directly involved in the money transfer. And other participants in the system shouldn’t even learn that there was a transaction.

Example: integrating CBDC money in a share trade workflow between parties A and B. Party A transfers its CBDC money to party B in exchange for some shares. The CBDC operator should not be aware of the share transfer, and the share operator should not be aware that the transfer was paid with CBDC.

How should that work? A central bank will not let other people deploy their applications within the central bank system, nor does a central bank ever intend to run unvetted software or software that is not directly tied to the service the central bank offers.

Here, we need to introduce another concept on which we based Daml. We built Daml to express rights and obligations digitally. Therefore, we can represent a transaction as a tree structure and involve the parties only on a need-to-know basis. This concept is called sub-transaction privacy: parties are only informed about the parts of the transaction they are entitled to see. To the best of our knowledge, this is a concept currently only available in Daml. Other contract languages treat smart contracts as blobs that are publicly known either to all participants of the system or to all participants of the transaction; only Daml performs a fine-grained decomposition.
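The idea can be illustrated with a small Python sketch (not Daml): each node of a transaction tree lists the parties entitled to see it, and each party receives only the subtrees rooted at nodes of which they are an informee. The party names and the share-versus-CBDC trade are hypothetical illustrations of the workflow in this article:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    description: str
    informees: set                       # parties entitled to see this node
    children: list = field(default_factory=list)

def _subtree(node):
    """Flatten a node and all of its descendants into a list of labels."""
    out = [node.description]
    for child in node.children:
        out.extend(_subtree(child))
    return out

def project(node, party):
    """Daml-style projection: an informee of a node sees that node's whole
    subtree; otherwise the party may still see subtrees further down."""
    if party in node.informees:
        return _subtree(node)
    visible = []
    for child in node.children:
        visible.extend(project(child, party))
    return visible

# A share-versus-CBDC trade with hypothetical party names:
trade = Node("DvP trade", {"A", "B"}, [
    Node("CBDC transfer", {"A", "B", "CentralBank"}),
    Node("Share transfer", {"A", "B", "ShareRegistry"}),
])
```

Here `project(trade, "CentralBank")` yields only the CBDC transfer, the share registry sees only the share transfer, and A and B see the entire transaction, matching the need-to-know visibility described above.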

Example of sub-transaction privacy: different parts of a trade workflow are visible only on a need to know basis. Each box is labeled with the parties who must be able to see the particular sub-transaction. For example, the CBDC transfer is not visible to the share registry, and share transfer is not visible to the central bank, while A and B see the entire transaction.

However, Daml is just a language. One key question a CBDC solution needs to address is on which platforms a central bank should offer such a service. The above reasoning tells us that such a ledger should provide the composability and sub-transaction privacy that Daml expresses to make the offering attractive enough for widespread adoption.

Figuring out the best platform for CBDC is not easy given today’s existing solutions. Many choices lack horizontal scalability and therefore put an upper limit on the number of theoretically possible transactions. Many ledgers have poor trust properties and cannot deal with malicious participants. Thus, to provide security, such systems are operated as strongly permissioned systems, with operators being cautious about who is allowed to participate. This approach locks down the system, making it available only to an exclusive club of strongly vetted and privileged users.

Even more, if a central bank selects a particular digital ledger, users of other digital ledgers will be excluded from the benefits, creating yet another silo that is hard to integrate. The only way around this is to build such a CBDC system on top of an interoperability protocol that bridges the gap between different technologies while still providing all the above properties.

For us, one thing is clear: to be interoperable, systems need to speak a compatible language, where we think that Daml is just the right tool. And we need a protocol that allows us to connect all Daml ledgers. We at Digital Asset are working hard towards this vision, which we are trying to fulfill with our next generation of Daml ledger integrations, based on our new Daml ledger interoperability protocol. We named this protocol Canton (if you are interested in the origin of that name: the federation of independent Cantons forms the Helvetic federation, more formally known as “Switzerland”).

Canton ticks all the boxes established above. It is a Daml-based protocol for interoperability between existing ledger technologies (databases, permissioned or open blockchains, hardware enclaves, etc.). While Daml provides the capability to write your distributed application independent of the platform you want to run it on, Canton extends that capability such that you can run your Daml workflows across multiple platforms and make them interoperate, even when the original platform authors didn’t add this capability.

Domains are an abstraction of the concept of a distributed ledger, allowing us to turn any distributed ledger into a Canton domain, leading to Daml ledger interoperability.

If you are interested in learning about Canton, please look at the short explanation named “Elements of Canton” and consult the Canton documentation. You should definitely try it out to convince yourself how easy writing distributed applications becomes with Daml. Daml also has a new learn section where you can begin to code online:

Learn Daml online

How to Start a Startup: Looking ahead into the unknown


Welcome to “How to Start a Startup,” a short series aimed at giving aspiring entrepreneurs a range of views about starting a new team, product, or initiative.

This week, our panel of founders will discuss what marks a company’s progress beyond the startup stage. 


Michael Shaulov, CEO and Co-founder of Fireblocks:

I think that building a three-year roadmap is a bit meaningless. It’s better to have a strategy that lets you say, “In three years’ time, this is essentially what we’re going to build.”

Everything we’ve discussed to now is about planting the seed of a company. You’ve got an idea. You take it into it and find your co-founders and your first hires. You bring a little money in — and it feels like the moment when you really start to look ahead. What sort of things are you really thinking about? How do you set that seed up to grow?

The main thing is to make sure you keep executing into the right market. Even if you’ve gathered a lot of information for your seed round, markets remain dynamic — especially if they’re new. 

Most of my startups were in completely new markets that continuously evolve. Something you thought was a huge opportunity yesterday is no longer an opportunity today. But meanwhile, a new opportunity opens up next door, and you need to pivot to it. 

Also, many times you think that you’re executing in the market in a particular way, but conversations with customers reveal that they actually view you very differently from the way you view yourself. 

This sort of self awareness about markets and value propositions is the most important part of the journey. 

How do you go about setting roadmaps at that time? What’s the right level of detail?

A strategy document — a set of assumptions and analysis — should be like two or three pages. What do you want to be when you’re really big? What assumptions are you making about how the market is going to evolve or expand? What will support the growth to take you from point A to point Z? 

You do need to be honest with yourself. When you look at a lot of the blockchain projects from 2017 or 2018, that was essentially what was missing: the plan. They basically said, “We are going to take over the world. Gold is a $4 trillion opportunity. We are going to take this opportunity, and we’re going to be like the tokenized gold exchange.”

But is what we’re going to build really worth $4 trillion? That’s not true, right? 

At the end of the day, you have a long set of assumptions and a long list of things that need to happen — and some of them are within your control. And actually, to be worth $4 trillion, some of these things are outside of your control. 

Those things need to be mapped. I think that building a three-year roadmap is a bit meaningless. It’s better to have a strategy that lets you say, “In three years’ time, this is essentially what we’re going to build. Now let’s execute for the next six months or the next nine months.”

It’s much less of a product roadmap and more like a company roadmap at that point, right?

Yeah. A lot of the time people think, “We will build it and they will come.” And that’s not the way it works. There’s a concrete set of actions in terms of market education, evangelism, displacement of incumbents, whatever — like, 10 activities you need to run. And another 10 activities that are completely out of your control, like the market conditions at every given moment.
Any one of your assumptions about how the market will evolve can fail, and you need to be very aware that those assumptions are failing.

Ben Milne, Founder and Chairman of Dwolla Inc:

When we started looking more inward directly at helping our clients grow and really fortifying our infrastructure for them and stopped focusing on the narrative with the media, it’s amazing how much noise just went away

So you’ve got this thing set up. When did you really start to look ahead? When did you start thinking in bigger increments than in the early days? Was there a particular moment when you started to think about a three-year roadmap or growing to 100 people?

Within the first couple of years, we were still building the most needed thing without a specific roadmap. Until we were about 20 to 30 people, we didn’t have a single product person. We were just cranking on everything: now, fast, get it out, solve it. 

“What’s the next thing? What’s the next thing? What’s the next thing?”

After we raised a round from Andreessen, we opened a San Francisco office with the intention of building an executive team that would really run the business and take it to the next stage. 

Once that VP team was built and things transitioned to the new product leader, we started to get more professional: “These are the initiatives, these are the themes, this is how it’s going to tie together.” And we started to build out real processes — not just around how we built technology, but how we communicated and mapped the work that we’re doing to what we hope to achieve. 

That measurement helped us see the divergence between work and objectives and understand that we needed to make a change in the company.

Do you wish you had done that earlier, or do you think that you timed that at roughly the right time?

It’s really hard to know. That period was extremely, extremely chaotic. In retrospect, I can speak to what I learned and be self-aware of how to use it in the future — but I don’t actually know whether or not that was the right call. We did the things we said we were going to do. When we realized what wasn’t working, we made a change. I think those are the right things to have happened. They may have and probably would have happened regardless of where we were … But it was a really crazy time.

Up to that time, I was outside of the company and out on the road evangelizing the company and what we were working on. I was doing a quarter million miles a year on Delta alone.

Can you point to like one thing in particular that you think has had the biggest impact in terms of the change in the company?

I can tell you, my quality of life is significantly better than it used to be.

What change made that possible?

I told people until X, Y and Z was fixed, I didn’t want to travel. I stopped accepting conference invites. I started setting an allotment that I would travel and forcing myself to pick what it was. I started sending other people — and if they didn’t want to go, we just wouldn’t do it. 

And one of the other things that we changed was we made a conscious decision to stop doing PR focused on the company where we were paying homage to ourselves.

We decided that if we were going to be an infrastructure company, we needed to find a way to focus first on our clients’ growth and success and focus our strategy on telling their stories. We don’t need to be running a PR program to do that. 

When we started looking more inward directly at helping our clients grow and really fortifying our infrastructure for them and stopped focusing on the narrative with the media, it’s amazing how much noise just went away. It became more about the work itself; the work continued to get better, and more customers continued to come in. Customers became clients, clients became partners. The DNA really started to evolve, and it became less about the story and more about, “Hey, who launched? Did they have any errors? Holy smokes, this is really great. That’s a new payment flow. They onboarded a bank. Hey, let’s put this stuff together.” And no one was concerned about telling anybody about it, just doing super high-quality work. 

That changed the company.

How did you go about instituting that across the company? 

When we restructured, the company got small really quick. That part of it was painful for everybody involved. Those were hard days. 

Sometimes we bite off really hard days for the actual benefit, right? So we’ve got to take our medicine. The business takes some medicine so that we can have the benefit of what’s on the other side. 

That means there are many roles that are simply not around anymore. And by not filling them again, by changing the business’ structure and by changing goals, you can quickly change the course of a business. 

A lot of problems are functional. People, process and technology, change one, change all three. Sometimes you’re the thing that needs to change — but you can always have an impact if you want to, I think.

Arti Arora Raman, founder and CEO of Titaniam:

“That’s the beautiful thing about having people involved along the way: they feel like they own your roadmap as well”

You started with the thesis, “OK, I’m working towards this end goal, this bigger sale, but I’ve got these great internal sales, and I’m gonna build.” So how do you incorporate them into your roadmap? Do you keep the same destination in mind?

It’s basically a credibility journey. In our business, the same company uses the same product in many different places in the organization. Within a large financial institution, there will be 20 or 30 places where they could use my product. 

So early on, I was very focused on the highest-value item without thinking too much about what bar I am going to have to meet to get them to deploy my product more widely. Now I can get an internal use case: You can try me out, I’ll still be paid, and I can build up both my revenue as well as my credibility. Often you need a champion who’s going to use your product — and this gives us a way to get that without creating too much risk for the organization.

I initially ignored that milestone along the way, but I’m super bought into it now.

Do you have some sort of way to present these roadmaps externally? How do you think about that information when someone says, “Hey, what’s your roadmap?”

We don’t put months or quarters on it. We just say, “We’re tackling this problem now. We need to tackle this other problem later.” 

And the type of data we present depends on who we’re talking to. The actual user of the system will be more interested in feature availability, and we keep it at that level. 

But if we’re talking to an executive, we talk about protecting sensitive data across the organization over the next five years. 

And they have a hand in shaping it. That’s the beautiful thing about having people involved along the way: they feel like they own your roadmap as well. So it’s never, like, “Hey, you told me two months ago you’re going to do this, and here you are doing something else.” 

Throughout my career in consulting and security, the best relationships haven’t been us vs. them; you’re going to discover things together.

So what’s your strategy for your early users? Will you use a completely un-scalable but very hands-on process and then rely on some word of mouth? Or will you start thinking more about marketing?

Ours is a product you can download, deploy and use, and we don’t have to be on site unless something’s horribly broken. So we spend most of our time with customers understanding their requirements before we write the code. But once we deploy the product, we don’t need to hold their hands too much. 

Our goal is to have self-service trials — some sort of self-service conversion, where we can say, okay, this trial is free for, let’s say, 60 days, and after 60 days we will automatically start invoicing, or something like that. There would be an expectation, some sort of terms and conditions, and we can go from there. So that’s our goal. But there is another element to our product: it isn’t an online cloud service type of thing. They download it, they take it, and we aren’t as able to monitor it as you would a cloud service. So we’re going to have to balance that, but there’s nothing about our product that would require us to hold their hand.

Daniel Chait, CEO and Co-founder of Greenhouse:

“So what we did was, we would literally sit down and write the story that we were going to use to raise our next round”

How do you start to think about that first year? Do you build a roadmap? Are you trying to get an MVP done?

There’s no such thing as raising one round of VC. If you decide to be on the VC path — which is a decision you shouldn’t make lightly — the only question is what you are going to raise and how you are going to raise your next round. We just raised money; now we have two years until we die. What are we going to do?

And more than anything, that really governed our planning and prioritization from that point on. So what we did was, we would literally sit down and write the story that we were going to use to raise our next round. “Hey, I just raised my seed round. What’s my Series A story?” 

Are you bragging about how many customers you have? Are you talking about your deep R&D innovation? What are you going to hang your hat on? Back it out quarter by quarter from now till the next year: What are you going to do?

That was really helpful in focusing our energy in those early years. At that point it was really clear: I can show this much growth, or launch this feature or do these things, that’s going into the deck. Everything else you do that isn’t leading up to that is secondary.

Did you ever rewrite the deck, or did it stay fairly stable?

Writing an actual fundraising doc means polishing it for a given context. The crux was why we were going to raise our next round. What’s it going to be driven off of? What kind of metrics do I really need? Is it 2x growth or 5x growth? Have we shown a shift into more of these types of customers than those types of customers? Did we launch this product? 

What proves that we’ve done what we said we were going to do last round and justifies the money for the next round?

The worst time to find out that your fundraiser isn’t going to work is after you tried and failed.

Did you start socializing that second-round thesis as soon as you closed the first round?

I would talk to other investors and say, “Look, I’m not raising money, but here’s what I’m thinking. If I’m going to raise a Series B today, I know it’ll be at $X million of ARR. I’ve got to be growing this much, I ought to see these numbers, what do you think?”

You get a little bit of a read on the market and then when you’re at that number or better, you can come back to them and say, “Hey, when we talked last year you said if we got to 18 this year that’d be a pretty good outcome. Well, we got to 20, what do you think? Is that good?” 

And they’ll go, “20? That’s amazing.”

By the time we were 50 or 70 people, I was explicitly talking about “the fundraising deck.” But those milestones were effectively the same things that we would tell the company at an all-hands: You show a deck and say how you’re progressing and identify goals for the year.

So it flowed through from this fundraising necessity down to monthly and quarterly sales and revenue and retention and growth and whatever milestones that the company all knew. Not that they were tied to a fundraising process explicitly, but that’s what drove it.

How did you get some of your first customers right after you did your funding? Your early ones are super-heavy hand-holding: You’re looking to your network and really bootstrapping it. But then as you start to fold into your business model, how did that inflection point look for you?

We did this course at General Assembly, which bootstrapped a lot of word of mouth. And that word of mouth was better than we ever could have guessed. 

At some point — I think after we raised a round — we were like, “Hey what if we even grew faster? How would we do that?” 

Because at that point we were just taking calls. We would really just show them the product and send them the price. We weren’t selling, we were just taking orders. At some point we sat down and were, like, “OK, what if we tried to get more customers?” And at that early part of your business, you literally don’t know what’s going to work.

And so we said we’re going to hire a marketing person to do inbound demand gen, and we’re going to hire a person to build an SDR team and do outbound. Probably one of them is a waste; we just don’t know which one. So we’re going to hire two people and see what happens.

It turned out that they both started working roughly equally well, and so we just kept adding money to each in more or less equal proportion. If you look today at our business, it’s kind of still that. A third of our business is more or less inbound driven. They read our blog, they hear about us word of mouth, they do these other things in a directory or whatever and they come and say I want to demo on the website. And roughly a third of our business is outbound. We have SDRs who call and email and reach out to people and eventually they say, “Yeah, I’ll take a demo.”

And then a third of our business is hybrid: We track them on our website, and they engage in our community but they never click the demo button until our SDR calls them.

Marwan Forzley, Co-founder and CEO of Veem:

“And so I would say when you’re starting that early, don’t over-engineer anything. Be flexible. Take input, take a lot of it, adapt to being a child”

So, you’ve had this idea, you’ve built your MVP, you’ve got a couple of people working with you — co-founders or employees. And you finally get your first decent check through the door. What do you start to do at that point? How far in advance are you looking? How far in the future do you think about roadmaps? What do you think that first year after you’ve landed this capital?

I tell folks that startups are like people, they go through stages. You’ve got infants, toddlers, teenagers, adults, late adults, senior years, it’s important to understand where you are at. In the infant year you shouldn’t be doing stuff that really is for teenagers and vice versa. 

And so I would say when you’re starting that early, don’t over-engineer anything. Be flexible. Take input, take a lot of it, adapt to being a child. That’s all required to actually move from one station to another. As you move up into the later stages, then you’re starting to see processes and the ruggedness and scale. And you’ll find that now you’ll have less flexibility, but now you’re dealing with much larger numbers and customers, and it’s a whole different cycle. So plan for the phase you’re in, and evolve as you go from one phase to another. They’re all different phases.

What’s appropriate for one phase may not be appropriate for another, so you just have to change along the way. It’s very appropriate in the early days to hack things and move quickly and just dump whatever code you have, because you just want to get customers and make sure it’s working. But down the road, that becomes a problem, because then your whole code base becomes spaghetti.

How do you think about growth? Is there some inflection point you’re looking for where you really start to grow? Do you try to keep it artificially lean? And do you have any rule of thumb to guide those choices?

The way I usually do it is, if it is working, do more of it. If it works again, do it again. And keep increasing the amount of money and intensity of it, until it is no longer yielding. 

But you want to have rapid iterations, and as long as the customers are there and they’re taking it and the feedback is positive, you increase the pace. If the customer feedback is so-so, I would slow down and make sure you address the customer pain point before you fail. That one is a little bit of art, a little bit of science.

I say to everyone who’s doing a startup, you don’t have to solve everything at the same time. You can do it in phases. And everything ends up sorting itself out. Just be conscious of the stage you’re in, be flexible, and also expect ups and downs. I mean, it’s an adventure, so you’ve got to roll with the adventure.

Nimrod Lehavi, Co-founder and CEO of Simplex:

“Someone may fit right in what you need right now, but you’re not sure they’re able to grow to the next phase — and that will f*** you up so badly in a few months”

You’ve got a couple of people in, and you’ve got a little bit of funding, and now you’re starting to plan and think ahead for the first year, year-and-a-half. Do you sit down and build some KPIs? Do you build a roadmap? Do you build a vision deck? What are some of the first things that you’re doing as you start to look and plan for the future?

At the beginning, we didn’t plan any KPIs. But again, we had a very clear external target: find an acquiring bank. For us, finding a bank account, and a company that would process our transactions, was a huge barrier. It was crazy difficult, treading the thin line between crypto and fiat. Now it’s less challenging, but it was virtually impossible then.

So because you’re playing in finance, there were some hard requirements for you to get across the line. How did you even prioritize the work that you had to do? Or did you really just tackle problems as they came along?

Kind of, because there were so many do-or-die tasks that were logical-AND gating your application. If you miss one, you’re f***d, basically. 

What happens when you think you’ve got a plan – you think you’ve got some bedrock underneath you – and then you don’t? Things just change suddenly, and you have to respond to it.

In the worst possible manner, and with the worst possible timing. It’s insane, especially in fintech and working with banking infrastructure. If your bank suddenly decides to shut down your account, think what would happen if you had customers’ funds running through it on a daily basis. That’s what we do. We do processing, so money flows in, money flows out. No bank account? Gulp!

I know you had a small board, mostly of founders, but when someone said to you, “What’s the plan for the year or the next three months?” or anything like that, would you just walk them through it on a whiteboard?

Again, we had a pretty clear roadmap in the sense of constraints: what must happen, otherwise we can’t grow, otherwise we can’t sign that, otherwise we’re not profiting per transaction.

I think it’s more difficult when you have a standalone product, because then you have to explain why you are doing this instead of that. With us, it’s pretty clear: This one’s more profitable, so we’re doing this one.

How did you get your first exchange to allow you on there? 

The first company working with us was actually not an exchange. It was Genesis Mining. Again, it’s a small industry, and I was already at many conferences in 2013. Before we even started the company, I started going to conferences, so I knew quite a lot of people. 

And I spoke with them as if we were launching it tomorrow! In August 2013 in London, I spoke with a few people and told them we’re going to launch it soon. We launched December 14. So I was promising way ahead.

How did you make the transition from the very, very early users and the difficulties there, into a much more scalable process? Did you keep a small group of early users until you nailed your process, and then you scaled it? Or did you just kind of fight through everything as you went and ironed it out along the way?

More of the second. I think that’s the way to go. You understand, you get feedback. If you’re able to fix it fast enough, then it’s much faster than trying to imagine the entire product in your head and then launching it, only to find out that nobody wants it. I’d rather do like early-phase Facebook: move fast and break things. I think it’s smarter for startups. 

Again, it’s harder when you’re a financial institution, because if you break things, you could end up in Guantanamo Bay.

How do you weigh that when you’re thinking about dealing with someone’s money? I mean, the negative feedback is going to be significantly more angry — but then also the opportunities to make a big mistake are larger, as well.

So on security, we never compromised on anything. That’s the main thing, I think. Regulation, when you’re doing things in small numbers, nobody gives a shit. Like I can do… again, not illegal things, but kind of a gray area, things in the tens of thousands of dollars, and nobody would care. If it’s getting to the millions of dollars, I’m going to get arrested at JFK. We’re never doing that.

We might do some tests. My favorite expression with our US attorneys at the time was, “OK, put this risk on the scale between Guantanamo Bay and a slap on the wrist. Like, we’re closer to what?” And if it was a slap on the wrist, then we took the risk.

What’s ahead

Each post in the series comprises a short set of questions about a specific topic. Our participants will offer their own views and we encourage you to add your voice to the discussion.

Do you have your own questions for Digital Asset’s panel of entrepreneurs? Please write to us at

Digital Asset can remove engineering roadblocks for entrepreneurs. Contact us to learn how to build and deploy software with Daml, an open-source toolchain for building applications, and DABL, a scalable cloud environment for launching SaaS applications with a serverless experience.

Daml also has a new learn section where you can begin to code online, or you can try DABL here:

Learn Daml online

Try DABL for free