Release of Daml SDK 0.13.50

Daml Compiler

  • damlc test now initializes the packagedb automatically, which means
    that it works on projects that declare custom dependencies in
    daml.yaml without having to call damlc init first.
  • Choices marked explicitly as preconsuming are now equivalent to a
    nonconsuming choice that calls archive self at the beginning, as the
    sketch below illustrates.
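
A minimal sketch of the equivalence, using a hypothetical template:

    template Account
      with
        owner : Party
      where
        signatory owner

        -- Explicitly preconsuming: the contract is archived before
        -- the choice body runs.
        preconsuming choice Close : ()
          controller owner
          do pure ()

        -- The equivalent nonconsuming formulation.
        nonconsuming choice CloseExplicitly : ()
          controller owner
          do archive self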

Daml Integration Kit

  • The simplified kvutils API now uses com.digitalasset.resources to
    manage acquiring and releasing resources instead of Closeable.

Daml Standard Library

  • Add CanAbort instance for Either Text.
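
For illustration, a minimal sketch of what this enables (the validation function is hypothetical):

    -- abort now works in the Either Text monad, short-circuiting with
    -- the given message as the Left value.
    validateAge : Int -> Either Text Int
    validateAge n
      | n < 0 = abort "age must be non-negative"
      | otherwise = pure n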

Daml Studio

  • Support all build-options supported by daml build.

Sandbox

  • On initialization error, report the error correctly and exit with a
    status code of 1. Previously, the program would hang indefinitely.
    (This regression was introduced in v0.13.41.)
  • Upgrade the Flyway database migrations library from v5 to v6.

Daml Triggers – Experimental

  • Daml triggers can now be tested in scenarios. Specifically, a
    trigger’s rule can be executed in a scenario and assertions
    performed on the emitted commands.

Developer Experience (DX) 101: How to evaluate it via a diary study

Developer experience (DX) is the overall user experience (UX) developers have while engaging with a dev-related product, e.g., a programming language, SDK, API, framework, library, documentation, or code examples. Having a great DX means that your users are more productive and content, and ultimately become advocates of your product: all the more reason to make sure that you have an awesome DX!

Developer experience is an important topic given that there are 370 languages used on GitHub alone [12], 22 000 APIs registered at ProgrammableWeb [13], and somewhere between 21 000 000 [14] and 23 000 000 [15] developers in the world. This is also reflected in Google Trends, where we can see that it receives a fair amount of attention (roughly 1:4) compared to its root discipline – user experience.

[Figure: Google Trends interest in “developer experience” relative to “user experience”]

Unfortunately, developer experience is an underserved topic: only a handful of articles provide general guidelines and opinions on what makes a good DX [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], and that is almost the entire list of what can be found out there. There are also no practical guides on how to evaluate DX and what steps need to be taken to do it.

Because we want to bring a best-in-class DX to distributed app development, we have started a systematic assessment of the Daml SDK’s developer experience. Below you can find our step-by-step guide on how to conduct a diary study aimed at evaluating developers’ first contact with a dev-related product (the Daml SDK).

Evaluating developer experience via a diary study

A diary study [16] is a great method for evaluating developers’ experience with a dev-related product because:

  • It spans more than a single coding session
  • It allows developers to move between different learning materials
  • It allows them to fit the work into their schedule
  • It is the closest thing to learning to code in the real world

As with any UX study, there are three phases to making it happen: planning and preparing, executing, and analysing.

Preparation and planning

We start with a doc summarising all the instructions and tasks that study participants are asked to perform. This doc (e.g., a Google Doc) is then shared with the participants, and it includes the following.

The first page is a summary of the study timeline, a high-level overview of the tasks, and instructions on whom and how to contact with study questions. Having an open and clearly defined communication channel during the study is crucial, as there are always things that need to be cleared up once the study starts.

Next come the detailed task descriptions, which need to be clearly specified, including what is in and what is out of scope. For example, if the task is to develop a task-tracking application using your SDK/API/framework, it needs to be broken down into the items that need to be covered and the ones that don’t (a minimal Daml sketch follows the list):

  • Using the Daml SDK, develop a task-tracking application.
  • Each task has an issuer and an assignee.
  • The issuer is the person who creates the task and suggests it to an assignee.
  • The assignee can 1) accept or 2) reject the task.
  • If the task is accepted, it can then be marked by the assignee as completed. Only accepted tasks can be marked as completed.
  • If the task is rejected, nothing else can be done with it.
  • Storing tasks permanently in a DB or any other storage is NOT part of this assignment.
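
To make the breakdown concrete, here is a minimal Daml sketch of such a task model. All names are illustrative, not part of the study instructions:

    template Task
      with
        issuer : Party
        assignee : Party
        description : Text
      where
        signatory issuer
        observer assignee

        -- The assignee can accept the suggested task...
        choice Accept : ContractId AcceptedTask
          controller assignee
          do create AcceptedTask with
               issuer = issuer
               assignee = assignee
               description = description

        -- ...or reject it, after which nothing else can be done.
        choice Reject : ()
          controller assignee
          do pure ()

    template AcceptedTask
      with
        issuer : Party
        assignee : Party
        description : Text
      where
        signatory issuer, assignee

        -- Only accepted tasks can be marked as completed.
        choice MarkCompleted : ()
          controller assignee
          do pure ()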

As part of the task(s), provide the code snippets necessary for the task(s) that would be a potential time-waste if devs needed to write or google them on their own. For the example above, we could provide snippets for logging in users, as they could already do that with the SDK/API/framework they are familiar with.

In the process of writing the task descriptions you will have to optimize the tasks to fit the study duration. The study duration is a combination of your budget, team size, and the task(s). We were able to get great results with a 5-day study asking the participants to invest one hour daily. This format also makes it easier to provide help in between and therefore avoids participants getting stuck.

As the last part of the document, prepare daily comment boxes/areas where participants can write down what was good or bad while engaged with the task(s). Make sure to give enough pointers for what they should write about, e.g., for what was good:

  • IDE support, documentation, language syntax, clear error messages, anything else that comes to mind
  • Why was it good from your perspective?
  • Please attach screenshots when applicable 

and similarly for what went badly.

To wrap up the study, prepare a draft version of a post-study survey or interview. It is going to be a draft because you will adapt it with insights coming out of the participants’ diaries. For a systematic assessment of first contact, include standard usability measures, e.g., the single ease question (SEQ) [18] or the number of errors.

Execution

For the execution phase it’s crucial to recruit participants matching a detailed description of your target developer profile, the minimum being age, years of experience programming (professionally), and the desired programming language/SDK/API/framework skills. The number of participants can vary – you can get good results with as few as 5 participants. For easy recruiting we chose to use TestingTime.

A big part of executing the study is reminding the participants about the tasks. You can turn this around: it’s more about seeing if they’re doing OK, if the instructions are clear, or if there’s anything else you could help them with. 

During the study, make sure to read the diaries as they are filled in. This is really important, as sometimes you need to ask clarifying questions or ask the participants to upload screenshots to make an issue clearer. Their entries will also inform your survey/interview (e.g., “Why haven’t you engaged on Slack?”).

Data analysis

Once you have closed the study and collected the data, you can start analysing it. As mentioned before, there are standard usability measures (SEQ, number of errors) that can help assess what needs to be tackled. I personally prefer to do an affinity diagram analysis [17], grouping and regrouping items a number of times (until it makes sense).

Traditionally, affinity diagram analysis is done with sticky notes, in a room, with a team that goes over the data. I’ve learned over the years to work with Excel/Google Sheets instead, as it suits today’s fast-paced working environment: people are busy and have limited time (even though it’s a lot more fun to do it the traditional way).

As the diaries and survey entries are already in digital form, you can copy them into separate sheets within the same doc. For the diary entries it may be helpful to have participant IDs as rows and days as columns, and to freeze those for easier navigation.

Start the analysis by copying significant items to a new sheet. I typically have one row for the overarching theme and another one for supporting items. For each new iteration, just duplicate the sheet and continue the analysis.

Once you feel that the data is in a good enough state, share it with the team/stakeholders for their input. For me, a good enough state is when you have regrouped 3–5 times and have enough quotes to support the categories. The last part is crucial, as team members/stakeholders benefit from reading the quotes and empathizing with what developers are going through.

As the very last step, prioritise the discovered issues and ideate on how to fix them. And voilà! That’s it. Oh, one more big and important thing: make sure to execute on the fixes that you and the team deem important. You’re doing it for the devs, so it has to go live 😉

Thanks to Anthony, Bernhard, and Moritz for their review and feedback on the blog post!


TL;DR: Diary studies are a great way of evaluating developer experience (DX) and developers’ first contact with your dev-related product, as they are the closest thing to learning to code in the real world. When planning the study, make sure that you have a doc that participants can refer back to with 1) study instructions and 2) detailed tasks. Provide code snippets for parts that are not important for the task and would otherwise waste time. When executing the study, pay attention to diary entries, as you might need to ask for clarifications/screenshots. These entries will also inform your post-study survey/interview. Analyse the data with affinity diagram analysis using Excel/Google Sheets.

References

[1] https://hackernoon.com/the-best-practices-for-a-great-developer-experience-dx-9036834382b0

[2] https://medium.com/@albertcavalcante/what-is-dx-developer-experience-401a0e44a9d9

[3] https://medium.com/apis-and-digital-transformation/great-developer-experiences-and-the-people-who-make-them-b97b544caba9

[4] https://dev.to/stereobooster/developer-experience-how-i-missed-it-before-47go

[5] https://uxdesign.cc/contributing-great-developer-experience-designer-e1f497b0fb4

[6] https://www.hellosign.com/blog/the-rise-of-developer-experience

[7] https://www.aavista.com/how-to-create-a-good-developer-experience-for-your-api/

[8] https://www.moesif.com/blog/api-guide/api-developer-experience/

[9] https://blog.apimatic.io/

[10] https://blog.apimatic.io/what-exactly-is-developer-experience-1646b813df14

[11] https://uxmastery.com/resources/techniques/

[12] https://octoverse.github.com/#top-languages

[13] https://www.programmableweb.com/apis/directory

[14] https://insights.stackoverflow.com/survey/2019#overview

[15] https://www.daxx.com/blog/development-trends/number-software-developers-world

[16] https://www.testingtime.com/en/blog/diary-studies-a-practical-guide/

[17] https://www.interaction-design.org/literature/article/affinity-diagrams-learn-how-to-cluster-and-bundle-ideas-and-facts

[18] https://measuringu.com/seq10/

Release of Daml SDK 0.13.46

Sandbox

  • The sandbox uses a new payload format for authentication tokens (JWTs). The old format is
    deprecated, but still works.
  • Metrics are now namespaced by "daml" and their names have been standardized to snake_case.

Daml Studio

  • Scenarios with unserializable result types no longer crash the scenario service.
  • Fix a bug introduced in 0.13.43 that caused Daml Studio to stop responding after code completions were requested.

Daml-LF

  • Prohibit contract IDs in contract keys completely. Previously, creating keys containing absolute (but not relative) contract IDs was allowed, but lookupByKey on such a key would crash.

Daml Compiler

  • Added a --drop-orphan-instances flag in daml damlc docs.
  • The modification times in a DAR are now fixed to a given value which makes the output of daml build deterministic in single-threaded mode (which is the default).

JSON API

  • The HTTP JSON API now uses the same payload format for authentication tokens as the sandbox. The
    old format is deprecated, but still works.

JSON API – Experimental

  • Support Exercise by Key. See issue #4099.
  • Response format in searchForever changed to be more like exercise. See issue #4072.
  • In ‘search’ endpoint arguments, %templates is now templateIds.
    Additionally, all contract query fields must occur under ‘query’. See issue #3450.
  • WebSocket contract search at /contracts/searchForever. See issue #3936.

Indexer

  • Potentially fix a bug when recovering from failure.

Daml Standard Library

  • The Template, Choice, and TemplateKey typeclasses have been split up into many small typeclasses to improve forward compatibility of Daml models. Template, Choice, and TemplateKey constraints can still be used as before.
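
For illustration, generic code using these constraints keeps compiling unchanged; a minimal sketch (the helper is hypothetical):

    -- The Choice t c r constraint still resolves as before, even though
    -- it is now assembled from smaller typeclasses.
    exerciseAnyChoice : Choice t c r => ContractId t -> c -> Update r
    exerciseAnyChoice cid arg = exercise cid arg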

Ledger API Server

  • Publish the resource management code as a library under com.digitalasset:resources.

Ledger API Authorization

  • Support ES256 and ES512 algorithms for JWT.

Release of Daml SDK 0.13.43

Daml Compiler

  • The build-options field from daml.yaml is now also respected when --project-root is used.

Daml SDK

  • Docker images for this release and releases in the future are built using the Dockerfile of the corresponding git tag and are therefore stable. Previously, they were updated whenever the Dockerfile changed.

Ledger API Server

  • BREAKING CHANGE lookupByKey now requires the submitter to be a stakeholder on the referenced contract. See issue #2311 and issue #3543.
  • Metrics: Update dropwizard to version 4.1.2.
  • Authorization: Support elliptic curve algorithm for JWT verification.

Sandbox

  • Allow submitMustFail in scenarios used for sandbox initialization.
  • Loosen database schema to allow persistence of transaction ledger entries where no submitter info is present (typically when the submitter is hosted by another participant node).
  • Daml trace logs (trace, traceRaw, traceId) are now logged via the regular logging system (slf4j+logback) at interpretation time via the logger daml.tracelog at DEBUG level.
  • Fix bug that can cause the transaction stream to not terminate. See issue #3984.

Daml Triggers – Experimental

  • You can now configure a heartbeat message to be sent at a regular time interval.

JSON API – Experimental

  • The /contracts/search endpoint reports unresolved template IDs as warnings. See issue #3771.
  • Use a JSON string to encode template IDs, with a colon (:) separating the parts of the ID. The request format, with an optional package ID, is:
    • "<module>:<entity>"
    • "<package ID>:<module>:<entity>"
    The response always contains the fully qualified template ID in the format "<package ID>:<module>:<entity>". See issue #3647.
  • Align contract table with domain.ActiveContract class. The database schema has changed; if you are using --query-store-jdbc-config, you must rebuild the database by adding ,createSchema=true. See issue #3754.
  • The witnessParties field is removed from all JSON responses.

Beyond Tokenization

Many industries are on the brink of the next technological revolution in record keeping. Ten years after Bitcoin made its splash, we’re seeing many who are inspired by some of the benefits the technology promises outside of the money use case:

  • Reduction in risk due to verifiable, immutable records
  • Greater efficiency thanks to shared real-time data
  • Simplification of processes enabled by transactions across trust boundaries

However, many are still unsure whether these benefits can be achieved, in part due to the modest speed of adoption of these technologies over the first ten years.

In this article I will lay out why we at Digital Asset believe that distributed ledgers and smart contracts are the next big step in the digitization of assets and processes. I will also offer an explanation of why adoption is taking so long to gain momentum, some of the common misconceptions around digitization of assets, and how we are addressing those obstacles within the Daml ecosystem.

The Evolution of Record Keeping

“Record Keeping“ may not have the fashionable ring of “Blockchain”, “Smart Contracts”, or “Tokenization”, but it’s what all of this technology is about. Records of contracts, laws, and ownership, as well as the processes to change these records are foundational to society and industry. Consequently, the technology used for record keeping is constantly evolving, with each new innovation trying to solve problems present in the previous. 

Physical ownership records, be they paper-based bearer instruments like promissory notes or cash, or centralized records like land registries, have been around for millennia. They solve the foundational problem of having an agreed and legally enforceable way to ascertain who owns what.

Unfortunately these paper-based records and processes are slow, expensive, and error-prone. Even before the advent of computers, special machines were developed to aid with the task of keeping crucial records accurate [1]. The advent of computers in the second half of the 20th century offered an enormous boost to the efficiency of keeping and updating records, leading to the broad digitization of most asset classes, including cash, stocks, bonds, treasury notes, and all manner of certificates.

Efficient distribution and communication followed soon after, first through point-to-point digital communication channels, and later the internet.

The resulting increase in the efficiency of data processing and communication, as well as in the scale of the distribution of data, has brought the original problem – being unable to unequivocally ascertain ownership of an asset at a given point in time – back to the forefront. Each record is now copied manyfold across a loose network of computers, and record changes trickle through the network in multiple steps. Complex and expensive reconciliation is needed to keep all the copies in sync, and errors are commonplace, leading to even more expense on dispute resolution and exception handling.

The computer revolution didn’t change the role of institutions in record keeping. Most records, be they bank accounts, insurance policies, health records, or land ownership, have a “master copy” controlled by a single entity or a small number of entities. These controlling parties are required as a matter of business, and often by law, to keep this data, which represents their clients’ legal title to those digital assets, accurate and safe.

Properly implemented distributed ledgers address these problems by granting more parties access to the master copy and codifying the rights of the trusted entities into verifiable actions. This removes the need for reconciliation and reduces the total trust required in the controlling institutions.

So why have we not seen rapid, widespread adoption of tokenization for a broad class of assets?

I believe there are several major reasons for this, but also that all of them can be overcome with a shift in mindset and recent technological development. The time is ripe for a change.

Tokenization misses the point

Tokens are digital bearer instruments. A token should represent a redeemable claim to an asset, deriving its value from that claim. Holding the token is equivalent to owning the claim, and ownership of the claim should be freely transferable. In these respects, tokens mimic cash:

The said [Federal Reserve] notes shall be obligations of the United States […] They shall be redeemed in lawful money on demand at the Treasury Department of the United States, in the city of Washington, District of Columbia, or at any Federal Reserve bank.

Many have offered multisignature as a way to add a layer of legal control on top of a token (ourselves included!), and this approach does have some appeal as a final step in the process of moving and controlling money. However, legal frameworks and processes involve far more coordination and authorization than the simple final exercising of private keys. This makes multisig a worthwhile step in some processes, but it falls short of fully representing the legal requirements most businesses need to operate under. Specifically, multisig may work for bearer assets in some cases, but it does not work for ownership records.

In reality, most assets are not bearer instruments. Ownership records are kept and controlled by regulated entities recognised by law. Ownership kept on that regulated ledger is legally enforceable, and restrictions to ownership and transfers often apply. Assets derive their value from the legally enforceable set of rights they bestow upon their owner.

Real estate is a good example of an asset and a bad use-case for tokens.

Real estate is not a bearer instrument. If I lose the key to my house, I don’t lose ownership. Similarly, if I turned up to someone else’s property and demanded forfeiture while waving a paper land title, I would likely be arrested for forgery or theft.

Real estate ownership is controlled by land registries and only the land registry’s ledger is legally enforceable. In many places, there are restrictions on who may own real estate and to what degree it is fungible and transferable.

So if I lost the private key to my real estate token, I would similarly not lose ownership of my house. It seems inconceivable that a judge would evict a local homeowner in favor of a pseudonymous, foreign token holder. It would be especially dubious if that token holder had gained control through means outside the court’s jurisdiction (nefarious or otherwise). This raises the question: if your goal is to ‘maximally decentralize’ this type of asset, what protections did your token holder receive for the cost of implementation complexity?

The purpose of buying a plot isn’t to get an entry in the land registry, nor does land ownership bestow full control of a cone from the centre of the earth into outer space. It gives the owner a very specific set of rights: the right to build a house, the right to keep strangers off the land, and the right to split the land and sell a part on. No token will ever do the first two for you.

Indeed, for many other types of “ownership”, like securities or media, the term is used to convey an idea of a set of rights, which are actually traded individually: lease-holds or mineral rights on land, or dividends or voting rights on securities. In some areas, this is so intricate that “ownership” has become a virtually meaningless term. Digital Rights Management is one such area and shows what “ownership” of digital assets is really all about: Rights.

To reap the benefits of open or shared records, we can’t just record ownership as opaque tokens. Instead, we must represent the rights that make up ownership in our digital systems, be they centralized or decentralized. This is where smart contracts come in: they aim to encode the specific rights that ownership of an asset grants to the owner, as well as the rules and processes around those rights.

Rigid Ledgers

The rules set forth by smart contracts do not replace law and regulation. Indeed, real world law, regulation, and contracts are often open to interpretation. Their practical meaning is determined by practice and legal test cases.

Having actions take place with absolute inevitability and rules enforced by a disinterested network of nodes does not reflect the subtlety of the real world. Such rigidity also leads to enormous risk as unintended flaws in contracts are set in stone. This isn’t hypothetical risk, either. The famous DAO exploit was caused by a subtle bug in a smart contract and could only be fixed by a majority of all Ethereum users updating their nodes to agree to fork the blockchain [2] and create an alternative reality.

Such rigidity is not desirable for most business use-cases. Instead, most smart contracts must be a reflection or encoding of the contracts, laws, and regulations governing the real world. Parties bound by a contract must be able to change, supersede, or tear it up by mutual agreement, just as they would with a real contract. The rules of a contract must be enforced by parties with a stake and interest in the contract, with clear avenues for dispute resolution, while those outside the contract should have no say in it.

Poor Privacy

Asset issuers and investors have distinct privacy requirements. Investors would like their holdings and transactions to be kept secret as far as possible. Issuers have to know who holds their assets and are responsible for keeping personal information safe.

Most blockchain systems are entirely public and fully replicated between all participants. The main privacy safeguard on most chains is the use of pseudonymous addresses, which provide a relatively weak form of anonymization.

For investors, pseudonymity is often an ineffective form of privacy and requires a great deal of care. Since all users can see all transactions, it is possible to infer a lot [3]. The risk of confidentiality being broken and information about trades and transactions becoming public is unacceptable for many businesses.

For issuers, anonymity makes it impossible to fulfill their legal requirements around KYC, anti-money laundering, or data protection. Indeed, it is not entirely clear whether such systems can comply with data-protection regulation like the GDPR, which governs many types of records that could well be kept on distributed ledgers [4].

In other words, the (lack of) privacy afforded by most current platforms is inappropriate for all parties and asset types other than cryptocurrencies, which make this tradeoff for the sake of other network properties. But even there, privacy is desired by many, yet hard to achieve.

Blockchain does not replace Trust

Like record keeping, trust around ownership has evolved greatly throughout history. Taking banking as an example, it all started very simply. The trust relationship was fully centralized, with a client having to trust their bank entirely. Banking clients were fully exposed to their bank burning down, being robbed, or embezzling assets. Such a single point of failure is easy to understand, but highly risky.

Such risk is harmful to business and consumers alike, so today’s financial system has developed into a complex system of laws, regulations, and institutions that try to safeguard our assets. The perceived safety provided by the system is what helps our markets work smoothly and efficiently, benefiting society as a whole. However, they add complexity and opacity to a degree where many people no longer fully understand who they implicitly or explicitly trust. Some learnt this the hard way during the 2008/2009 financial crisis.

Public blockchain’s innovation here is to simplify by going to the opposite end of the trust spectrum. Trust is now transferred to a democratic majority – the 51%. It’s beautifully effective for applications like cryptocurrencies, where all rights concern only the record itself. A cryptocurrency user can have a coin record in their name, and they can transfer it.

Most rights are not like that. The right to build a house on a piece of land does not concern the record of that right. A real world party is needed to guarantee and enforce that right. Typically the guarantors of rights are the same institutions that are entrusted with the record keeping for those rights. Democratizing the record keeping for such records achieves little, which is why there are few truly decentralized applications other than cryptocurrencies.

Smart contracts do not replace trust, they make it explicit and transparent. A good system encodes the trust assumptions between all participants of a contract in such a way that all parties understand their rights, obligations, and exposures. There must be a clear understanding of who is ultimately responsible for maintaining the link between the ledger and the real world.

The risk of Lock-in

Developing distributed applications is largely akin to developing mainframe applications several decades ago. There is little abstraction between the surface language and the system internals. Applications are built for one infrastructure and tied to it forever.

Yet Gartner predicts that 90% of today’s platforms will need replacing by 2021 [5]. How could any serious business invest significant money to replace a critical system in the knowledge that they will be locked into a technology and an infrastructure vendor that may be superseded in two years?

The ledgers we use need to catch up with traditional application development stacks, which offer clean abstractions between different layers. Only by making applications portable between ledger infrastructures can they be future-proofed and development de-risked to a degree where widespread adoption becomes viable.

Daml: Smart Contracts for Intelligent Risk-Taking

Daml is Digital Asset’s answer to all of these issues. It is a high-level smart contract language designed to bring unprecedented productivity to the development of distributed applications for enterprise and beyond. 

A Concise Language for Records and Rights

Daml has a rich data definition language to describe records, as well as the rights parties have according to these records. For example, the representation below of ownership of a piece of land gives the owner the right to request sub-division of the plot.
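
The original post embeds the example here. A minimal reconstruction consistent with the surrounding text follows; apart from LandRecord, which the post names below, the field and choice names are assumed:

    template LandRecord
      with
        owner : Party
        commune : Party
        parcel : Text
      where
        signatory owner, commune

        -- The owner's right to request sub-division of the plot.
        nonconsuming choice RequestDivision : ContractId DivisionRequest
          with
            newParcels : [Text]
          controller owner
          do create DivisionRequest with
               owner = owner
               commune = commune
               parcel = parcel
               newParcels = newParcels

    template DivisionRequest
      with
        owner : Party
        commune : Party
        parcel : Text
        newParcels : [Text]
      where
        signatory owner, commune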

Clear Data Control gives Flexibility

The above contract is between a land owner and their commune. Should the contract be erroneous, those parties should be able to amend it. Daml has a clean concept of data control through signatories. Owner and commune can jointly make changes to an ownership record.
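
The add-on contract referenced below is likewise embedded in the original post; here is a minimal sketch consistent with the text (all names assumed):

    -- Both signatories agree, e.g. via a propose/accept step, to
    -- replace an erroneous record with a corrected one.
    template CorrectLandRecord
      with
        owner : Party
        commune : Party
        badRecord : ContractId LandRecord
        corrected : LandRecord
      where
        signatory owner, commune

        choice Execute : ContractId LandRecord
          controller owner
          do archive badRecord
             create corrected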

Using this add-on contract, the commune and land owner may decide to correct a record by mutual agreement, in a completely compositional fashion. “Upgrading” or “correcting” contracts does not need to be planned for; a clear concept of data ownership makes it work out of the box. This is in stark contrast to public blockchains, whose permissionless smart contracts are specifically designed to be minimally modifiable, and ideally not modifiable at all.

Privacy, not anonymity

The Daml Ledger model specifies precisely who has the right to see data and follows a small set of principles to derive visibility rules:

  1. Whenever a party sees an action, it sees all of its consequences.
  2. A party sees any action on contracts in which they have a stake.
  3. A party sees nothing else, unless explicitly made an observer.

These rules maximize privacy while ensuring that any party ends up with a fully valid view of the ledger, and everyone is guaranteed to be informed of the state of contracts in which they hold a stake.
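
A minimal sketch (hypothetical template) of how the third rule surfaces in the language:

    template HealthRecord
      with
        patient : Party
        registry : Party
        auditor : Party
      where
        -- Stakeholders see every action on the contract (rules 1 and 2).
        signatory patient, registry
        -- The auditor sees the contract only because they are
        -- explicitly made an observer (rule 3).
        observer auditor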

The Daml Quickstart guide gives a powerful example of subtransaction privacy using a simple trade model. In Daml it is possible to atomically swap assets guaranteed by two institutions without either institution learning anything more than that the asset they back has been transferred.

Explicit Trust

The right to trigger the land subdivision process shown above cannot be enforced on-ledger. The commune or higher authorities have to be trusted to honor the on-ledger agreement and perform their roles in the process of deciding whether the request is accepted or rejected.

Daml makes this very explicit. No party can ever become a signatory on a contract without their consent. Having the commune as a signatory on LandRecord means they may be an obligable party under the rights encoded in the contract, and have agreed to the terms. The signatories of a contract are those that have to be trusted to uphold the rights described and maintain the link between the ledger and the real world.

Infrastructure Abstraction

Daml and its API are to distributed applications as SQL and ODBC are to centralized ones. The clean interface between data and logic specification, the underlying storage infrastructure, and the client application consuming the data makes applications portable without re-engineering.

This abstraction makes Daml easier to learn and faster to develop than other smart contract languages and tech stacks. The developer need not concern themselves with the detailed mechanics of the infrastructure they are planning to deploy to and system-level concerns are kept out of the language completely. By learning one language and one API, app developers become able to develop applications for local deployment, clouds, and distributed ledgers.

It also de-risks and accelerates the development of distributed applications. Applications can be developed without worrying about which infrastructure is the right one to pick, both for current and future needs. They can be developed and prototyped on entirely different infrastructure than the production deployment. Daml supports a wide range of infrastructures, from local database-backed solutions, to cloud-hosted ledgers, and all the way to fully public blockchains true to the Daml Ledger Model.

[1] https://de.wikipedia.org/wiki/Grundbuch#/media/Datei:BadFredeburg-Gerichtsmuseum4-Asio.JPG

[2] https://en.wikipedia.org/wiki/The_DAO_(organization)

[3] https://cryptolux.org/images/d/d9/Zcash.pdf

[4] http://www.europarl.europa.eu/RegData/etudes/STUD/2019/634445/EPRS_STU(2019)634445_EN.pdf

[5] https://www.gartner.com/en/newsroom/press-releases/2019-07-03-gartner-predicts-90--of-current-enterprise-blockchain

Release of Daml SDK 0.13.42

JSON API – Experimental

  • Rename argument in active contract to payload. See issue #3826.
  • Change variant JSON encoding. The new format is { tag: data-constructor, value: argument }. For example, if we have data Foo = Bar Int | Baz, these are all valid JSON encodings for values of type Foo:
    • {"tag": "Bar", "value": 42}
    • {"tag": "Baz", "value": {}}
    See issue #3622.
  • Fix /contracts/lookup find by contract key.
  • Fix /command/exercise to support any LF type as a choice argument. See issue #3390.

Daml Compiler

  • Move more types from daml-stdlib to standalone LF packages. The module names for the types have also changed slightly. This only matters over the Ledger API when you specify the module name explicitly. In Daml you should continue to use the existing module names.
    • The types from DA.Semigroup are now in a separate package under DA.Semigroup.Types
    • The types from DA.Monoid are now in a separate package under DA.Monoid.Types
    • The types from DA.Time are now in a separate package under DA.Time.Types
    • The types from DA.Validation are now in a separate package under DA.Validation.Types
    • The types from DA.Logic are now in a separate package under DA.Logic.Types
    • The types from DA.Date are now in a separate package under DA.Date.Types.
    • The Down type from DA.Internal.Prelude is now in a separate package under DA.Internal.Down.

Daml SDK

  • daml damlc docs now accepts a --exclude-instances option to exclude unwanted instance docs by class name.

Daml-on-X Server

  • The Ledger API server now binds to localhost by default instead of the public interface, for security reasons.

Daml Assistant

  • Bash completions for the Daml assistant are now available via daml install. These will be installed automatically on Linux and Mac. If you use bash and have bash completions installed, these bash completions let you use the tab key to autocomplete many Daml Assistant commands, such as daml install and daml version.
  • Zsh completions for the Daml Assistant are now installed as part of daml install. To activate them you need to add ~/.daml/zsh to your $fpath, e.g., by adding fpath=(~/.daml/zsh $fpath) to the beginning of your ~/.zshrc before you call compinit.

Daml Script – Experimental

  • Allow running Daml scripts as test cases. Executing daml test-script --dar mydar.dar will execute all definitions matching the type Script a as test cases. See issue #3687.

Reference v2

  • On an exception, shut down everything and crash. Previously, the server would stay in a half-running state.
