Chinsay - Smart Clauses (Case Study)

During my time at Chinsay I was responsible for the in-house document editor, enhancing it and adding new features and functionality as required by clients. This case study follows the smart clause feature, which I led and helped move through to completion. It was a challenging project, but one I learnt a great deal from, and I found the process highly rewarding and interesting.

What is Chinsay and the Editor?

Chinsay is a freight digitisation platform operating a SaaS product that keeps the whole contracting process digital while following strong compliance measures. A key part of the business is digital contracts, which need to reflect the specific cargo and legal terms agreed. These can vary greatly from contract to contract, but there is often a common theme in the contract text, and this is where smart clauses come in.

Chinsay has a bespoke in-house document editor designed specifically for freight contracting. It takes a novel drag-and-drop approach to clauses, allowing blocks to be added and arranged quickly and conveniently. It is not without challenges, however: the editor is not fully WYSIWYG, although it looks very close, which has caused some confusion for clients.

The Problem

The idea of smart clauses is to allow a document to add anywhere from zero to many predetermined pieces of text (clauses) to complete the contract, based on wider workflow datapoints. A good example is a workflow where the contract value exceeds $5 million USD, which may require additional compliance and extra legal text; in that case the text would be added to the contract automatically rather than having to be inserted manually each time.
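
To make the idea concrete, here is a minimal sketch of how such a rule might be expressed. The shape and names (`SmartClauseRule`, `contractValueUsd`) are illustrative assumptions for this post, not the actual rule schema:

```typescript
// Illustrative sketch only: the rule shape and field names here are
// assumptions, not the platform's actual rule schema.
interface SmartClauseRule {
  ruleId: string;
  description: string;
  // Predicate over the workflow datapoints available at generation time.
  appliesTo: (datapoints: { contractValueUsd?: number }) => boolean;
  // Clause template IDs (UUIDs) to insert when the predicate holds.
  clauseIds: string[];
}

const highValueComplianceRule: SmartClauseRule = {
  ruleId: "rule-high-value-compliance",
  description: "Extra compliance wording for contracts over $5 million USD",
  appliesTo: (dp) => (dp.contractValueUsd ?? 0) > 5_000_000,
  clauseIds: ["clause-additional-compliance", "clause-extended-legal-text"],
};
```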

There was much appetite for this functionality, as it removes some of the repetitive steps and moves the contracting side towards more automation, increasing efficiency and reducing potential human error.

The Approach

As we wanted these rule-based elements to be as flexible as possible, we built the rule containers on top of the basic clauses so the editor would treat them similarly, reducing the complexity of supporting two very different models. We extracted the common functionality between the two into a base class and allowed the editor to accept both and handle them in a similar way. The rule containers are much simpler than the clauses, however, as they do not contain rich content, just a ruleId and placeholder content so they can be displayed visually in the templates. These containers are retained in the editor but hidden from the user, as they should not be rendered in the final document. They needed to be retained because they are what lets us evaluate the rules and find what is attached to each rule.
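
As a rough illustration of that shared base, the sketch below shows how a standard clause and a rule container might sit on a common abstraction. The class names and fields are assumptions for the example, not the editor's real model:

```typescript
// A minimal sketch of the shared base described above; the real editor
// model is richer, and the names here are illustrative.
abstract class DocumentBlock {
  constructor(public readonly id: string) {}
  // Whether the block should appear in the generated contract.
  abstract get renderInFinalDocument(): boolean;
}

// Standard clause: carries rich content and is rendered in the output.
class Clause extends DocumentBlock {
  constructor(id: string, public content: string) {
    super(id);
  }
  get renderInFinalDocument(): boolean {
    return true;
  }
}

// Rule container: no rich content, just a rule reference and a placeholder
// so it can be shown in templates. It stays in the document model (so the
// rules can still be evaluated) but is hidden from the final document.
class RuleContainer extends DocumentBlock {
  constructor(id: string, public readonly ruleId: string, public placeholder: string) {
    super(id);
  }
  get renderInFinalDocument(): boolean {
    return false;
  }
}
```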

As these sections all carried ruleIds, we would check each rule Id while the document was being generated ahead of editing and pass it through the rule engine. The engine would work out which rules were enforced; a rule could be very general, based only on the member, or very specific, affecting just one type of document in one type of workspace. Each rule would evaluate to an array of UUIDs (GUIDs), or nothing, and this evaluated list was returned to the frontend, which parsed it and created the clauses in much the same way as if a user had inserted them manually in that section of the page.
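
The flow could be sketched roughly as below, reusing the `RuleContainer` type from the previous snippet. `ruleEngine.evaluate` and `insertClauseFromTemplate` are placeholder names for the real services, not their actual APIs:

```typescript
// Sketch of the generation-time flow; the service shapes are assumptions.
async function applySmartClauses(
  blocks: DocumentBlock[],
  ruleEngine: { evaluate(ruleId: string, context: object): Promise<string[]> },
  insertClauseFromTemplate: (clauseTemplateId: string, afterBlockId: string) => void,
  workflowContext: object,
): Promise<void> {
  const containers = blocks.filter((b): b is RuleContainer => b instanceof RuleContainer);

  for (const container of containers) {
    // The engine returns zero or more clause template UUIDs for this rule.
    const clauseIds = await ruleEngine.evaluate(container.ruleId, workflowContext);

    // Create the clauses next to the hidden container, mirroring what happens
    // when a user inserts a clause manually at that position.
    for (const clauseTemplateId of clauseIds) {
      insertClauseFromTemplate(clauseTemplateId, container.id);
    }
  }
}
```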

What proved challenging at first was changing the document data model to accept these new types, as previously it held only one type, which added extra complexity. This was accomplished by creating a common base and using it to ensure both types would be compatible in all cases.

One of the largest challenges I faced on this project was ensuring that the clause data would always be displayed in the correct order. The number of new dynamic aspects made this rather challenging: not only did we have the existing static clauses that would always exist in the document, but the new dynamic clauses could evaluate to anywhere from zero to many clauses, and these needed to be slotted in correctly and added to the overall document order. This problem took a good amount of time, as there was a huge amount of logic to process. I took a TDD approach, slowly building out the ideas bit by bit to create an evaluation process that grew over time; as edge cases were hit and considered, the solid test coverage and stronger understanding of how the logic worked made it easy to stay on top of them.
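
A heavily simplified sketch of the ordering logic, together with the kind of small test the TDD approach was built up from, might look like this; the item shapes and function names are illustrative, not the real implementation:

```typescript
import assert from "node:assert";

// A simplified version of the ordering problem: static clauses keep their
// template positions, and each rule container expands in place into zero or
// more evaluated clauses. Names and shapes here are illustrative.
type OrderedItem =
  | { kind: "clause"; id: string }
  | { kind: "ruleContainer"; ruleId: string };

function buildFinalOrder(
  templateOrder: OrderedItem[],
  evaluated: Map<string, string[]>, // ruleId -> evaluated clause ids
): string[] {
  const result: string[] = [];
  for (const item of templateOrder) {
    if (item.kind === "clause") {
      result.push(item.id);
    } else {
      // Zero-to-many dynamic clauses slot in at the container's position;
      // a rule that evaluated to nothing simply contributes nothing.
      result.push(...(evaluated.get(item.ruleId) ?? []));
    }
  }
  return result;
}

// The kind of small, focused test the logic was built up from.
assert.deepStrictEqual(
  buildFinalOrder(
    [
      { kind: "clause", id: "c1" },
      { kind: "ruleContainer", ruleId: "r1" },
      { kind: "clause", id: "c2" },
    ],
    new Map([["r1", ["dyn1", "dyn2"]]]),
  ),
  ["c1", "dyn1", "dyn2", "c2"],
);
```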

Measuring Success

It is always important to understand the impact any feature or change has on the wider platform, and how users end up interacting with it. The only way to really evaluate and improve features is through real-world measures, and getting that data as soon as possible is always preferable. For smart clauses we added telemetry around a number of aspects, such as the number of evaluations and clause usage, to get a better understanding of how widely the feature was being used and how valuable it is.
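
As an example of the sort of data captured, a single evaluation event could be modelled along these lines; the event name and fields are assumptions for illustration rather than the real instrumentation:

```typescript
// Illustrative telemetry shape; event names and fields are assumptions.
interface SmartClauseTelemetryEvent {
  event: "smart_clause_rule_evaluated";
  ruleId: string;
  clauseCount: number; // how many clauses the rule produced
  workspaceId: string;
  timestamp: string;   // ISO 8601
}

function trackRuleEvaluation(
  send: (e: SmartClauseTelemetryEvent) => void,
  ruleId: string,
  clauseIds: string[],
  workspaceId: string,
): void {
  send({
    event: "smart_clause_rule_evaluated",
    ruleId,
    clauseCount: clauseIds.length,
    workspaceId,
    timestamp: new Date().toISOString(),
  });
}
```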

The telemetry let us see a huge uptick in use of the feature, which in many ways exceeded how quickly we expected it to be adopted. We also saw that, through the release notes mentioning the new feature, other clients gained additional interest, and as a result we ended up rolling it out to some of our other clients as well.

What could have been improved?

Even though the overall project was a success with a lot of positive outcomes, we learned a great deal and found a few areas that ended up lacking. Some of the approaches have proven a little more questionable in retrospect, such as the decision to evaluate the rules and pull in the clauses on the frontend. This tied a lot of logic to the frontend and slowed down document loading from the user's perspective. It also means that if we had wanted to make this multi-platform, the logic would have had to be repeated; although that was not a consideration at the time, it is still worth thinking about.

As this was an MVP we cut scope very heavily on the rule editor, which is an internal tool. We reasoned that the use cases for the rules would initially be fairly basic and grow over time, but this proved to be an incorrect assumption. Because there was heavy pent-up demand for the feature, the use cases quickly grew in complexity to the point that they rapidly outstripped the capabilities of the basic rules editor. This meant that creating new rules was challenging and required a lot of background context. At the time it may have been a fair assumption to cut back on internal tooling, but it has proven a potential bottleneck for adding new client rules.

Updates and Future Plans

As this was an MVP we cut back on scope and, as a result, did not include the ability for users to re-evaluate rules. This does not seem to have been a major issue for users, but it would have made the product a bit more flexible, although the functionality also raises questions about how far it should be taken. I believe it makes sense to implement it on an MVP basis too, starting with a basic way for the user to manually re-evaluate the rules by pressing a button. This could later be enhanced to check for changes in the wider datapoint scope of the workspace and automatically re-evaluate when datapoints have been updated. That of course adds a great deal of complexity, and considering we have seen little motivation from clients to even allow re-evaluation, it would probably be a step too far.
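
A manual re-evaluation could be as simple as the sketch below, which builds on the hypothetical `applySmartClauses` helper from earlier; every name here is an assumption, not an existing API:

```typescript
// Hypothetical sketch of a manual re-evaluation trigger; all names are
// assumptions and build on the earlier applySmartClauses sketch.
async function onReEvaluateClicked(
  doc: { blocks: DocumentBlock[]; removeDynamicClauses(): void },
  ruleEngine: { evaluate(ruleId: string, context: object): Promise<string[]> },
  insertClauseFromTemplate: (clauseTemplateId: string, afterBlockId: string) => void,
  latestWorkflowContext: object,
): Promise<void> {
  // Drop previously generated dynamic clauses so evaluation starts clean.
  doc.removeDynamicClauses();
  // Re-run the same evaluation path used at document generation time,
  // this time against the latest workflow datapoints.
  await applySmartClauses(doc.blocks, ruleEngine, insertClauseFromTemplate, latestWorkflowContext);
}
```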