Lawson: LegalClerk for Lawyers
An AI-powered tool designed to revolutionize the way legal professionals interact with court cases by providing concise summaries and interactive analysis.


Introduction

Imagine if lawyers could grasp the essence of complex court cases in mere seconds. This idea hit close to home for our team when we learned about the struggles of one teammate's sister, who is currently studying law. She described the daunting challenge of reading through long judgments while preparing for cases. Recognizing how widespread this problem is, we set out to build a solution that could ease the burden on lawyers and law students alike. Thus, Lawson was born.

Lawson is an AI-powered tool designed to revolutionize the way legal professionals interact with court cases. By leveraging advanced natural language processing and machine learning algorithms, Lawson can analyze court documents and provide concise, relevant summaries in a matter of seconds. Our goal is to streamline the process of legal research, allowing lawyers to focus more on their strategic and client-facing responsibilities rather than getting slowed down by lengthy texts.

The Hashnode hackathon was the first stop on this journey, giving us a fruitful brainstorming session on using technology to solve a practical problem. Our team set out to design a tool that would save time and improve the accuracy and efficiency of legal research, combining our skills in frontend development, backend development, and AI with legal guidance from our teammate's sister.

Please read the Vercel function limitation (see Future Plans and Limitations below) before visiting the live website.

Here's a demo legal case PDF for trying out Lawson: Download

Problem Statement: Defining the Pain Point

Time-Consuming Research

Legal professionals and law students spend countless hours reading through lengthy court judgments. These documents are often complex, dense, and filled with legal jargon, making it difficult to quickly extract the most pertinent information. This exhaustive process eats into valuable time that could be better spent on case strategy, client interaction, and other critical tasks. The time spent on research not only delays legal proceedings but also increases costs for clients.

Information Overload

Court cases and legal documents are extensive and contain vast amounts of information. Lawyers need to sift through numerous pages to find relevant details, which can be overwhelming and lead to cognitive overload. The sheer volume of information increases the risk of missing crucial details that could impact the outcome of a case. This can lead to mistakes, misinterpretations, and ultimately, unfavorable results for clients.

Tech Stack 🛠

Our technology stack includes:

  1. React - Frontend user interface
  2. Next.js - React framework with SSR
  3. TypeScript - Type-safe JavaScript
  4. Python - Backend processing
  5. FastAPI - Fast web framework for APIs
  6. Langchain - LLM application framework
  7. Pinecone - Vector database
  8. Prisma - Database ORM
  9. PostHog - Product analytics
  10. Vercel - Deployment platform

Development Process: The Journey

Our development journey was quite an experience: we had a lot of fun building the project and learned plenty of new things along the way. Working with unfamiliar tools also landed us in some tricky situations, but we are developers, and we found ways to solve those issues in our own style. Some of the problems we faced:

Token Limit of the AI Model for Large Judgments

Challenge: One of the significant hurdles we faced was the token limit of our AI model. Legal judgments can be extremely lengthy, often exceeding the token capacity of standard AI models. This limitation made it difficult to process and analyze entire documents effectively.

Solution: To address this issue, we developed a solution using k-means clustering. By grouping the judgment text into smaller, more manageable clusters of related chunks, we were able to process each cluster independently, applying different types of prompts to extract the essential information from each one. This approach allowed us to maintain the integrity of the analysis while staying within the token limits of the AI model.
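To make the clustering step concrete, here is a minimal sketch of the idea, assuming the judgment has already been split into chunks and each chunk has an embedding vector. The helper name and the cluster count are illustrative, not Lawson's exact code: the chunk embeddings are clustered, and the chunk closest to each cluster centre stands in as a representative sample for summarization.

```python
# Illustrative sketch of selecting representative chunks via k-means.
# Assumes `embeddings` is an (n_chunks, dim) array of chunk embeddings.
import numpy as np
from sklearn.cluster import KMeans

def pick_representative_chunks(chunks: list[str], embeddings: np.ndarray, k: int = 8) -> list[str]:
    """Cluster chunk embeddings and return one representative chunk per cluster."""
    k = min(k, len(chunks))  # never ask for more clusters than there are chunks
    kmeans = KMeans(n_clusters=k, random_state=42).fit(embeddings)

    representatives = []
    for center in kmeans.cluster_centers_:
        # The chunk whose embedding lies closest to the centroid represents the cluster.
        closest = int(np.argmin(np.linalg.norm(embeddings - center, axis=1)))
        representatives.append(closest)

    # Keep document order so the combined summary reads chronologically.
    return [chunks[i] for i in sorted(set(representatives))]

# Each representative chunk is then summarized with a prompt suited to its role
# (facts, arguments, holding, ...) and the partial summaries are merged.
```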

Timing Issues in Creating Embeddings

Challenge: Creating embeddings for large volumes of text data was another time-consuming process that posed a challenge. Embeddings are crucial for understanding the context and semantics of the text, but generating them efficiently was a bottleneck in our development workflow.

Solution: We tackled this problem by implementing multithreading. Lawson starts with an input document, typically a legal judgment, which is divided into smaller chunks with text and vector representations. These chunks are batched based on the number of available API keys. Each batch gets a unique API key, and threads are spawned to parallelize embedding creation. The embedding function generates embeddings for each chunk, and the results are formatted and added to a shared resource vector pool. Finally, all vectors are added to a Pinecone index for efficient retrieval. This approach allows Lawson to process large documents efficiently and within API limits.
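The sketch below illustrates the batching and threading pattern described above. The key list, the embedding model, and the helper names are placeholders rather than Lawson's actual identifiers; the embedding call is shown with Langchain's `OpenAIEmbeddings` purely as an example of a per-key client.

```python
# Illustrative sketch of parallel embedding creation with one API key per thread.
from concurrent.futures import ThreadPoolExecutor
from langchain_openai import OpenAIEmbeddings  # import path varies across Langchain versions

API_KEYS = ["key-1", "key-2", "key-3"]  # placeholder keys, one per worker thread

def embed_batch(chunks: list[str], api_key: str) -> list[tuple[str, list[float]]]:
    # Hypothetical embedding call: one client per API key so each thread stays
    # within that key's rate limits. Lawson's actual provider/model is not specified here.
    embedder = OpenAIEmbeddings(openai_api_key=api_key)
    return list(zip(chunks, embedder.embed_documents(chunks)))

def embed_document(chunks: list[str]) -> list[tuple[str, list[float]]]:
    # One batch per available API key; chunks are dealt out round-robin.
    n = len(API_KEYS)
    batches = [chunks[i::n] for i in range(n)]

    vectors: list[tuple[str, list[float]]] = []
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(embed_batch, batch, key)
                   for batch, key in zip(batches, API_KEYS)]
        for future in futures:
            vectors.extend(future.result())  # shared vector pool

    # The (text, vector) pairs are then formatted and upserted into a Pinecone
    # index for retrieval; the exact upsert call depends on the client version.
    return vectors
```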

Utilizing Next.js 14 Server Architecture

Challenge: We decided to use the new Next.js 14 server architecture for the first time. Understanding and implementing server-side rendering (SSR) effectively was a significant challenge, particularly given the tight timeframe of the hackathon.

Solution: By server-side rendering everything except client components, we managed to reduce the amount of JavaScript sent to the client. This optimization not only improved the performance of our application but also provided a smoother user experience. The new Next.js 14 architecture allowed us to leverage modern web development practices, ensuring our solution was both efficient and scalable.

How Lawson Works 🤔

Lawson operates in a simple and user-friendly manner designed to make legal research more efficient for professionals. Here's a detailed look at the process:

  1. Upload Document: The user starts by uploading a document containing the judgment they need to analyze. This document could be a lengthy legal judgment, often dozens of pages long, which would typically require significant time and effort to read and comprehend.

  2. Document Processing: Once the document is uploaded, Lawson immediately begins processing it. This involves breaking the document down into smaller, manageable chunks, each containing text and its vector representation. This step ensures that the document fits within the token limits of AI models, making the processing more efficient and manageable.

  3. Generating Summary: After the document is processed, Lawson generates a concise summary of the judgment. This summary provides the key points and essential information from the document, allowing the user to quickly grasp the main findings and implications of the case without having to read through the entire document.

  4. Interactive Chat for More Details: If the user needs more detailed information beyond the summary, they can engage in an interactive chat with Lawson. This feature allows users to ask specific questions about the judgment and receive detailed answers, making it easy to delve deeper into particular aspects of the case. The chat functionality is designed to be intuitive and responsive, providing users with the information they need in real time.

This straightforward workflow ensures that legal professionals can quickly and efficiently understand court cases, saving valuable time and effort. By combining document processing, summarization, and interactive chat features, Lawson provides a comprehensive tool for legal research and analysis.
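To make step 4 above concrete, here is a minimal sketch of a retrieval-backed chat, under the assumption that the judgment's chunks are already stored in a Pinecone index with their text kept in metadata. The index name, model choices, and metadata field are assumptions for illustration, not Lawson's actual configuration.

```python
# Minimal sketch of answering a question from the uploaded judgment.
from pinecone import Pinecone
from langchain_openai import OpenAIEmbeddings, ChatOpenAI

embedder = OpenAIEmbeddings()
llm = ChatOpenAI()
index = Pinecone(api_key="YOUR_PINECONE_KEY").Index("lawson-demo")  # hypothetical index name

def ask(question: str, top_k: int = 5) -> str:
    # Embed the question and pull the most similar judgment chunks from Pinecone.
    query_vector = embedder.embed_query(question)
    results = index.query(vector=query_vector, top_k=top_k, include_metadata=True)
    context = "\n\n".join(m.metadata["text"] for m in results.matches)

    # Answer strictly from the retrieved context so replies stay grounded in the judgment.
    prompt = (
        "Answer the question using only the excerpts from the court judgment below.\n\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return llm.invoke(prompt).content
```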

Features ✨

User-Friendly Interface

A clean, visually pleasing interface that is comfortable to use.

Instant Summary

Users receive summaries in three different formats and can also download the summary as a PDF.

Chat Section

Users can ask for details about a judgment in the chat section, copy the generated text, or use the speak-aloud function to listen to it. New documents can be uploaded by clicking the plus icon.

History Section

The history section neatly divides chat history into today, yesterday, and the previous 30 days. Users can delete documents easily, and any previously uploaded document can be reopened with a single click.

Future Plans and Limitations 🎯

While Lawson is a powerful tool designed to streamline legal research, there are some limitations we are currently addressing:

Vercel Function Duration Limit

Due to Vercel's function duration limit, users can only upload judgments up to 10 pages. Processing longer documents takes more than the allotted 60 seconds to create vectors, which impacts the usability of Lawson for more extensive cases.

To overcome this limitation, we recommend using Lawson locally, where there are no such restrictions. Running Lawson on your local machine allows you to process longer documents without the time constraints imposed by Vercel.

Setting up locally: https://law-son.vercel.app/setting-up-locally

Future Enhancement: One of the exciting features we plan to introduce is semantic search. This functionality will allow users to input case details and receive the top 5 judgments related to their case. By leveraging advanced natural language processing and machine learning algorithms, Lawson will provide more relevant and precise search results, making legal research even more efficient and accurate.
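As a rough sketch of how this planned feature could work (it is not implemented yet; the judgment-level index and metadata fields shown here are assumptions), the case details would be embedded and matched against a catalog of judgment embeddings:

```python
# Hypothetical sketch of the planned semantic search over a catalog of judgments.
from pinecone import Pinecone
from langchain_openai import OpenAIEmbeddings

embedder = OpenAIEmbeddings()
judgments = Pinecone(api_key="YOUR_PINECONE_KEY").Index("judgments-catalog")  # hypothetical index

def find_related_judgments(case_details: str, top_k: int = 5) -> list[dict]:
    """Return the top-k judgments most similar to the described case."""
    query_vector = embedder.embed_query(case_details)
    results = judgments.query(vector=query_vector, top_k=top_k, include_metadata=True)
    return [{"case": m.metadata.get("case_name"), "score": m.score} for m in results.matches]
```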

Thank You

We are committed to continuously improving Lawson based on feedback from the legal community. We built Lawson entirely on free resources and gave it our best, so there may still be a few rough edges. We encourage users to share their experiences and suggestions to help us refine and enhance the tool.

Team Members