Abstract: Current state-of-the-art approaches have used supervised techniques for answering basic queries in the legal domain using latent or hand-labeled features. However, these methods may not address explainability/interpretability of results. The present application provides systems and methods for generating legal structure instances for prior court case retrieval. Linguistic rules are applied on sentences present in court judgement documents stored in a corpus to identify evidence sentences, testimony sentences, non-evidence sentences, and non-testimony sentences as training data for training a weakly supervised sentence classifier. The trained classifier classifies the remaining sentences from the corpus to further identify evidence sentences, testimony sentences, non-evidence sentences, and non-testimony sentences. Observation frames are identified from observation verbs present in the sentences identified by the rules and the classifier. Evidence frames are identified using the observation frames for generating legal structure instances. Relevant judgement documents are retrieved for an input query using the legal structure instances based on a computed similarity score.
Claims:
1. A processor implemented method, comprising:
obtaining, via one or more hardware processors, a corpus comprising a plurality of court judgement documents specific to one or more prior court cases (202);
applying, via the one or more hardware processors, a first set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a first set of evidence sentences corresponding to the one or more prior court cases (204);
applying, via the one or more hardware processors, a second set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify at least one of a first set of testimony sentences and a first set of non-testimony sentences corresponding to the one or more prior court cases (206);
generating, via the one or more hardware processors, training data using the first set of identified evidence sentences, the first set of identified testimony sentences, and the first set of non-testimony sentences (208);
training, via the one or more hardware processors, a supervised sentence classifier, using the generated training data (210);
classifying, by using the trained supervised sentence classifier executed via the one or more hardware processors, remaining sentences from the one or more court judgements comprised in the corpus as at least one of (i) an evidence sentence, (ii) a testimony sentence, (iii) a non-evidence sentence, (iv) a non-testimony sentence, and (v) an evidence and testimony sentence to obtain at least a second set of evidence sentences, a second set of testimony sentences, a first set of non-evidence sentences, a first set of non-testimony sentences, and a first set of evidence and testimony sentences (212);
identifying, by a semantic role labeling technique via the one or more hardware processors, one or more observation frames using one or more observation verbs comprised in at least one of the first set of evidence sentences, the first set of evidence and testimony sentences, the second set of evidence sentences, the first set of testimony sentences, and the second set of testimony sentences (214);
identifying, via the one or more hardware processors, one or more evidence frames for each of the one or more observation frames, using the semantic role labeling technique (216); and
generating, via the one or more hardware processors, one or more evidence structure instances and one or more testimony structure instances based on the one or more identified evidence frames and the one or more identified observation frames, wherein each of the one or more evidence structure instances and the one or more testimony structure instances corresponds to at least one court judgement document from the plurality of court judgement documents (218).
2. The processor implemented method of claim 1, wherein the first set of linguistic rules comprises (i) a sentence containing at least one evidence object, (ii) a sentence containing at least one action verb or at least one observation verb, and (iii) in a dependency tree of a sentence, (a) the at least one evidence object occurring within a subtree rooted at an action verb or an observation verb or (b) an absence of any other verb occurring between the at least one action verb and the at least one evidence object.
3. The processor implemented method of claim 1, further comprising identifying, via the one or more hardware processors, one or more evidence objects for each of the one or more observation frames, wherein the one or more evidence objects are identified by applying an entity identification technique on the first set of evidence sentences and the second set of evidence sentences to obtain one or more entities and annotating the one or more entities in the one or more evidence frames to obtain one or more annotated entities, and wherein the one or more annotated entities serve as the one or more evidence objects being identified.
4. The processor implemented method of claim 1, wherein one or more observation frames not containing a corresponding evidence frame are identified as stand-alone evidence frames.
5. The processor implemented method of claim 1, further comprising:
obtaining an input query;
representing the obtained input query as an evidence structure instance;
computing a similarity score between (i) the evidence structure instance and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents to obtain a set of similarity scores;
obtaining an intermediate similarity score from the set of similarity scores;
generating a final similarity score based on the intermediate similarity score and a pre-defined sentence-based similarity score; and
retrieving, for the input query, one or more relevant court judgement documents from the plurality of court judgement documents based on the final similarity score.
6. The processor implemented method of claim 5, wherein the similarity score is computed based on one or more phrase embeddings of corresponding arguments comprised in (i) the evidence structure instance associated with the input query and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents.
7. The processor implemented method of claim 1, further comprising:
applying, via the one or more hardware processors, a third set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a set of legal events sentences;
identifying one or more evidence frames from the set of legal events sentences, wherein each of the one or more evidence frames comprises at least one legal event; and
generating one or more legal event structure instances based on the one or more evidence frames, wherein each legal event structure instance comprises at least one corresponding evidence frame.
8. A system (100), comprising:
a memory (102) storing instructions;
one or more communication interfaces (106); and
one or more hardware processors (104) coupled to the memory (102) via the one or more communication interfaces (106), wherein the one or more hardware processors (104) are configured by the instructions to:
obtain a corpus comprising a plurality of court judgement documents specific to one or more prior court cases;
apply a first set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a first set of evidence sentences corresponding to the one or more prior court cases;
apply a second set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify at least one of a first set of testimony sentences and a first set of non-testimony sentences corresponding to the one or more prior court cases;
generate training data using the first set of identified evidence sentences, the first set of identified testimony sentences, and the first set of non-testimony sentences;
train a supervised sentence classifier, using the generated training data;
classify, by using the trained supervised sentence classifier, remaining sentences from the one or more court judgements comprised in the corpus as at least one of (i) an evidence sentence, (ii) a testimony sentence, (iii) a non-evidence sentence, (iv) a non-testimony sentence, and (v) an evidence and testimony sentence to obtain at least a second set of evidence sentences, a second set of testimony sentences, a first set of non-evidence sentences, a first set of non-testimony sentences, and a first set of evidence and testimony sentences;
identify, by a semantic role labeling technique, one or more observation frames using one or more observation verbs comprised in at least one of the first set of evidence sentences, the first set of evidence and testimony sentences, the second set of evidence sentences, the first set of testimony sentences, and the second set of testimony sentences;
identify one or more evidence frames for each of the one or more observation frames, using the semantic role labeling technique; and
generate one or more evidence structure instances and one or more testimony structure instances based on the one or more identified evidence frames and the one or more identified observation frames, wherein each of the one or more evidence structure instances and the one or more testimony structure instances corresponds to at least one court judgement document from the plurality of court judgement documents.
9. The system of claim 8, wherein the first set of linguistic rules comprises (i) a sentence containing at least one evidence object, (ii) a sentence containing at least one action verb or at least one observation verb, and (iii) in a dependency tree of a sentence, (a) the at least one evidence object occurring within a subtree rooted at an action verb or an observation verb or (b) an absence of any other verb occurring between the at least one action verb and the at least one evidence object.
10. The system of claim 8, wherein the one or more hardware processors are further configured to identify one or more evidence objects for each of the one or more observation frames, wherein the one or more evidence objects are identified by applying an entity identification technique on the first set of evidence sentences and the second set of evidence sentences to obtain one or more entities and annotating the one or more entities in the one or more evidence frames to obtain one or more annotated entities, and wherein the one or more annotated entities serve as the one or more evidence objects being identified.
11. The system of claim 8, wherein one or more observation frames not containing a corresponding evidence frame are identified as stand-alone evidence frames.
12. The system of claim 8, wherein the one or more hardware processors are further configured to:
obtain an input query;
represent the obtained input query as an evidence structure instance;
compute a similarity score between (i) the evidence structure instance and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents to obtain a set of similarity scores;
obtain an intermediate similarity score from the set of similarity scores;
generate a final similarity score based on the intermediate similarity score and a pre-defined sentence-based similarity score; and
retrieve, for the input query, one or more relevant court judgement documents from the plurality of court judgement documents based on the final similarity score.
13. The system of claim 12, wherein the similarity score is computed based on one or more phrase embeddings of corresponding arguments comprised in (i) the evidence structure instance associated with the input query and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents.
14. The system of claim 8, wherein the one or more hardware processors are further configured to:
apply a third set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a set of legal events sentences;
identify one or more evidence frames from the set of legal events sentences, wherein each of the one or more evidence frames comprises at least one legal event; and
generate one or more legal event structure instances based on the one or more evidence frames, wherein each legal event structure instance comprises at least one corresponding evidence frame.
Description:
FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
SYSTEM AND METHOD FOR GENERATING LEGAL STRUCTURE INSTANCES FOR PRIOR COURT CASE RETRIEVAL
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
TECHNICAL FIELD
[001] The disclosure herein generally relates to generation of legal structure instances, and, more particularly, to a system and method for generating legal structure instances for prior court case retrieval.
BACKGROUND
[002] Evidence, typically based on documents (e.g., letters, receipts, reports, agreements, affidavits) and physical objects (e.g., knives, guns, photos, phone call data records), is often used by lawyers in their arguments during a court case. The observations made through the evidence may have a significant effect on the judges' final decision. In order to develop a deeper understanding of past court cases, it is valuable to identify the various pieces of evidence discussed in these cases and the observations which are made about them or through them. Such information about evidence has several applications such as understanding and representing legal arguments, determining strengths and weaknesses of those arguments, identifying relevant past cases in which similar evidence was discussed, etc. Prior research works have proposed approaches considering only limited information of the court cases and/or judgements, but these approaches have led to loss of key information regarding the evidence mentioned in a case.
SUMMARY
[003] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one aspect, there is provided a processor implemented method for generating legal structure instances for prior court case retrieval. The method comprises obtaining, via one or more hardware processors, a corpus comprising a plurality of court judgement documents specific to one or more prior court cases; applying, via the one or more hardware processors, a first set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a first set of evidence sentences corresponding to the one or more prior court cases; applying, via the one or more hardware processors, a second set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify at least one of a first set of testimony sentences and a first set of non-testimony sentences corresponding to the one or more prior court cases; generating, via the one or more hardware processors, training data using the first set of identified evidence sentences, the first set of identified testimony sentences, and the first set of non-testimony sentences; training, via the one or more hardware processors, a supervised sentence classifier, using the generated training data; classifying, by using the trained supervised sentence classifier executed via the one or more hardware processors, remaining sentences from the plurality of court judgement documents comprised in the corpus as at least one of (i) an evidence sentence, (ii) a testimony sentence, (iii) a non-evidence sentence, (iv) a non-testimony sentence, and (v) an evidence and testimony sentence to obtain at least a second set of evidence sentences, a second set of testimony sentences, a first set of non-evidence sentences, a first set of non-testimony sentences, and a first set of evidence and testimony sentences; identifying, by a semantic role labeling technique via the one or more hardware processors, one or more observation frames using one or more observation verbs comprised in at least one of the first set of evidence sentences, the first set of evidence and testimony sentences, the second set of evidence sentences, the first set of testimony sentences, and the second set of testimony sentences; identifying, via the one or more hardware processors, one or more evidence frames for each of the one or more observation frames, using the semantic role labeling technique; and generating, via the one or more hardware processors, one or more evidence structure instances and one or more testimony structure instances based on the one or more identified evidence frames and the one or more identified observation frames, wherein each of the one or more evidence structure instances and the one or more testimony structure instances corresponds to at least one court judgement document from the plurality of court judgement documents.
[004] In an embodiment, the first set of linguistic rules comprises (i) a sentence containing at least one evidence object, (ii) a sentence containing at least one action verb or at least one observation verb, and (iii) in a dependency tree of a sentence, (a) the at least one evidence object occurring within a subtree rooted at an action verb or an observation verb or (b) an absence of any other verb occurring between the at least one action verb and the at least one evidence object.
[005] In an embodiment, the method further comprises identifying, via the one or more hardware processors, one or more evidence objects for each of the one or more observation frames. The one or more evidence objects are identified by applying an entity identification technique on the first set of evidence sentences and the second set of evidence sentences to obtain one or more entities and annotating the one or more entities in the one or more evidence frames to obtain one or more annotated entities wherein the one or more annotated entities serve as the one or more evidence objects being identified.
[006] In an embodiment, the one or more observation frames not containing a corresponding evidence frame are identified as stand-alone evidence frames.
[007] In an embodiment, the method further comprises obtaining an input query; representing the obtained input query as an evidence structure instance; computing a similarity score between (i) the evidence structure instance and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents to obtain a set of similarity scores; obtaining an intermediate similarity score from the set of similarity scores; generating a final similarity score based on the intermediate similarity score and a pre-defined sentence-based similarity score; and retrieving, for the input query, one or more relevant court judgement documents from the plurality of court judgement documents based on the final similarity score.
[008] In an embodiment, the similarity score is computed based on one or more phrase embeddings of corresponding arguments comprised in (i) the evidence structure instance associated with the input query and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents.
[009] In an embodiment, the method further comprises applying, via the one or more hardware processors, a third set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a set of legal events sentences; identifying one or more evidence frames from the set of legal events sentences, wherein each of the one or more evidence frames comprises at least one legal event; and generating one or more legal event structure instances based on the one or more evidence frames, wherein each legal event structure instance comprises at least one corresponding evidence frame.
[010] In another aspect, there is provided a system for generating legal structure instances for prior court case retrieval. The system comprises: a memory storing instructions; one or more communication interfaces; and one or more hardware processors coupled to the memory via the one or more communication interfaces, wherein the one or more hardware processors are configured by the instructions to: obtain a corpus comprising a plurality of court judgement documents specific to one or more prior court cases; apply a first set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a first set of evidence sentences corresponding to the one or more prior court cases; apply a second set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify at least one of a first set of testimony sentences and a first set of non-testimony sentences corresponding to the one or more prior court cases; generate training data using the first set of identified evidence sentences, the first set of identified testimony sentences, and the first set of non-testimony sentences; train a supervised sentence classifier, using the generated training data; classify, by using the trained supervised sentence classifier, remaining sentences from the plurality of court judgement documents comprised in the corpus as at least one of (i) an evidence sentence, (ii) a testimony sentence, (iii) a non-evidence sentence, (iv) a non-testimony sentence, and (v) an evidence and testimony sentence to obtain at least a second set of evidence sentences, a second set of testimony sentences, a first set of non-evidence sentences, a first set of non-testimony sentences, and a first set of evidence and testimony sentences; identify, by a semantic role labeling technique, one or more observation frames using one or more observation verbs comprised in at least one of the first set of evidence sentences, the first set of evidence and testimony sentences, the second set of evidence sentences, the first set of testimony sentences, and the second set of testimony sentences; identify one or more evidence frames for each of the one or more observation frames, using the semantic role labeling technique; and generate one or more evidence structure instances and one or more testimony structure instances based on the one or more identified evidence frames and the one or more identified observation frames, wherein each of the one or more evidence structure instances and the one or more testimony structure instances corresponds to at least one court judgement document from the plurality of court judgement documents.
[011] In an embodiment, the first set of linguistic rules comprises (i) a sentence containing at least one evidence object, (ii) a sentence containing at least one action verb or at least one observation verb, and (iii) in a dependency tree of a sentence, (a) the at least one evidence object occurring within a subtree rooted at an action verb or an observation verb or (b) an absence of any other verb occurring between the at least one action verb and the at least one evidence object.
[012] In an embodiment, the one or more hardware processors are further configured to identify one or more evidence objects for each of the one or more observation frames. The one or more evidence objects are identified by applying an entity identification technique on the first set of evidence sentences and the second set of evidence sentences to obtain one or more entities and annotating the one or more entities in the one or more evidence frames to obtain one or more annotated entities wherein the one or more annotated entities serve as the one or more evidence objects being identified.
[013] In an embodiment, one or more observation frames not containing a corresponding evidence frame are identified as stand-alone evidence frames.
[014] In an embodiment, the one or more hardware processors are further configured to obtain an input query; represent the obtained input query as an evidence structure instance; compute a similarity score between (i) the evidence structure instance and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents to obtain a set of similarity scores; obtain an intermediate similarity score from the set of similarity scores; generate a final similarity score based on the intermediate similarity score and a pre-defined sentence-based similarity score; and retrieve, for the input query, one or more relevant court judgement documents from the plurality of court judgement documents based on the final similarity score.
[015] In an embodiment, the similarity score is computed based on one or more phrase embeddings of corresponding arguments comprised in (i) the evidence structure instance associated with the input query and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents.
[016] In an embodiment, the one or more hardware processors are further configured to apply a third set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a set of legal events sentences; identify one or more evidence frames from the set of legal events sentences, wherein each of the one or more evidence frames comprises at least one legal event; and generate one or more legal event structure instances based on the one or more evidence frames, wherein each legal event structure instance comprises at least one corresponding evidence frame.
[017] In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which when executed by one or more hardware processors cause a method for generating legal structure instances for prior court case retrieval. The method comprises obtaining, via the one or more hardware processors, a corpus comprising a plurality of court judgement documents specific to one or more prior court cases; applying, via the one or more hardware processors, a first set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a first set of evidence sentences corresponding to the one or more prior court cases; applying, via the one or more hardware processors, a second set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify at least one of a first set of testimony sentences and a first set of non-testimony sentences corresponding to the one or more prior court cases; generating, via the one or more hardware processors, training data using the first set of identified evidence sentences, the first set of identified testimony sentences, and the first set of non-testimony sentences; training, via the one or more hardware processors, a supervised sentence classifier, using the generated training data; classifying, by using the trained supervised sentence classifier executed via the one or more hardware processors, remaining sentences from the plurality of court judgement documents comprised in the corpus as at least one of (i) an evidence sentence, (ii) a testimony sentence, (iii) a non-evidence sentence, (iv) a non-testimony sentence, and (v) an evidence and testimony sentence to obtain at least a second set of evidence sentences, a second set of testimony sentences, a first set of non-evidence sentences, a first set of non-testimony sentences, and a first set of evidence and testimony sentences; identifying, by a semantic role labeling technique via the one or more hardware processors, one or more observation frames using one or more observation verbs comprised in at least one of the first set of evidence sentences, the first set of evidence and testimony sentences, the second set of evidence sentences, the first set of testimony sentences, and the second set of testimony sentences; identifying, via the one or more hardware processors, one or more evidence frames for each of the one or more observation frames, using the semantic role labeling technique; and generating, via the one or more hardware processors, one or more evidence structure instances and one or more testimony structure instances based on the one or more identified evidence frames and the one or more identified observation frames, wherein each of the one or more evidence structure instances and the one or more testimony structure instances corresponds to at least one court judgement document from the plurality of court judgement documents.
[018] In an embodiment, the first set of linguistic rules comprises (i) a sentence containing at least one evidence object, (ii) a sentence containing at least one action verb or at least one observation verb, and (iii) in a dependency tree of a sentence, (a) the at least one evidence object occurring within a subtree rooted at an action verb or an observation verb or (b) an absence of any other verb occurring between the at least one action verb and the at least one evidence object.
[019] In an embodiment, the method further comprises identifying, via the one or more hardware processors, one or more evidence objects for each of the one or more observation frames. The one or more evidence objects are identified by applying an entity identification technique on the first set of evidence sentences and the second set of evidence sentences to obtain one or more entities and annotating the one or more entities in the one or more evidence frames to obtain one or more annotated entities wherein the one or more annotated entities serve as the one or more evidence objects being identified.
[020] In an embodiment, the one or more observation frames not containing a corresponding evidence frame are identified as stand-alone evidence frames.
[021] In an embodiment, the method further comprises obtaining an input query; representing the obtained input query as an evidence structure instance; computing a similarity score between (i) the evidence structure instance and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents to obtain a set of similarity scores; obtaining an intermediate similarity score from the set of similarity scores; generating a final similarity score based on the intermediate similarity score and a pre-defined sentence-based similarity score; and retrieving, for the input query, one or more relevant court judgement documents from the plurality of court judgement documents based on the final similarity score.
[022] In an embodiment, the similarity score is computed based on one or more phrase embeddings of corresponding arguments comprised in (i) the evidence structure instance associated with the input query and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents.
[023] In an embodiment, the method further comprises applying, via the one or more hardware processors, a third set of linguistic rules on one or more sentences comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a set of legal events sentences; identifying one or more evidence frames from the set of legal events sentences, wherein each of the one or more evidence frames comprises at least one legal event; and generating one or more legal event structure instances based on the one or more evidence frames, wherein each legal event structure instance comprises at least one corresponding evidence frame.
[024] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[025] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[026] FIG. 1 illustrates an exemplary block diagram of a system for generating legal structure instances for prior court case retrieval, in accordance with an embodiment of the present disclosure.
[027] FIGS. 2A and 2B illustrate an exemplary flow diagram of a method for generating legal structure instances for prior court case retrieval using the system of FIG. 1, in accordance with an embodiment of the present disclosure.
[028] FIG. 3 depicts an architecture of a bidirectional Long Short-Term Memory (BiLSTM)-based multi-label sentence classifier as implemented by the system of FIG. 1 for identification of various sentences from a corpus containing a plurality of court judgement documents specific to one or more prior court cases, in accordance with an embodiment of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
[029] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
[030] Evidence, typically based on documents (e.g., letters, receipts, reports, agreements, affidavits) and physical objects (e.g., knives, guns, photos, phone call data records), is often used by lawyers in their arguments during a court case. Observations derived from the evidence may have a significant effect on the judges' final decision. For a deeper understanding of past court cases, it is important to identify the various pieces of evidence discussed in these cases and the observations derived from them. Such information about evidence has several applications such as understanding and representing legal arguments, determining strengths and weaknesses of those arguments, identifying relevant past cases in which similar evidence was discussed, etc. Prior research works have proposed approaches considering only limited information of the court cases and/or judgements, but these approaches have led to loss of key information regarding the evidence mentioned in a case.
[031] In the present disclosure, embodiments provide a system and method that implement Natural Language Processing (NLP) based techniques for extracting information regarding evidence, witnesses, and other legal events mentioned in court judgement documents. More specifically, the present disclosure provides a method to represent this information in a rich semantic structure, namely an Evidence Structure, defined as an Evidence Information Model. Along with evidence, the present disclosure also identifies and represents witness testimonies and other legal events using the same Information Model (e.g., the Evidence Information Model). Initially, a two-step approach is discussed for identifying evidence and testimony sentences. In the first step, linguistic rules are used to determine whether a sentence contains any evidence or testimony information. Here, rules proposed in the conventional research work of Ghosh et al. (refer 'K. Ghosh, S. Pawar, G. Palshikar, P. Bhattacharyya, V. Varma, Retrieval of prior court cases using witness testimonies, JURIX (2020).') are used for identification of witness testimonies, and new rules are designed by the system for identification of evidence sentences. In the second step, a weakly supervised sentence classifier is trained whose training data is automatically created using the sentences identified by the linguistic rules. It is a multi-label classifier which predicts whether a sentence contains evidence, a witness testimony, legal events, or all three. Once all the evidence sentences, testimony sentences, and legal events sentences are identified from the corpus of court judgements, the system and method further implement a Semantic Role Labelling (SRL) based technique to automatically instantiate evidence structures, testimony structures, and legal events structures for these sentences.
[032] To demonstrate the effectiveness of the evidence structure, testimony structure, and legal event structure, the present disclosure discusses their use in the prior case retrieval application. In particular, the system and method implement a matching algorithm for computing semantic similarity between a query and a sentence in a court judgement document. This algorithm makes use of the evidence structure, testimony structure, legal event structure, or combinations thereof, in which both the query and the sentence are represented, resulting in a semantically sound similarity score between them.
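By way of illustration only, the following is a minimal Python sketch of the structure-level matching idea: the query and a candidate sentence are each represented as a set of SRL role/argument pairs, and their similarity is aggregated over the roles they share. The embedding function, the mean aggregation, and all names herein are illustrative assumptions, not the exact matching algorithm of the present disclosure.

```python
# A minimal sketch, assuming a generic phrase-embedding function; the mean
# aggregation over shared roles is an assumption, not the disclosed algorithm.
import zlib
from typing import Dict
import numpy as np

def embed_phrase(phrase: str) -> np.ndarray:
    """Stand-in phrase embedding (deterministic pseudo-random vector); a real
    system would plug in pre-trained phrase/word embeddings here."""
    rng = np.random.default_rng(zlib.crc32(phrase.lower().encode()))
    return rng.standard_normal(128)

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def structure_similarity(query: Dict[str, str], candidate: Dict[str, str]) -> float:
    """Mean cosine similarity over SRL roles present in both structures."""
    shared = set(query) & set(candidate)
    if not shared:
        return 0.0
    return sum(cosine(embed_phrase(query[r]), embed_phrase(candidate[r]))
               for r in shared) / len(shared)

query = {"V": "dishonoured", "ARG1": "the cheque", "ARGM-CAU": "insufficient funds"}
candidate = {"V": "dishonored", "ARG0": "the bank", "ARG1": "the cheque"}
print(structure_similarity(query, candidate))
```

Comparing role-aligned arguments rather than whole sentences is what makes the resulting score semantically grounded: the agent of the query is matched against the agent of the candidate, the cause against the cause, and so on.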
[033] Previously, Ghosh et al. proposed to identify witness testimonies from court case documents and used them for retrieving relevant prior cases. It is observed that considering only witness testimonies leads to loss of key information regarding evidence and any other legal events mentioned in a case. Hence, the system and method of the present disclosure identify and use information about various evidence, testimonies, and any other legal events mentioned in the case documents, leading to much better prior case retrieval performance as demonstrated in the experiments section. Moreover, Ghosh et al. used a much more limited semantic structure to represent information regarding events mentioned in witness testimonies. This structure does not capture important semantic information such as whether the event is negated, the causes behind the event, the manner in which the event took place, and the like. The present disclosure creates a much richer semantic structure addressing these limitations and provides a suitable semantic matching algorithm for that structure.
[034] Referring now to the drawings, and more particularly to FIGS. 1 through 3, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[035] FIG. 1 illustrates an exemplary block diagram of a system 100 for generating legal structure instances for prior court case retrieval, in accordance with an embodiment of the present disclosure. The system 100 may also be referred to as a legal structure instances generator or a prior court case retrieval system, and these terms may be interchangeably used herein. In an embodiment, the system 100 includes one or more processors 104, communication interface device(s) or input/output (I/O) interface(s) 106, and one or more data storage devices or memory 102 operatively coupled to the one or more processors 104. The one or more processors 104 may be one or more software processing modules and/or hardware processors. In an embodiment, the hardware processors can be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processor(s) is configured to fetch and execute computer-readable instructions stored in the memory. In an embodiment, the system 100 can be implemented in a variety of computing systems, such as laptop computers, notebooks, hand-held devices such as mobile communication devices/smart phones, workstations, mainframe computers, servers, a network cloud, and the like.
[036] The I/O interface device(s) 106 can include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like and can facilitate multiple communications within a wide variety of networks N/W and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. In an embodiment, the I/O interface device(s) can include one or more ports for connecting a number of devices to one another or to another server.
[037] The memory 102 may include any computer-readable medium known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable programmable ROM, flash memories, hard disks, optical disks, and magnetic tapes. In an embodiment, a database 108 can be stored in the memory 102, wherein the database 108 may comprise, but is not limited to, a corpus of a plurality of court judgement documents specific to one or more prior court cases. In an embodiment, the memory 102 may store the various sets of linguistic rules, the evidence sentences being identified, the testimony sentences being identified, the non-testimony sentences being identified, the non-evidence sentences being identified, and the other legal events sentences being identified. The memory 102 further stores training data which is generated using the above stored information. Further, the memory stores various observation frames, evidence frames, observation verbs, evidence structure instances, testimony structure instances, legal events instances, similarity score(s), intermediate score(s), final similarity score(s), and the like. The memory 102 further comprises (or may further comprise) information pertaining to input(s)/output(s) of each step performed by the systems and methods of the present disclosure. In other words, input(s) fed at each step and output(s) generated at each step are comprised in the memory 102 and can be utilized in further processing and analysis.
[038] FIGS. 2A and 2B, with reference to FIG. 1, illustrate an exemplary flow diagram of a method for generating legal structure instances for prior court case retrieval using the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. In an embodiment, FIGS. 2A and 2B may be collectively referred to as FIG. 2 and interchangeably used herein. In an embodiment, the system 100 comprises one or more data storage devices or the memory 102 operatively coupled to the one or more hardware processors 104 and is configured to store instructions for execution of steps of the method by the one or more processors 104. The steps of the method of the present disclosure will now be explained with reference to the components of the system 100 as depicted in FIG. 1, and a classifier as depicted in FIG. 3. In an embodiment of the present disclosure, at step 202, the one or more hardware processors 104 obtain a corpus comprising a plurality of court judgement documents specific to one or more prior court cases. In an embodiment, the corpus herein refers to a dataset comprising 30,032 court judgement documents containing 4,111,091 sentences, with an average sentence length of 31 words and a standard deviation of 24 words. The dataset comprises Indian Supreme Court judgements from the years 1952 to 2012, available at http://liiofindia.org/in/cases/cen/INSC/.
[039] At step 204 of the present disclosure, the one or more hardware processors 104 apply a first set of linguistic rules on one or more sentences (e.g., say a first set of sentences) comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a first set of evidence sentences, and a first set of evidence and testimony sentences, corresponding to the one or more prior court cases. As there are no publicly annotated datasets for identification of evidence and testimony sentences, the present disclosure and its system and method rely on linguistic rules to identify these sentences with high precision. The first set of linguistic rules comprises (i) a sentence containing at least one evidence object, (ii) a sentence containing at least one action verb or at least one observation verb, and (iii) in a dependency tree of a sentence, (a) the at least one evidence object occurring within a subtree rooted at an action verb or an observation verb or (b) an absence of any other verb occurring between the at least one action verb and the at least one evidence object. The above first set of linguistic rules for identifying evidence sentences is described in Table 1 below by way of examples (an illustrative implementation sketch follows Table 1):
Table 1
Any sentence S should satisfy the following conditions in order to be identified as an Evidence Sentence:
E-R1: S should contain at least one Evidence Object as defined in Section 2.2. The list of words corresponding to evidence objects is created automatically by using the WordNet hypernym structure. A list is created of all words for which the following WordNet synsets are ancestors in the hypernym tree: artifact (e.g., gun, clothes), document (e.g., report, letter), substance (e.g., kerosene, blood). This list is looked up to identify evidence objects in a sentence.
E-R2: S should contain at least one action verb from a pre-defined set of verbs like tamper, kill, sustain, OR S should contain at least one observation verb from a pre-defined set of verbs like report, show, find. Both pre-defined sets of verbs are prepared by observing multiple example sentences containing evidence objects.
E-R3: In the dependency tree of S, the evidence object (identified by E-R1) should occur within the subtree rooted at the action or observation verb (identified by E-R2), AND there should not be any other verb (except auxiliary verbs like has, been, was, were, is) occurring between the two. This ensures that the evidence object always lies within the verb phrase headed by the action or observation verb.
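By way of illustration only, the following Python sketch shows how rules E-R1 through E-R3 could be approximated using NLTK WordNet (for the evidence-object list) and a spaCy dependency parse. The trigger-verb list is an illustrative subset, and the exact rule implementation of the present disclosure may differ.

```python
# A hedged sketch, assuming spaCy and NLTK WordNet are available; the
# trigger-verb set below is an illustrative subset, not the full pre-defined
# sets used by the system.
import spacy
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

nlp = spacy.load("en_core_web_sm")

def lemmas_under(synset_name: str) -> set:
    """E-R1 helper: all lemma names in the hyponym closure of a root synset."""
    root = wn.synset(synset_name)
    synsets = [root] + list(root.closure(lambda s: s.hyponyms()))
    return {l.name().lower().replace("_", " ") for s in synsets for l in s.lemmas()}

EVIDENCE_OBJECTS = (lemmas_under("artifact.n.01")
                    | lemmas_under("document.n.01")
                    | lemmas_under("substance.n.01"))
TRIGGER_VERBS = {"tamper", "kill", "sustain", "report", "show", "find"}
AUXILIARIES = {"has", "been", "was", "were", "is", "have", "had", "be"}

def is_evidence_sentence(sentence: str) -> bool:
    doc = nlp(sentence)
    for verb in doc:
        if verb.pos_ == "VERB" and verb.lemma_ in TRIGGER_VERBS:        # E-R2
            for tok in verb.subtree:                                     # E-R3(a)
                if tok.lemma_.lower() in EVIDENCE_OBJECTS:               # E-R1
                    lo, hi = sorted((verb.i, tok.i))
                    intervening = [t for t in doc[lo + 1:hi]
                                   if t.pos_ == "VERB" and t.lower_ not in AUXILIARIES]
                    if not intervening:                                  # E-R3(b)
                        return True
    return False

print(is_evidence_sentence("The post-mortem report showed several knife injuries."))
```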
[040] The first set of linguistic rules identified 62,310 sentences as evidence sentences from the corpus. As there is no annotated dataset, in order to estimate the precision of the linguistic rules, a random sampling technique was implemented by the system 100. The system 100 selected a set of 100 random sentences identified as evidence by the first set of linguistic rules, which were then verified by a human expert. It is to be understood by a person having ordinary skill in the art or person skilled in the art that the above examples of the first set of linguistic rules shall not be construed as limiting the scope of the present disclosure.
[041] Referring to the steps of FIG. 2, at step 206 of the present disclosure, the one or more hardware processors 104 apply a second set of linguistic rules on one or more sentences (e.g., say a second set of sentences) comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify at least one of a first set of testimony sentences and a first set of non-testimony sentences corresponding to the one or more prior court cases. The second set of linguistic rules may be applied on the same set of sentences (e.g., the first set of sentences) as the sentences identified during application of the first set of linguistic rules, in one example embodiment. The second set of linguistic rules may be applied on at least a subset of the sentences identified during application of the first set of linguistic rules, in another example embodiment. The sentences (e.g., the second set of sentences) identified for applying the second set of linguistic rules may be different from the set of sentences (e.g., the first set of sentences) identified during application of the first set of linguistic rules, in yet another embodiment of the present disclosure. In the present disclosure, the system 100 and method described herein used the linguistic rules proposed in Ghosh et al. for identifying testimony and non-testimony sentences. This second set of linguistic rules identified 36,473 sentences as testimony sentences (e.g., the first set of testimony sentences) and 14,234 sentences as non-testimony sentences (e.g., the first set of non-testimony sentences) from the same corpus. It is to be understood by a person having ordinary skill in the art or person skilled in the art that the steps 204 and 206 can either be performed in a sequential order or in parallel, in one embodiment of the present disclosure. The expression 'first set of linguistic rules' may also be referred to as 'evidence rules', and the two can be interchangeably used herein. Similarly, the expression 'second set of linguistic rules' may also be referred to as 'testimony rules', and the two can be interchangeably used herein.
[042] At step 208 of the present disclosure, the one or more hardware processors 104 generate training data using the first set of identified evidence sentences, the first set of identified testimony sentences, and the first set of non-testimony sentences. At step 210 of the present disclosure, the one or more hardware processors 104 train a supervised sentence classifier using the generated training data. In the experiments conducted in the present disclosure, it has been observed by the system and method described herein that although the linguistic rules identify evidence and testimony sentences with high precision, there is a possibility of missing some sentences which should have been identified as evidence or testimony. Hence, a supervised sentence classifier is trained by the system 100 to improve the overall recall of identification of evidence and testimony sentences. The classifier used is a bidirectional Long Short-Term Memory (BiLSTM)-based multi-label sentence classifier whose architecture is depicted in FIG. 3. More specifically, FIG. 3, with reference to FIGS. 1 through 2B, depicts an architecture of a bidirectional Long Short-Term Memory (BiLSTM)-based multi-label sentence classifier as implemented by the system 100 of FIG. 1 for identification of various sentences from the corpus containing the plurality of court judgement documents specific to the one or more prior court cases, in accordance with an embodiment of the present disclosure. This classifier is weakly supervised since its training data is automatically created using the sentences identified by the linguistic rules, as follows. The classifier has two outputs: (i) the first output predicts a binary label indicating whether the sentence contains evidence or not, and (ii) the second output predicts a binary label indicating whether the sentence contains testimony or not. The training data is provided by way of example below (an illustrative sketch of such a classifier follows the list):
1. 1,824 sentences are labelled as both evidence sentences and testimony sentences. These sentences are identified as evidence as well as testimony by both sets of linguistic rules (e.g., the first set and the second set of linguistic rules).
2. 60,486 sentences are labelled as evidence and non-testimony sentences. These sentences are identified as evidence by the rules but not as testimony sentences.
3. 34,649 sentences are labelled as non-evidence and testimony sentences. These sentences are identified as testimony by the rules but not as evidence.
4. 14,234 sentences are labelled as non-evidence and non-testimony sentences. These sentences are identified as non-testimony by the rules and are not identified as evidence.
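By way of illustration only, the following is a minimal PyTorch sketch of a BiLSTM-based multi-label sentence classifier with the two binary outputs described above. The max-pooling, dimensions, and vocabulary handling are illustrative assumptions; FIG. 3 governs the actual architecture.

```python
# A minimal PyTorch sketch, assuming tokenized/indexed inputs; pooling and
# dimensions are assumptions, as FIG. 3 governs the actual architecture.
import torch
import torch.nn as nn

class BiLSTMMultiLabel(nn.Module):
    """BiLSTM sentence encoder with two sigmoid heads: evidence?, testimony?"""
    def __init__(self, vocab_size: int, emb_dim: int = 100, hidden: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, 2)  # logits: [evidence, testimony]

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        states, _ = self.lstm(self.emb(token_ids))  # (batch, seq, 2*hidden)
        pooled, _ = states.max(dim=1)               # max-pool over the sequence
        return self.head(pooled)                    # raw logits for BCE loss

model = BiLSTMMultiLabel(vocab_size=50_000)
loss_fn = nn.BCEWithLogitsLoss()                    # independent binary labels
tokens = torch.randint(1, 50_000, (4, 31))          # 4 sentences of 31 tokens
labels = torch.tensor([[1., 1.], [1., 0.], [0., 1.], [0., 0.]])  # the 4 patterns above
loss_fn(model(tokens), labels).backward()
```

Using one binary cross-entropy loss per head (rather than a single softmax over classes) lets a sentence be evidence and testimony at the same time, matching label pattern 1 in the list above.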
[043] Once the classifier is trained, the system 100 uses it to classify all the remaining sentences in the corpus. These sentences are neither identified as evidence by the evidence rules nor as testimony/non-testimony by the testimony rules. In other words, at step 212 of the present disclosure, the one or more hardware processors 104 classify, by using the trained supervised sentence classifier (e.g., also referred to as the BiLSTM-based multi-label sentence classifier or classifier, and may be interchangeably used herein), remaining sentences from the plurality of court judgement documents comprised in the corpus as at least one of (i) an evidence sentence, (ii) a testimony sentence, (iii) a non-evidence sentence, (iv) a non-testimony sentence, and (v) an evidence and testimony sentence to obtain at least a second set of evidence sentences (e.g., also referred to as classifier identified evidence sentences), a second set of testimony sentences (classifier identified testimony sentences), a first set of non-evidence sentences, a first set of non-testimony sentences, and a first set of evidence and testimony sentences. More specifically, based on prediction confidence, the top 10,000 sentences classified as evidence and the top 5,000 sentences classified as testimony were selected (a minimal sketch of this confidence-based selection is provided after Table 2). Table 2 shows some examples of sentences identified as evidence by the classifier but not by the linguistic rules. To estimate the precision, the system 100 employed the random sampling technique wherein 100 sentences were randomly selected each from these high confidence evidence and testimony sentences, which were verified accordingly (e.g., via a human expert). A precision of 72% was observed for evidence sentences and 68% for testimony sentences. The precision of the sentence classifier is lower as compared to the rules because it is applied on a more difficult set of sentences for which the linguistic rules fail to identify any label. At the end of this two-step process (linguistic rules followed by the sentence classifier), a total of 112,401 sentences were identified either as evidence or as testimony.
Table 2
S1: Raju PW2 took Preeti into the bathroom at the instance of Accused No.1 who cut a length of wire of washing machine and used it to choke her to death, who however, survived.
S2: Raju PW2 took Satyabhamabai Sutar in the kitchen where the accused No.1 had already reached and was washing the blood-stained knife.
S3: Hemlata was also killed by inflicting knife injuries.
S4: Accused No.2 and Raju PW2 took the child into the room where Meerabai was lying dead in the pool of blood.
S5: Accused No.2 gave her blows by putting his knees on her stomach and when she was immobilised this way, the Accused No.1 gave her knife blows on her neck with the result she also died.
S6: Almirahs found in the flat were emptied to the extent the accused could put articles and other cash and valuables in the air-bag obtained from the said flat.
S7: Blood-stained clothes of Accused No.2 were put in the air-bag along with stolen articles.
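By way of illustration only, the following sketch shows the confidence-based selection described in paragraph [043]: unlabelled sentences are scored with the trained classifier and the top-k per label are retained. The batching interface and names are illustrative assumptions.

```python
# A hedged sketch of the confidence-based selection; `model` is the classifier
# from the previous sketch, and `batches` (an iterable of (sentence_ids,
# token tensor) pairs) is an assumed interface.
import torch

@torch.no_grad()
def top_k_by_confidence(model, batches, k_evidence=10_000, k_testimony=5_000):
    """Score unlabelled sentences and keep the top-k per label by sigmoid confidence."""
    scored = []
    for sent_ids, token_ids in batches:
        probs = torch.sigmoid(model(token_ids))     # (batch, 2)
        scored.extend(zip(sent_ids, probs[:, 0].tolist(), probs[:, 1].tolist()))
    evidence = [sid for sid, p_ev, _ in sorted(scored, key=lambda s: -s[1])[:k_evidence]]
    testimony = [sid for sid, _, p_te in sorted(scored, key=lambda s: -s[2])[:k_testimony]]
    return evidence, testimony
```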
[044] At step 214 of the present disclosure, the one or more hardware processors 104 identify, by using a semantic role labeling technique, one or more observation frames using one or more observation verbs comprised in at least one of the first set of evidence sentences, the first set of evidence and testimony sentences, the second set of evidence sentences, the first set of testimony sentences, and the second set of testimony sentences.
[045] At step 216 of the present disclosure, the one or more hardware processors 104 identify one or more evidence frames for each of the one or more observation frames, using the semantic role labeling technique.
[046] At step 218 of the present disclosure, the one or more hardware processors 104 generate one or more evidence structure instances and one or more testimony structure instances based on the one or more identified evidence frames and the one or more identified observation frames, wherein each of the one or more evidence structure instances and the one or more testimony structure instances corresponds to at least one court judgement document from the plurality of court judgement documents.
[047] The above steps 214 through 218 can be better understood by way of the following description. Prior to describing steps 214 through 218, the present disclosure briefly describes semantic role labelling and how it is used to define and generate the evidence structure instances, testimony structure instances, and legal event structure instances.
[048] Semantic Role Labelling (SRL) is a technique in Natural Language Processing that identifies verbs/predicates in a sentence, finds phrases connected to each predicate, and assigns an appropriate semantic role to each of these phrases. By doing so, SRL helps machines understand the roles of important words within a sentence. Following are some key semantic roles identified for a verb/predicate (often corresponding to an action or event) by SRL techniques:
ARG0: proto-agent or someone who performs the action denoted by the verb.
ARG1: proto-patient or someone on whom the action is performed.
ARGM-TMP: the time when the event took place.
ARGM-CAU: the cause of the action.
ARGM-PRP: the purpose of the action.
ARGM-LOC: the location where the event took place.
ARGM-MNR: the way the action took place.
ARGM-NEG: the word indicating that the action did not take place.
Consider the following example sentence:
On August 25, 1965, the bank dishonored the cheque (also referred to as a check) due to insufficient balance. The various semantic roles for the verb dishonored are annotated as follows:
[ARGM-TMP: On August 25, 1965], [ARG0: the bank] [V: dishonored] [ARG1: the cheque] [ARGM-CAU: due to insufficient balance].
[049] A sentence can have one or more predicates depending on its complexity. The system 100 uses a pre-trained model described by Shi et al. (e.g., refer ‘P. Shi, J. Lin, Simple BERT models for relation extraction and semantic role labeling, CoRR abs/1904.05255 (2019). URL: http://arxiv.org/abs/1904.05255. arXiv:1904.05255’) for labelling the identified sentences herein. The system 100 uses the predicates and corresponding arguments obtained from SRL to instantiate the evidence structure for the queries and candidate sentences.
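For illustration, such semantic frames may be obtained as sketched below with AllenNLP's publicly available BERT-based SRL model; this is a comparable stand-in for demonstration and not necessarily the exact model of Shi et al. used by the system 100.
from allennlp.predictors.predictor import Predictor

# Load a public BERT-based SRL model (a stand-in for the model of Shi et al.).
predictor = Predictor.from_path(
    "https://storage.googleapis.com/allennlp-public-models/"
    "structured-prediction-srl-bert.2020.12.15.tar.gz")
out = predictor.predict(
    sentence="On August 25, 1965, the bank dishonored the cheque due to insufficient balance.")
for frame in out["verbs"]:
    # Each frame contains one predicate and a bracketed description of its
    # arguments, e.g. [ARGM-TMP: On August 25, 1965] [ARG0: the bank] [V: dishonored] ...
    print(frame["verb"], "->", frame["description"])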
[050] An Evidence Information Model (EIM), also referred to as an evidence structure instance, represents every evidence sentence by giving information about one or more evidence objects (EO) in an evidence structure. The system 100 considers an evidence object to be one of the objects that are presented by the counsels to the judge along with the information and findings about the crime. It is thus a physical entity that can furnish some degree of support, contradiction, or opposition to some legal arguments. Some examples of evidence objects are as follows:
1. Documents (autopsy report, post-mortem report, affidavit, letter, cheque, agreement, petition, FIR, signature)
2. Material objects (gun, bullet, clothes, kerosene can)
3. Substances (poison, alcohol, kerosene)
In Indian court case documents, such evidence objects are also represented in the judgement document as Exhibit A, Ex. 2, Evidence 23, and so on.
[051] On these lines, an evidence sentence in the present disclosure refers to any sentence containing one or more evidence objects relevant to the current case, but which does not consist of (i) any witness testimony which is not verifiable, (ii) legal argumentation, (iii) a reference to some prior case or some Act or Section, or (iv) directions or instructions given by the court or judge. A formal definition of the evidence structure is now detailed herein. For every evidence present in an evidence sentence, the structure consists of an optional observation frame (OF) and a mandatory evidence frame (EF). The observation frame (OF) represents the source of the information and the agent disclosing it. This information is optional as it may or may not be explicitly stated in a sentence. It consists of the following arguments:
1. ObserverVerb or OV: The verb indicating the observation/discovery/disclosure (e.g., found, revealed, stated)
2. ObserverAgent or A0: The source disclosing the information (e.g., person, agency, authority)
3. EvidenceObject or EO: The Evidence Object in focus (e.g., post-mortem report, FIR, letter)
The evidence frame (EF) captures details about the evidence itself through the following arguments:
1. EvidenceVerb or EV: the main verb of any action, event or fact mentioned in a sentence or revealed by the Evidence Object (e.g., killed, forged, escaped)
2. Agent or A0: someone who initiates the action indicated by the EvidenceVerb (e.g., the accused Ram, ABC Pvt. Ltd.)
3. Patient or A1: someone who undergoes the action indicated by the EvidenceVerb (e.g., the deceased, a cheque of Rs. 3,200, his wife)
4. Location or LOC: location where the action took place (e.g., in the bedroom, at the bank, in Malaysia)
5. Time or TMP: timestamp of the action (e.g., about 12 hours back, in the morning, on Monday)
6. Cause or CAU: cause of the action (e.g., due to dowry, as a result of the CBI enquiry, out of sheer spite)
7. Manner or MNR: manner in which the action took place (e.g., as per the challan, fraudulently, willfully)
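A minimal data-structure sketch of the above frames is given below; the field names follow the argument labels defined above, while the class layout itself is an illustrative assumption rather than a prescribed implementation.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ObservationFrame:
    OV: Optional[str] = None   # ObserverVerb, e.g., "revealed"
    A0: Optional[str] = None   # ObserverAgent, e.g., "The report"
    EO: Optional[str] = None   # EvidenceObject, e.g., "post-mortem report"
    NEG: bool = False          # True if the observation is negated

@dataclass
class EvidenceFrame:
    EV: Optional[str] = None   # EvidenceVerb, e.g., "killed"
    args: Dict[str, str] = field(default_factory=dict)  # A0, A1, LOC, TMP, CAU, MNR
    NEG: bool = False          # True if the action is negated

@dataclass
class EvidenceStructureInstance:
    OF: Optional[ObservationFrame]  # optional observation frame
    EF: EvidenceFrame               # mandatory evidence frame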
[052] Table 3 shows examples of some evidence sentences along with the corresponding evidence structure instances. In some cases, the observation frame may be empty due to the absence of an ObserverVerb. In such cases, the EvidenceObject may be present as a part of any argument in the Evidence Frame, e.g., the cheque is present as A1 in the Evidence Frame of the first sentence in Table 3.
Table 3
The bank dishonoured the cheque due to insufficient balance.
• EF = [EV = dishonoured, A0 = The bank, A1 = the cheque, CAU = due to insufficient balance]
The report revealed that organo-phosphorus compound was found in the stomach, small intestines, large intestines, liver, spleen, kidney, and brain of the deceased.
• OF = [OV = revealed, EO = The report]
EF = [EV = found, LOC = in the stomach, small intestines, large intestines, liver, spleen, kidney and brain of the deceased]
The Magistrate found prima facie evidence that the appellant had fraudulently used in the Civil Suit forged cheque and committed him to the Sessions for trial.
• OF = [OV = found, A0 = The Magistrate, A1 = prima facie evidence]
EF = [EV = used, A0 = the appellant, A1 = forged cheque, LOC = in the Civil Suit]
The prosecution case was that though the rough cash book showed that on September 29, 1950, a sum of Rs. 21,133 was sent to the Treasury by appellant Gupta, the Treasury figures in the challan showed that on that day only a sum of Rs. 1,133 was deposited into the Treasury and thus a sum of Rs. 20,000 was dishonestly misappropriated.
• OF = [OV = showed, EO = the rough cash book]
EF = [EV = sent, A0 = by appellant Gupta, A1 = a sum of Rs. 21,133, A2 = to the Treasury, TMP = on September 29, 1950]
• OF = [OV = showed, EO = the Treasury figures in the challan]
EF = [EV = deposited, A0 = by appellant Gupta, A1 = only a sum of Rs. 1,133, A2 = into the Treasury, TMP = on that day]
• OF = [OV = showed, EO = the Treasury figures in the challan]
EF = [EV = misappropriated, A1 = a sum of Rs. 20,000, MNR = dishonestly]
[053] Information about named entities and their types present in various arguments of Observation or Evidence frame is important. Hence, the Observation Frame and Evidence Frame are also enriched by annotating entities such as PERSON, ORGANISATION, GEO-POLITICAL ENTITY, LOCATION, PRODUCT, EVENT, LANGUAGE, DATE, TIME, PERCENT, MONEY, QUANTITY, ORDINAL, CARDINAL, WEAPON, SUBSTANCE, DOCUMENT, ARTIFACT, WORK_OF_ART, WITNESS, BODY_PART, and VEHICLE present in the fields.
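The enrichment step may be sketched as below, assuming spaCy's stock named entity recognizer for the standard types; the domain-specific types listed above (e.g., WEAPON, SUBSTANCE, DOCUMENT, WITNESS, BODY_PART) would require a custom component such as a WordNet lookup or curated patterns, which is not shown here.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def annotate_entities(phrase):
    # Returns (entity text, entity type) pairs for one frame argument.
    doc = nlp(phrase)
    return [(ent.text, ent.label_) for ent in doc.ents]

# annotate_entities("the bank dishonored the cheque on August 25, 1965")
# -> e.g., [("August 25, 1965", "DATE")]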
[054] Witness/Testimony Information Model, also referred to as testimony structure instances: Information in witness testimonies can also be represented using the same evidence structure. The statement verbs used in witness testimony sentences (e.g., stated, said) are treated like observation verbs and represented using Observation Frames. Similarly, other action/event verbs mentioned in witness testimony sentences are represented using Evidence Frames. Table 4 shows examples of some witness/testimony sentences along with the corresponding evidence structure instances (e.g., also referred to as testimony structure instances or witness testimony structure instances; the terms are used interchangeably herein).
Table 4
He has categorically stated that by reason of enmity, A1 and A2 together have murdered his brother-in-law.
• OF = [OV = stated, A0 = He]
EF = [EV = murdered, A0 = A1 and A2 together, A1 = his brother-in-law, CAU = by reason of enmity]
Shri Dholey (PW-6) reiterated about the dacoity and claimed that a pistol was brandished on him by one of the accused persons.
• OF = [OV = claimed, A0 = Shri Dholey (PW-6)]
EF = [EV = brandished, A0 = by one of the accused persons, A1 = a pistol, LOC = on him]
Though he stated in the post-mortem report that death would have occurred about 12 hours back, he clarified that there was possibility of injuries being received at about 9 A.M.
• OF = [OV = stated, A0 = he, EO = the post-mortem report]
EF = [EV = occurred, A1 = death, TMP = about 12 hours back]
• EF = [EV = clarified, A0 = he, A1 = that there was possibility of injuries being received at about 9 A.M. Deceased Sarit Khanna was aged about 27 years]
He admitted, however, that Shri Buch had met him in connection with the covenant, but he denied that he had received any letter Exhibit P-9 from Shri Buch or the lists Exhibits P-10 to P-12 regarding his private and State properties, were a part thereof.
• OF = [OV = admitted, A0 = He]
EF = [EV = met, A0 = Shri Buch, A1 = him, CAU = in connection with the covenant]
• OF = [OV = denied, A0 = He]
EF = [EV = received, A0 = he, A1 = any letter Exhibit P-9, A2 = from Shri Buch]
[055] Referring to steps 214 through 218, generation of evidence structure instances/testimony structure instances using the sentences identified as evidence or testimony is described herein. As mentioned above, the Semantic Role Labelling technique, as known in the art, has been used by the system 100 to identify and fill the arguments of the Observation Frame and the Evidence Frame in the Evidence Structure Instance for every candidate sentence. This is demonstrated by way of an exemplary pseudo code/algorithm as depicted below. More specifically, Algorithm 1 below illustrates the generation of evidence structure instances.
Algorithm 1:
Input: s (sentence), SRLP (set of semantic frames in s as per any semantic role labeller; each frame P consists of a predicate P.V and corresponding arguments P.ARG0, P.ARG1, P.ARG2, P.ARGM-LOC, etc.)
Output: EvStructs = Evidence Structure Instances of the input sentence, each consisting of an Observation Frame (OF) and an Evidence Frame (EF)
Parameters: {OBS_VERBS = {accept, add, admit, agree, allege, allow, alter, apprise, assert, brief, build, challenge, claim, clarify, complain, confirm, corroborate, decline, demand, deny, depose, describe, disclose, dismiss, examine, exhibit, find, include, indicate, inform, mention, note, notice, observe, obtain, occur, point, prepare, present, receive, recover, refuse, reject, remember, report, reveal, say, show, state, submit, suggest, tell, withdraw}, NEG_WORDS = {no, not, neither, nor, never}}
EvStructs := Ø
OFs := Ø
// Obtain Observation Frames in the sentence s
foreach P ∈ SRLP such that P.V ∈ OBS_VERBS do
{
OF := Create empty Observation Frame
OF.V := P.V
OF.NEG := P.ARGM-NEG
OF.A0 := P.ARG0
OF.A1 := P.ARG1
// If any of the arguments of the predicate starts with a negative word, then the verb is negated.
if (OF.A0 or OF.A1 starts with any word from NEG_WORDS) then
{
OF.NEG := True
}
OF.EO := get_evidence_object(P.ARG0) ∪ get_evidence_object(P.ARGM-LOC)
OFs := OFs ∪ {OF}
}
// Obtain corresponding Evidence Frames for every Observation Frame
foreach OF ∈ OFs do
{
Found_EF := False
foreach P ∈ SRLP such that P.V occurs within the span of OF.A1 do
{
if (P.V is a copula verb and any of P.ARG0 or P.ARG1 does not exist) then
continue
EF := Create empty Evidence Frame
EF.V := P.V
EF.NEG := P.ARGM-NEG
// If any of the arguments of the predicate starts with a negative word, then the verb is negated.
if (P.ARG0 or P.ARG1 starts with any word from NEG_WORDS) then
{
EF.NEG := True
}
foreach argument ARG ∈ P.arguments do
{
EF.ARG := P.ARG
}
delete(OF.A1)
EvStruct := {(OF, EF)}
EvStructs := EvStructs ∪ EvStruct
Found_EF := True
}
// If no Evidence Frame exists for an Observation Frame, transfer the Observation Frame to the Evidence Frame
if (Found_EF == False) then
{
EF := Create empty Evidence Frame
EF.V := OF.V
P := P' ∈ SRLP such that P'.V = OF.V
// Add all the required arguments to the Evidence Frame
foreach argument ARG ∈ P.arguments do
{
EF.ARG := P.ARG
}
clear(OF)
OF.EO := get_evidence_object(P.ARG0) ∪ get_evidence_object(P.ARGM-LOC)
EvStruct := {(OF, EF)}
EvStructs := EvStructs ∪ EvStruct
}
}
return (EvStructs)
[056] Referring to steps 214 through 218, the system 100 identifies Observation Frames using Observation Verbs. For each of these Observation Frames, corresponding Evidence Objects and Evidence Frames are identified. For identifying Evidence Objects, the system 100 implemented an entity identification technique (e.g., WordNet based Entity Identification as known in the art, and the like) to identify the named entities in the sentence, which were further annotated in the extracted Frames. The Evidence Objects in a phrase are then obtained by selecting named entities annotated as one of the following types - ARTIFACT, VEHICLE, WEAPON, DOCUMENT, WORK_OF_ART, SUBSTANCE. This corresponds to the get_evidence_object function used in Algorithm 1. In other words, the one or more hardware processors 104 identify one or more evidence objects for each of the one or more observation frames. More specifically, the one or more evidence objects are identified by applying an entity identification technique (e.g., WordNet based Entity Identification – refer ‘G. A. Miller, Wordnet: a lexical database for English, Communications of the ACM 38 (1995) 39–41.’) on the first set of evidence sentences and the second set of evidence sentences to obtain one or more entities, and annotating the one or more entities in the one or more evidence frames to obtain one or more annotated entities. The one or more annotated entities serve as the one or more evidence objects being identified. Further, Observation Frames that do not contain a corresponding Evidence Frame are re-designated as stand-alone Evidence Frames. In other words, one or more observation frames that do not contain a corresponding evidence frame are identified as stand-alone evidence frames. Finally, the Evidence Frame and the Observation Frame are combined into an Evidence Structure Instance.
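For illustration, the get_evidence_object function referenced in Algorithm 1 may be sketched as below, under the assumption that each SRL argument has already been annotated with entity types by the entity identification technique described above; the function name follows Algorithm 1, while the implementation details are illustrative.
EVIDENCE_OBJECT_TYPES = {"ARTIFACT", "VEHICLE", "WEAPON", "DOCUMENT",
                         "WORK_OF_ART", "SUBSTANCE"}

def get_evidence_object(argument_entities):
    # argument_entities: iterable of (entity_text, entity_type) pairs extracted
    # from one SRL argument; returns the evidence objects found in it.
    if not argument_entities:
        return set()
    return {text for text, etype in argument_entities
            if etype in EVIDENCE_OBJECT_TYPES}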
[057] The system 100 and the method associated herein measured the accuracy of 260 Evidence Structure Instances obtained from 100 random Evidence and Testimony sentences. The accuracy of the Observation Frame extraction is 86% and that of the Evidence Frame extraction is 88%. It was observed that most of the incorrect extractions were due to parsing errors in the SRL model.
[058] To demonstrate the effectiveness of the evidence structure described herein, the method of the present disclosure has been applied to the task of prior case retrieval. This task is to create a relevance-based ranked list of court judgements (documents) in the corpus for a query. To achieve the above, say the system 100 receives/obtains an input query. The obtained input query is then represented as an evidence structure instance. A similarity score is computed between (i) the evidence structure instance and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents, to obtain a set of similarity scores. The similarity score is computed based on one or more phrase embeddings of corresponding arguments comprised in (i) the evidence structure instance associated with the input query and (ii) the one or more evidence structure instances and the one or more testimony structure instances associated with each of the one or more court judgement documents. Based on the set of similarity scores, an intermediate similarity score is derived. Using the intermediate similarity score and a pre-defined sentence-based similarity score (e.g., also referred to as a sentence BERT score), a final similarity score is generated. Based on the final similarity score, one or more relevant court judgement documents are retrieved from the plurality of court judgement documents.
[059] The above steps of prior case retrieval are better understood by way of the following description. To retrieve prior cases for an input query, the input query is represented using an Evidence Structure Instance (EvStructQ). Then the similarity of the query instance EvStructQ against each document instance EvStructD, obtained from every Evidence or Testimony sentence in the corpus, is computed. As mentioned above, for computing the similarity score, the method uses cosine similarity between the phrase embeddings of corresponding arguments of the Evidence Structure Instances. For obtaining the phrase embedding of any phrase (referred to as PhraseVec in Algorithm 2), an average of the GloVe word embeddings (e.g., refer ‘J. Pennington, R. Socher, C. D. Manning, Glove: Global vectors for word representation, in: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 2014, pp. 1532–1543.’) of the words in that phrase, excluding stop words, is computed. The similarity scores are computed within corresponding arguments of both the frames. These scores across different arguments are combined to get a similarity score between EvStructQ and EvStructD, wherein this similarity score is referred to as an intermediate similarity score. The intermediate similarity score is then multiplied by a Sentence BERT based similarity score (e.g., refer ‘J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, Bert: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018).’) between the query and the sentence containing EvStructD to obtain a final similarity score. This is necessary because errors in the automated SRL tool may lead to imperfect Evidence Structure Instances in some cases. A sentence similarity score which is not dependent on any such structure within the sentences provides a complementary view of capturing sentence similarity. Finally, the overall relevance score of the query with a document is the maximum score corresponding to any Evidence Structure Instance EvStructD obtained from the document. Below provided is Algorithm 2 that illustrates the computation of the similarity score between EvStructQ and EvStructD.
Algorithm 2:
Input: EvStructQ: Evidence Structure Instance from a query sentence Q
EvStructD: Evidence Structure Instance from a sentence D in the corpus
Output: Similarity score between EvStructQ and EvStructD
// Checking for negation
if (EvStructQ.OF.NEG ≠ EvStructD.OF.NEG) then
{
return 0
}
if (EvStructQ.EF.NEG ≠ EvStructD.EF.NEG) then
{
return 0
}
// Computing similarity between main predicates, using cosine similarity of their word embeddings
simE := CosineSim(WordVec(EvStructQ.EF.V), WordVec(EvStructD.EF.V))
// Computing similarity between corresponding Evidence Objects, using cosine similarity of their phrase embeddings
simEO := CosineSim(PhraseVec(EvStructQ.OF.EO), PhraseVec(EvStructD.OF.EO))
// Computing similarity between other arguments, using cosine similarity of their phrase embeddings
numargs := 0
simargs := 0
foreach arg ∈ (EvStructQ.EF.arguments − {V}) do
{
if (EvStructQ.EF.arg and EvStructD.EF.arg both exist) then
{
simargs := simargs + CosineSim(PhraseVec(EvStructQ.EF.arg), PhraseVec(EvStructD.EF.arg))
numargs := numargs + 1
}
}
if (numargs > 0) then
{
simargs := simargs/numargs
}
// Computing overall similarity
simfinal := simE × simargs × simEO
// The overall similarity is multiplied by the Sentence-BERT based sentence similarity between Q and D
simfinal := simfinal × CosineSim(SentVec(Q), SentVec(D))
return simfinal
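The embedding utilities assumed by Algorithm 2 may be sketched as below: PhraseVec averages the GloVe vectors of the non-stop-words in a phrase, and CosineSim is standard cosine similarity. The loading of GloVe vectors into glove (a word-to-vector dictionary) and the small stop-word list are illustrative assumptions.
import numpy as np

STOP_WORDS = {"the", "a", "an", "of", "in", "on", "by", "to", "was", "is"}

def phrase_vec(phrase, glove, dim=300):
    # Average the GloVe embeddings of the content words in the phrase.
    vecs = [glove[w] for w in phrase.lower().split()
            if w not in STOP_WORDS and w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine_sim(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0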
[060] Table 5 shows an example of how a similarity score is computed between an Evidence Structure Instance (EvStructQ) from a query and an Evidence Structure Instance (EvStructD) from a document in the corpus.
Table 5
Query: The autopsy report reveals that some poisonous compounds are found in the stomach of the deceased.
EvStructQ: OF = [OV = reveals, EO = The autopsy report]; EF = [EV = found, A1 = some poisonous compounds, LOC = in the stomach of the deceased]
Sentence: The report of the Chemical Examiner showed that a heavy concentration of arsenic was found in the viscera.
EvStructD: OF = [OV = showed, EO = The report of the Chemical Examiner]; EF = [EV = found, A1 = a heavy concentration of arsenic, LOC = in the viscera]
Similarity between main predicates, their arguments and evidence objects
simE := CosineSim(WordVec(found), WordVec(found)) = 1.0
simA1 := CosineSim(PhraseVec(some poisonous compounds), PhraseVec(a heavy concentration of arsenic)) = 0.5469
simLOC := CosineSim(PhraseVec(in the stomach of the deceased), PhraseVec(in the viscera)) = 0.3173
simargs := (simA1 + simLOC)/2.0 = 0.4321
simEO := CosineSim(PhraseVec(The autopsy report), PhraseVec(The report of the Chemical Examiner)) = 0.8641
Final similarity score
simfinal := simE × simargs × simEO × simSBERT = 1.0 × 0.4321 × 0.8641 × 0.607 = 0.2266 (Ranked within top 10 relevant documents)
[061] As shown above, the method describes a way of identifying/generating evidence structure instances and testimony structure instances. The method can be further implemented to generate legal event structure instances. For instance, a third set of linguistic rules is applied on one or more sentences (e.g., a third set of sentences) comprising one or more court judgements in the plurality of court judgement documents specific to the one or more prior court cases to identify a set of legal event sentences. The third set of linguistic rules may be applied on the same set of sentences (e.g., the first set of sentences) as the sentences identified during application of the first set of linguistic rules and/or the second set of linguistic rules, in one example embodiment. The third set of linguistic rules may be applied on at least a subset of sentences (e.g., the first set of sentences and/or the second set of sentences, if different from each other) as the sentences identified during application of the first set of linguistic rules and/or the second set of linguistic rules, in another example embodiment. The sentences (e.g., the third set of sentences) identified for applying the third set of linguistic rules may be different from the sets of sentences identified (e.g., the first set of sentences and the second set of sentences) during application of the first set of linguistic rules and/or the second set of linguistic rules, in yet another embodiment of the present disclosure. One or more evidence frames are identified from the set of legal event sentences. Each of the one or more evidence frames comprises at least one legal event. Further, the method includes generating one or more legal event structure instances based on the one or more evidence frames. Each legal event structure instance comprises at least one corresponding evidence frame. A legal event may refer to one or more legal actions that are performed by or involve the courts or law enforcement agencies. The above steps of generating legal event structure instances may be better understood by way of the following description. Below is the exemplary set of linguistic rules (e.g., the third set of linguistic rules), followed by an illustrative sketch of how such rules may be realized:
Sentence S is classified as a Legal event sentence if:
i) Sentence S should contain at least one legal action/event verb from a pre-defined set of verbs such as charged, arrested, filed, etc.
ii) The subject, object, or passive subject of the legal action/event verb should also contain some legal actor such as judge, court, accused, appellant, respondent, police, etc.
iii) And S should not be an evidence or a testimony sentence.
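By way of illustration only, the above rules may be sketched as below using a dependency parser such as spaCy; the verb and actor lexicons shown are small illustrative subsets of the pre-defined sets mentioned above, and the exclusion of evidence/testimony sentences (rule iii) is assumed to be applied by the caller.
import spacy

nlp = spacy.load("en_core_web_sm")
LEGAL_VERBS = {"charge", "arrest", "file", "convict", "acquit", "sentence"}
LEGAL_ACTORS = {"judge", "court", "accused", "appellant", "respondent", "police"}

def is_legal_event_sentence(sentence):
    doc = nlp(sentence)
    for tok in doc:
        if tok.pos_ == "VERB" and tok.lemma_ in LEGAL_VERBS:
            # Rule (ii): the subject, object, or passive subject/agent of the
            # legal verb should mention a legal actor.
            for child in tok.children:
                if child.dep_ in {"nsubj", "nsubjpass", "dobj", "agent"}:
                    span = " ".join(t.text.lower() for t in child.subtree)
                    if any(actor in span for actor in LEGAL_ACTORS):
                        return True
    return False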
[062] By applying the above third set of linguistic rules on the sentences, the set of legal event sentences is identified. Examples of legal event sentences as identified by the system 100 upon applying the above rules include, but are not limited to:
1. Prior to that the respondent was arrested in New Delhi by the Central Bureau of Investigation, Bank Securities and Fraud Cell, New Delhi in connection with CBI Case No. RC 4(E)/200 3-BS & F C CBI.
2. The High Court reversed the findings of the Commissioner and concluded that the vehicle was not insured with National Insurance Company Ltd. on the date of the accident and, therefore, the Insurance Company was absolved from its liability to compensate the workmen.
3. The trial Court acquitted the Appellant of the charge under Section 302 I.P.C, but convicted him of the charge under Section 304-B I.P.C.
[063] Upon identifying the above legal event sentences, one or more legal event structure instances are generated as shown below by way of examples:
Legal event structure instances:
Sentence: Prior to that the respondent was arrested in New Delhi by the Central Bureau of Investigation, Bank Securities and Fraud Cell, New Delhi in connection with CBI Case No. RC 4(E)/200 3-BS & F C CBI.
OF = []
EF = [EV = arrested, A1 = the respondent, A0 = by the Central Bureau of Investigation, Bank Securities and Fraud Cell, New Delhi, A2 = in connection with CBI Case No. RC 4(E)/200 3-BS & F C CBI, LOC = in New Delhi]
[064] The system then retrieves a prior case for an input query as described above. For instance, an input query is obtained and is represented as EvStructQ. A similarity score is computed between (i) the evidence structure instance (EvStructQ) and (ii) the one or more evidence structure instances, the one or more testimony structure instances, and the one or more legal event structure instances associated with each of the one or more court judgement documents, to obtain a set of similarity scores. As mentioned above, the similarity score is computed based on one or more phrase embeddings of corresponding arguments comprised in (i) the evidence structure instance associated with the input query and (ii) the one or more evidence structure instances, the one or more testimony structure instances, and the one or more legal event structure instances associated with each of the one or more court judgement documents comprised in the corpus stored in the database 108/memory 102. Based on the set of similarity scores, an intermediate similarity score is derived. Using the intermediate similarity score and a pre-defined sentence-based similarity score (e.g., also referred to as a sentence BERT score), a final similarity score is generated. Based on the final similarity score, one or more relevant court judgement documents are retrieved from the plurality of court judgement documents. For the sake of brevity, the algorithm for prior case retrieval using legal event structure instances is not shown. However, it is to be understood by a person having ordinary skill in the art that such an algorithm can be realized, and the examples of such algorithms as described herein shall not be construed as limiting the scope of the present disclosure.
Experimental evaluation:
[065] The present disclosure and its system and method discuss experiments, including the dataset, baseline techniques, evaluation metrics, and analysis of results, as below:
Dataset:
[066] The system 100 used the Indian Supreme Court judgements from the years 1952 to 2012 available at http://liiofindia.org/in/cases/cen/INSC/. There are 30,032 court judgements (documents) containing 4,111,091 sentences, where the average sentence length is 31 words with a standard deviation of 24.
Baselines:
[067] For the task of prior case retrieval, the system and method described herein implemented two baseline techniques:
1. BM25: It is a popular TF-IDF based relevance computation technique. The system 100 uses the BM25+ variant as described in Trotman et al. (e.g., refer ‘A. Trotman, A. Puurula, B. Burgess, Improvements to bm25 and language models examined, in: Proceedings of the 2014 Australasian Document Computing Symposium, 2014, pp. 58–65.’). This technique uses a bag-of-words approach and hence word order and sentence structure are ignored. The system 100 used 4 different settings for this baseline:
a. BM25all: All sentences in each document are considered.
b. BM25TE: Only those sentences in each document which are identified as Testimony or Evidence are considered.
c. BM25T: Only Testimony sentences in each document are considered.
d. BM25E: Only Evidence sentences in each document are considered.
2. Sentence-BERT: It is a technique based on Siamese-BERT networks to obtain more meaningful sentence embeddings. The system 100 used the pre-trained model bert-base-nli-stsb-mean-tokens to obtain sentence embeddings for sentences in both query and documents. Following Ghosh et al., the system 100 used the pre-trained model as is and did not fine-tune it further. This is because such fine-tuning needs annotated sentence pairs with labels indicating whether the sentences in the pair are semantically similar or not. Such an annotated dataset is expensive to create, and the aim of the present disclosure/application is to avoid any dependence on manually annotated training data. Like Ghosh et al., the system 100 used sentence embeddings obtained by Sentence-BERT to compute the cosine similarity between a query sentence and a sentence in a document. The overall similarity of a document with a query is the maximum cosine similarity obtained for any of its sentences with the query sentence. The following settings were used for this baseline (see the sketch after this list):
1. SBTE: Only those sentences in each document which are identified as Testimony or Evidence are considered.
2. SBT: Only Testimony sentences in each document are considered.
3. SBE: Only Evidence sentences in each document are considered.
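For illustration, both baselines may be sketched as below, assuming the rank_bm25 package (whose BM25Plus class implements a BM25+ variant) and a recent version of the sentence-transformers package with the pre-trained checkpoint named above; the two-document corpus is a toy example.
from rank_bm25 import BM25Plus
from sentence_transformers import SentenceTransformer, util

docs = ["the bank dishonoured the cheque due to insufficient balance",
        "the accused was arrested in connection with the case"]
query = "a cheque was dishonoured due to insufficient funds"

# BM25+ over whitespace-tokenized text (word order ignored).
bm25 = BM25Plus([d.split() for d in docs])
bm25_scores = bm25.get_scores(query.split())

# Sentence-BERT cosine similarity between the query and document sentences.
model = SentenceTransformer("bert-base-nli-stsb-mean-tokens")
q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(docs, convert_to_tensor=True)
sbert_scores = util.cos_sim(q_emb, d_emb)[0].tolist()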
Evaluation:
[068] All the baseline techniques and the proposed technique have been evaluated on a set of queries, using evaluation metrics that compare the ranked lists produced by each of these techniques.
[069] Queries: The system 100 chose 10 different queries (shown in Table 6) which are diverse in nature in terms of the type of case (domestic violence, financial fraud, etc.) and the evidence object in focus. Ground Truth: In order to create a set of gold-standard relevant documents for each query, the present disclosure and its system and method employed the standard pooling technique (e.g., refer ‘C. Manning, P. Raghavan, H. Schutze, Introduction to information retrieval, Natural Language Engineering 16 (2010) 100–103.’ – also referred to as Manning et al.) to collate a set of candidate documents for manual verification. The following techniques were run to produce a ranked list of documents for each query – BM25all, BM25TE, SBTE, and the method of the present disclosure SemMatchTE (SMTE). The present disclosure and its system and method chose the top 10 documents from the ranked list produced by each technique, and then human experts verified whether each of those documents is relevant for the query or not. Finally, after discarding all the irrelevant documents, a set of gold-standard relevant documents was obtained for each query.
[070] Metrics: The system 100 used R-Precision and Average Precision as two evaluation metrics (e.g., refer Manning et al.).
1. R-Precision (R-Prec): This calculates the precision observed at rank R, where R is the number of relevant documents.
2. Average Precision (AP): This captures the joint effect of Precision and Recall. It computes the precision at the rank of each relevant document in the predicted ranked list and then computes the mean of these precision values.
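Under the usual definitions, the two metrics may be computed as sketched below; the ranked list and the set of relevant document identifiers are assumed inputs.
def r_precision(ranked_ids, relevant_ids):
    # Precision over the top R results, where R = number of relevant documents.
    r = len(relevant_ids)
    return sum(1 for d in ranked_ids[:r] if d in relevant_ids) / r if r else 0.0

def average_precision(ranked_ids, relevant_ids):
    # Mean of the precision values at the rank of each relevant document.
    hits, precisions = 0, []
    for rank, d in enumerate(ranked_ids, start=1):
        if d in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0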
[071] More specifically, Table 6 depicts the evaluation of various techniques for the task of prior case retrieval. All entries are of the form (R-Prec; Avg. Precision). (Note: the method of the present disclosure is referred to as Semantic matching (or SemMatch / SM). Values in bold indicate the best performing results for each query across multiple techniques).
Table 6
What are the cases where:
Q1: blood stains were found on clothes of the deceased.
Q2: the deceased had attacked some person with sticks.
Q3: the police have murdered the deceased.
Q4: some evidence shows that the exhibited gun was not used.
Q5: the autopsy report reveals that some poisonous compounds are found in the stomach of the deceased.
Q6: the deceased is attacked with a knife.
Q7: a letter by the deceased reveal that dowry was demanded.
Q8: a cheque was dishonoured due to insufficient funds.
Q9: bribe was demanded by police.
Q10: a signature was forged on an affidavit.
Query BM25all BM25T BM25E BM25TE SBT SBE SBTE SMT SME SMTE
Q1 0.24; 0.26 0.06; 0.02 0.59; 0.49 0.59; 0.52 0.00; 0.01 0.24; 0.15 0.18; 0.14 0.00; 0.01 0.24; 0.16 0.24; 0.14
Q2 0.25; 0.43 0.00; 0.05 0.00; 0.04 0.00; 0.06 0.00; 0.01 0.00; 0.00 0.00; 0.00 0.25; 0.14 0.25; 0.25 0.50; 0.30
Q3 0.00; 0.01 0.00; 0.03 0.33; 0.33 0.33; 0.35 0.33; 0.12 0.00; 0.00 0.00; 0.09 0.33; 0.12 0.00; 0.00 0.33; 0.12
Q4 0.17; 0.06 0.00; 0.01 0.00; 0.02 0.00; 0.04 0.00; 0.01 0.42; 0.25 0.42; 0.22 0.08; 0.04 0.25; 0.27 0.33; 0.29
Q5 0.30; 0.43 0.10; 0.05 0.40; 0.35 0.40; 0.37 0.20; 0.15 0.70; 0.80 0.70; 0.80 0.00; 0.02 0.40; 0.40 0.40; 0.40
Q6 0.31; 0.42 0.33; 0.28 0.38; 0.35 0.46; 0.52 0.23; 0.14 0.33; 0.38 0.36; 0.40 0.20; 0.18 0.28; 0.27 0.41; 0.42
Q7 0.25; 0.35 0.00; 0.08 0.50; 0.54 0.50; 0.33 0.00; 0.04 0.00; 0.12 0.00; 0.09 0.25; 0.06 0.00; 0.00 0.25; 0.06
Q8 0.48; 0.46 0.01; 0.09 0.67; 0.71 0.71; 0.73 0.05; 0.02 0.62; 0.67 0.62; 0.67 0.00; 0.00 0.57; 0.63 0.57; 0.64
Q9 0.20; 0.23 0.20; 0.17 0.20; 0.21 0.40; 0.31 0.40; 0.39 0.20; 0.21 0.50; 0.51 0.40; 0.41 0.10; 0.12 0.50; 0.48
Q10 0.50; 0.52 0.00; 0.11 0.25; 0.16 0.25; 0.21 0.00; 0.01 0.00; 0.04 0.00; 0.03 0.25; 0.13 0.50; 0.61 0.50; 0.61
Avg 0.27; 0.32 0.08; 0.09 0.33; 0.32 0.36; 0.34 0.12; 0.09 0.25; 0.26 0.28; 0.30 0.18; 0.11 0.26; 0.27 0.40; 0.35
[072] Table 6 shows comparative evaluation results for the various baselines and the method of the present disclosure. The average performance of BM25TE is better than that of BM25all, indicating that considering only Evidence and Testimony sentences for representing any document results in better prior case retrieval performance. The other two techniques, SB (Sentence-BERT) and SM (the method of the present disclosure - SemMatch), also consider only Evidence and Testimony sentences rather than all the sentences in a document. All the baselines which consider only Testimony sentences perform poorly as compared to the corresponding techniques using both Testimony and Evidence sentences. This highlights the importance of evidence information as compared to using only witness testimony information for prior case retrieval, as done conventionally in Ghosh et al.
[073] Considering the average performance across all the 10 queries, the method of the present disclosure SMTE is the best performing technique in terms of both R-Prec and AP. The performance of SMTE is also the most consistent across all these diverse queries, in the sense that it achieves a minimum R-Prec of 0.24 (for Q1), whereas other baselines like BM25all, BM25TE and SBTE have a minimum R-Prec of 0 for some queries. As described in Algorithm 2, SM makes use of Sentence-BERT based similarity within sentences for producing a better matching score. The system 100 of the present disclosure experimented with a variant of SM which does not rely on Sentence-BERT based similarity. This variant resulted in an average R-Prec of 0.36 and MAP of 0.30 across all the 10 queries. Although this is lower than the SMTE performance, the R-Prec is still comparable with that of BM25TE (avg R-Prec of 0.36) and better than that of SBTE (avg R-Prec of 0.28).
[074] For some queries, it is important to have some semantic understanding at the sentence level. Especially for the query Q4, which contains "negation", SB and SM are able to capture the query's meaning in a better way. SM handles such negations in a more principled manner, as the Evidence Structure Instance captures negation as one of its arguments.
[075] For SM, the maximum matching score achieved for any Evidence Structure Instance in a document is considered as the overall matching score with the whole document. In contrast, BM25 based techniques directly compute a matching score for the whole document, as they do not rely on sentence structure. However, as SM computes matching scores for individual Evidence Structure Instances, it is able to provide a better interpretation for each relevant document in terms of the actual sentences which provided the maximum matching score.
[076] Analysis of errors: The method and system described herein analyzed a few cases where SMTE was not able to assign a high score to a relevant document or assigned a high score to a non-relevant document. It was observed that there are 3 main reasons - missing or incorrect arguments within Evidence Structure Instances, misleading high similarity between argument phrases, and the presence of unresolved co-references.
[077] Consider the following sentence, for which SMTE incorrectly assigns a high matching score for query Q5 (see Table 6) – The police report also reveals that three pieces of pellets were found by the doctor in the body of deceased Monu. Here, except for the A1 argument (some poisonous compounds vs three pieces of pellets), the other arguments in the Evidence Structure Instances are similar in meaning. A cosine similarity of 0.36 was obtained between poisonous compounds and three pieces of pellets, which is misleading because it is not too low compared to a case where the argument phrases are semantically similar (e.g., the cosine similarity between some poisonous compounds and a heavy concentration of arsenic is just 0.55, as shown in Table 5). Further, as co-references are not resolved, a few relevant documents may be missed, e.g., SMTE does not assign a high score to the following document for query Q3 (see Table 6) – Instead of surrendering before the police, the deceased had attempted to kill the police. In retaliation, he was shot by them. This is because them in the Evidence Structure Instance for shot is not explicitly known to correspond to the police in the previous sentence.
[078] Embodiments of the present disclosure provide a system and method for generating legal structure instances for prior court case retrieval. More specifically, the method of the present disclosure (or present application) identifies evidence/testimony/non-evidence/non-testimony sentences, represents them in semantically rich legal structures (e.g., evidence structure instances, testimony structure instances, legal event structure instances), and retrieves relevant prior cases by exploiting them. The method described herein implemented a weakly supervised classifier, as it does not rely on any manually annotated training data, except for the reliance on human expertise in designing the linguistic rules. Keeping in mind the importance of witness testimonies/legal event(s) in addition to evidence(s), the system and method also extracted and represented the information about witness testimonies and legal events using the same Evidence Structure approach. For the application of prior case retrieval, the method of the present disclosure was evaluated along with several competitive baselines on a dataset of 10 diverse queries, and it was demonstrated through the experiments described above that the method of the present disclosure outperforms the baselines. The results highlight the importance of evidence and testimony information and its contribution to improving prior case retrieval performance. It is to be understood by a person having ordinary skill in the art that though the experiments involve results related to evidence structure instances and testimony structure instances, legal event structure instances can also be used for prior case retrieval, thereby enhancing/improving the retrieval accuracy.
[079] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[080] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g., hardware means like e.g., an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g., using a plurality of CPUs.
[081] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[082] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[083] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[084] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.