
System And Method For Sequence Labeling Using Hierarchical Capsule Based Neural Network

Abstract: This disclosure relates generally to sequence labeling, and more particularly to a method and system for sequence labeling. The method includes employing a hierarchical capsule based neural network for sequence labeling, which includes a sentence encoding layer (having a word embedding layer, a feature extraction layer and multiple capsule layers), a document encoding layer, Bi-LSTMs, a fully connected layer and a conditional random fields (CRF) layer. The word embedding layer obtains fixed-size vector representations of the words of sentences associated with a dialogue or an abstract, the feature extraction layer then encodes the sentences, and the capsule layer extracts high-level features from each sentence. All the sentence encodings are then stacked up together and are passed through another Bi-LSTM layer to derive the contextual information from the sentences. A fully connected layer calculates likelihood scores. The CRF layer obtains an optimized label sequence based on the likelihood scores. [To be published with FIGS. 2A and 2B]


Patent Information

Filing Date: 28 June 2019
Publication Number: 01/2021
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Email: kcopatents@khaitanco.com
Grant Date: 2024-05-13

Applicants

Tata Consultancy Services Limited
Nirmal Building, 9th Floor, Nariman Point Mumbai - 400021 Maharashtra, India

Inventors

1. AGARWAL, Puneet
Tata Consultancy Services Limited Plot no. A-44 & A45, Ground, 1st to 05th floor & 10th floor Block C&D, Sector 62, Noida - 201309 Uttar Pradesh, India
2. SRIVASTAVA, Saurabh
Tata Consultancy Services Limited Plot no. A-44 & A45, Ground, 1st to 05th floor & 10th floor Block C&D, Sector 62, Noida - 201309 Uttar Pradesh, India
3. VIG, Lovekesh
Tata Consultancy Services Limited Plot no. A-44 & A45, Ground, 1st to 05th floor & 10th floor Block C&D, Sector 62, Noida - 201309 Uttar Pradesh, India
4. SHROFF, Gautam
Tata Consultancy Services Limited Plot no. A-44 & A45, Ground, 1st to 05th floor & 10th floor Block C&D, Sector 62, Noida - 201309 Uttar Pradesh, India

Specification

FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention
SYSTEM AND METHOD FOR SEQUENCE LABELING USING HIERARCHICAL CAPSULE BASED NEURAL NETWORK
Applicant
Tata Consultancy Services Limited
A company incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
Preamble to the description
The following specification particularly describes the invention and the manner in which it is to be performed.

TECHNICAL FIELD
[001] The disclosure herein generally relates to sequence labeling, and, more particularly, to a system and method for sequence labeling using hierarchical capsule based neural networks.

BACKGROUND
[002] In Natural Language Processing (NLP), maintaining a memory of history plays an important role in many tasks. While reading a book, for example, a reader forms certain summaries or short stories that represent the key aspects of the book; these are referred to as 'context' in NLP.
[003] In many NLP areas, including but not limited to Dialogue Systems, Scientific Abstract Classification, and Part-of-Speech tagging, the treatment of context plays an important role for text classification. The context is sometimes arranged hierarchically, i.e., something from each of the previous statements needs to be remembered. Also, sometimes the context is present sporadically in some of the previous sentences. The presence of such context makes the task of sentence classification even more challenging.

SUMMARY
[004] Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a method for sequence labeling is provided. The method includes employing, via one or more hardware processors, a hierarchical capsule based neural network for sequence labeling. The hierarchical capsule based neural network includes a sentence encoding layer, a document encoding layer, a fully connected layer and a conditional random fields (CRF) layer, the sentence encoding layer comprising a word embedding layer, a feature extraction layer composed of a first plurality of Bi-LSTMs, a primary capsule layer, and convolutional capsule layers, and the document encoding layer comprising a second plurality of Bi-LSTMs. Employing the hierarchical capsule based neural network for sequence labeling includes determining, by the word embedding layer, initial sentence representations for a plurality of sentences associated with a task. Each of the sentence representations includes a concatenated embedding vector. The concatenated embedding vector includes a fixed-length vector corresponding to each word of the sentence. A fixed-length vector corresponding to a sentence is representative of lexical-semantics of words of the sentence. The feature extraction layer encodes contextual semantics between words within a sentence of the plurality of sentences using the concatenated embedding vector associated with each sentence of the plurality of sentences to obtain a plurality of context vectors. The method further includes convolving with the plurality of context vectors (Ci) while skipping one or more context vectors in between, by the primary capsule layer comprising a filter, to obtain a capsule map comprising a plurality of contextual capsules associated with the plurality of sentences. The plurality of contextual capsules are connected in multiple levels using shared transformation matrices and a routing model. A number of the one or more skipped context vectors is a dilation rate (dr). A final sentence representation is computed for the plurality of sentences. Computing the final sentence representation includes determining coupling strength between child-parent pair contextual capsules. Contextual information between the plurality of sentences is obtained by the second plurality of Bi-LSTMs. The contextual information is obtained using the final sentence representations associated with the plurality of sentences. The second plurality of Bi-LSTMs takes sentences at multiple time steps and produces a sequence of hidden state vectors corresponding to each of the plurality of sentences. The hidden state vectors are passed through the feed forward layer to output likelihood scores for possible labels for each sentence of the plurality of sentences. An optimized label sequence is obtained for the plurality of sentences by the CRF layer based at least on a sum of possible labels weighted by the likelihood scores.
[005] In another aspect, a system for sequence labeling is provided. The system includes one or more memories; and one or more hardware processors, the one or more memories coupled to the one or more hardware processors, wherein the one or more hardware processors are configured to execute programmed instructions stored in the one or more memories to employ a hierarchical capsule based neural network for sequence labeling. The hierarchical capsule based neural network includes a sentence encoding layer, a document encoding layer, a fully connected layer and a conditional random fields (CRF) layer, the sentence encoding layer comprising a word embedding layer, a feature extraction layer composed of a first plurality of Bi-LSTMs, a primary capsule layer, and convolutional capsule layers, and the document encoding layer comprising a second plurality of Bi-LSTMs. Employing the hierarchical capsule based neural network for sequence labeling includes determining, by the word embedding layer, initial sentence representations for a plurality of sentences associated with a task. Each of the sentence representations includes a concatenated embedding vector. The concatenated embedding vector includes a fixed-length vector corresponding to each word of the sentence. A fixed-length vector corresponding to a sentence is representative of lexical-semantics of words of the sentence. The feature extraction layer encodes contextual semantics between words within a sentence of the plurality of sentences using the concatenated embedding vector associated with each sentence of the plurality of sentences to obtain a plurality of context vectors. Employing the hierarchical capsule based neural network further includes convolving with the plurality of context vectors (Ci) while skipping one or more context vectors in between, by the primary capsule layer comprising a filter, to obtain a capsule map comprising a plurality of contextual capsules associated with the plurality of sentences. The plurality of contextual capsules are connected in multiple levels using shared transformation matrices and a routing model. A number of the one or more skipped context vectors is a dilation rate (dr). A final sentence representation is computed for the plurality of sentences. Computing the final sentence representation includes determining coupling strength between child-parent pair contextual capsules. Contextual information between the plurality of sentences is obtained by the second plurality of Bi-LSTMs. The contextual information is obtained using the final sentence representations associated with the plurality of sentences. The second plurality of Bi-LSTMs takes sentences at multiple time steps and produces a sequence of hidden state vectors corresponding to each of the plurality of sentences. The hidden state vectors are passed through the feed forward layer to output likelihood scores for possible labels for each sentence of the plurality of sentences. An optimized label sequence is obtained for the plurality of sentences by the CRF layer based at least on a sum of possible labels weighted by the likelihood scores.
[006] In yet another aspect, a non-transitory computer readable medium for a method of sequence labeling is provided. The method includes employing, via one or more hardware processors, a hierarchical capsule based neural network for sequence labeling. The hierarchical capsule based neural network includes a sentence encoding layer, a document encoding layer, a fully connected layer and a conditional random fields (CRF) layer, the sentence encoding layer comprising a word embedding layer, a feature extraction layer composed of a first plurality of Bi-LSTMs, a primary capsule layer, and convolutional capsule layers, and the document encoding layer comprising a second plurality of Bi-LSTMs. Employing the hierarchical capsule based neural network for sequence labeling includes determining, by the word embedding layer, initial sentence representations for a plurality of sentences associated with a task. Each of the sentence representations includes a concatenated embedding vector. The concatenated embedding vector includes a fixed-length vector corresponding to each word of the sentence. A fixed-length vector corresponding to a sentence is representative of lexical-semantics of words of the sentence. The feature extraction layer encodes contextual semantics between words within a sentence of the plurality of sentences using the concatenated embedding vector associated with each sentence of the plurality of sentences to obtain a plurality of context vectors. The method further includes convolving with the plurality of context vectors (Ci) while skipping one or more context vectors in between, by the primary capsule layer comprising a filter, to obtain a capsule map comprising a plurality of contextual capsules associated with the plurality of sentences. The plurality of contextual capsules are connected in multiple levels using shared transformation matrices and a routing model. A number of the one or more skipped context vectors is a dilation rate (dr). A final sentence representation is computed for the plurality of sentences. Computing the final sentence representation includes determining coupling strength between child-parent pair contextual capsules. Contextual information between the plurality of sentences is obtained by the second plurality of Bi-LSTMs. The contextual information is obtained using the final sentence representations associated with the plurality of sentences. The second plurality of Bi-LSTMs takes sentences at multiple time steps and produces a sequence of hidden state vectors corresponding to each of the plurality of sentences. The hidden state vectors are passed through the feed forward layer to output likelihood scores for possible labels for each sentence of the plurality of sentences. An optimized label sequence is obtained for the plurality of sentences by the CRF layer based at least on a sum of possible labels weighted by the likelihood scores.
[007] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS
[008] The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
[009] FIG. 1 illustrates an exemplary network environment for implementation of a system for sequence labeling using a hierarchical capsule based neural network, according to some embodiments of the present disclosure.
[010] FIGS. 2A and 2B illustrate a flow diagram for a method for sequence labeling using a hierarchical capsule based neural network, in accordance with an example embodiment of the present disclosure.
[011] FIG. 3 illustrates an example block diagram of a hierarchical capsule based neural network for sequence labeling, in accordance with an example embodiment of the present disclosure.

[012] FIG. 4 illustrates a sentence encoder of the hierarchical capsule based NN of FIG. 3, in accordance with an example embodiment of the present disclosure.
[013] FIG. 5 illustrates a capsule layer of the sentence encoder of the hierarchical capsule based neural network of FIG. 3, in accordance with an example embodiment of the present disclosure.
[014] FIG. 6 illustrates a document encoder and a CRF layer of the hierarchical capsule based neural network of FIG. 3, in accordance with an example embodiment of the present disclosure.
[015] FIG. 7 illustrates a block diagram of an exemplary computer system for implementing embodiments consistent with the present disclosure.

DETAILED DESCRIPTION OF EMBODIMENTS
[016] Sequence labeling is one of the most prominent tasks in NLP. Sequence labeling refers to assigning a sequence of labels to a sequence of objects. For instance, in NLP, sequence labeling may have applications in speech recognition, POS tagging, named-entity recognition, and so on.
[017] In Natural Language Processing (NLP), maintaining a memory of history plays an important role in many tasks. For example, while reading a long book the reader may remember key aspects of sentences and, at the end of a paragraph, can create a short summary of what was read. The reader may mentally collect these 'short stories' or 'summaries' while moving forward in the book and finally amalgamate these summaries to create a long sustaining memory of the book, which typically lacks verbiage (words or phrases that do not play a crucial role in remembering key aspects) and focuses mainly on the critical points of the book. These 'short stories' or 'summaries' are referred to as 'context' in NLP.
[018] Sequence labeling is one of the important tasks in text classification problems and requires proper treatment of the context, making this task different from general text classification. Traditional text classification models do not carry context from one sentence to another and hence may not perform well on tasks associated with NLP, such as Dialogue Systems, Scientific Abstract Classification, Part-of-Speech tagging, and so on. These traditional models lack a hierarchical structure that can aid them in dissecting the input structure at different levels to allow flow of context between sentences. The aforementioned tasks tend to be complex because of improper treatment of context. The context is sometimes arranged hierarchically, meaning thereby that the model may need to remember something from each of the previous statements. Also, sometimes the context is present sporadically in some of the previous sentences. The presence of such context makes the task of sentence classification even more challenging.
[019] Prominent examples of sequence labeling tasks include dialogue act classification and scientific abstract classification. These tasks are described in more detail in the description below. It will be noted herein that even though some of the embodiments are described with reference to the tasks of dialogue act classification and scientific abstract classification, the embodiments are equally applicable to various other NLP based tasks.
[020] Dialogue act classification: The study of utterances in dialogues is an intriguing challenge in the field of computational linguistics and has been studied from a variety of perspectives, such as linguistics and psychology, which have been extended to computational strategies. Study of these acts can help in understanding the discourse structure, which in turn can be used to enhance the capabilities of conversational systems. Still, there is an absence of a definite model for understanding the discourse structure, which typically consists of unconstrained interactions between humans. Dialogue Acts (DAs), however, are one of the traits that can be used to understand these complex structures. Dialogue Acts have proven their usefulness in many NLP problems like Spoken Language Understanding (SLU). For example, DAs can be domain-dependent intents, such as "Find books" or "Show flights" in the "Library" and "Flight" domains. DAs are also used in many Machine Translation tasks where the goal of the Automatic Speech Recognition (ASR) tool is to understand the utterance and respond accordingly; DAs are used in this practice to increase the word recognition accuracy. DAs are also widely used to animate the model of a talking head by making facial expressions resembling those of human beings, e.g., if a surprising statement has been made to the model, it can imitate a human making a bewildered expression.
[021] Scientific Abstract Classification: A plethora of scientific articles is published every year; it has been observed that more than 50 million scientific articles have been published till now, with more and more of these articles coming out each year. With such a large number of scientific articles being published every year, the process of categorizing them properly or searching for a relevant piece of information has become an arduous task. Thus, an organized system of these articles can facilitate the process of searching. To create such systems, an intelligent tool is needed that can facilitate the categorization of these scientific articles into meaningful classes based on the information within them. Ideally, someone seeking relevant information looks up the abstract of a scientific article and categorizes its sentences into one of the different categories like objective, solution, results, and conclusion. However, these categories are not stated explicitly in these articles and hence one can find it difficult to comprehend them. One of the major challenges with these scientific articles arises due to the adherence of their writers to a variety of writing styles, which makes them unstructured and hard to extract the essential elements from. For example, one writer may think it good to introduce background knowledge before providing the results of the work while, on the other hand, another can describe their objective before providing the results. Hence, it would be beneficial to develop an intelligent tool that can extract these elements categorically, thereby saving both time and human effort.
[022] In both of the tasks defined above, namely dialogue act classification and scientific abstract classification, a proper understanding of context is required to categorize a sentence. This makes the task of sequence labeling different from traditional text classification. Amalgamation of important contextual and current information to classify the sentence is at the center of the sequence labeling task and has traditionally been solved with the help of Conditional Random Fields (CRFs), Hidden Markov Models (HMMs), and so on. CRFs have been proven to perform well in many NLP tasks like POS tagging, Named Entity Recognition, and so on, each of which requires contextual information and past history of sequences to categorize the current input.
[023] Initially, sequence labeling problems were tackled with one of the pre-eminent models in Machine Learning, i.e., Hidden Markov Models (HMMs). From their advent, HMMs have been used in many sequence labeling problems and have been used widely for many text processing problems. Later, the CRF, which can be seen as an extension of the HMM, was proposed. CRFs have been used in many challenging text processing problems and were dominant in most of them. One shortcoming of such approaches was the need to manually provide features to them, which is a time-consuming process.
[024] Later, deep learning based models were introduced that focused on using RNNs, CNNs, and sometimes a hybrid model combining both of them. These models are equipped with the capability to automatically capture the relevant features from the input and do not completely rely on human intervention. CNNs are known to perform well on short texts and to extract N-gram phrases; however, they pose the difficult task of choosing an optimal window size. Further, dilated CNNs were introduced. To eschew the process of selecting an optimal window size, RNNs can be used, which use their recurrent structure to capture long-term dependencies. RNNs also put forward some difficulties, as they are said to focus on the current input and its neighboring words at any time step, which consequently results in a summary biased towards the extreme ends of the sentence. On reaching the extremes, there is a possibility for them to forget necessary information which can appear in the middle of the document.
[025] Hybrid approaches combining both CNNs and RNNs have been proposed to overcome the shortcomings of each other. Capsule Networks have been shown to perform well on some text classification tasks. Nonetheless, these models only consider the current input for classification and hence lack the context semantics generated by the neighboring sentences, which is beneficial in the sequence labeling task.
[026] To optimize sequence labeling operations, many techniques combining deep learning approaches with CRFs have been shown to outperform many state-of-the-art approaches. A combination of Bi-LSTM and CRF is supposed to incorporate the contextual information obtained from RNNs with the inter-dependence between subsequent labels, improving the labeling process. In a conventional system, a Multi-Hop attention layer is used with CRFs to combine sentence level contextual information and interdependency between the layers to strengthen the training process. Said system also replaced the character-based word embedding with attention-based pooling on both RNNs and CNNs.
[027] Various embodiments disclosed herein provide a system and method for sequence labeling using a hierarchical capsule based neural network. For example, in one embodiment, the method includes obtaining a sentence representation of sentences associated with a task by encoding input words of the sentence into fixed-length vectors. After obtaining the word representations, said word representations are concatenated into one single vector of fixed length. This fixed-length vector is then passed, first to a feature extraction layer and then to a layer of capsules, to further squeeze out the essential word level features. Similarly, sentence representations for all the sentences in an abstract or dialogue associated with the task are obtained, stacked up together, and are then passed to a bidirectional long short term memory (Bi-LSTM) layer to get a document representation (whole dialogue and/or abstract representation) enriched with contextual information. The representation obtained at each time-step is used for calculating the likelihood of each label. Finally, with the help of the CRF layer, an optimal label sequence, i.e., a label for every sentence of the document, is obtained by remembering the label history.
[028] An important technical contribution of the disclosed embodiments is a hierarchical neural network architecture which obtains the sentence representation using capsules. It will be understood that obtaining an intermediate representation in an NLP sequence labeling task using capsules has the technical advantage of reducing model training time, the number of parameters and complexity as compared to conventional models such as attention and transformer based models, and so on.
[029] RNNs are known to extract contextual information by focusing only on the neighboring words. As a consequence, the final hidden state representation (which is normally used in text classification problems) may have contextual information which is biased toward the extremes of the sentence. For calculating a representation vector against each sentence, the disclosed system convolves hidden activations of RNN units (summaries of the text) which are separated by a fixed distance referred to as the 'dilation-rate'. This method allows the disclosed system to focus not only on the neighboring vectors but also on the hidden state vectors that are scattered across the sentence.
[030] Conventionally, CNNs, despite their capability to extract N-word phrases, have the problem of selecting an optimal window size. A short window size may result in lossy information compression, while an increase in window size may lead to an increase in the number of parameters, which increases the burden of the training process. Various embodiments disclose a method of first extracting the smoothened contextual information as low-level features by using the Bi-LSTM layer instead of CNNs. These smoothened low-level features are then passed through subsequent layers. It will be understood that using the Bi-LSTMs to capture low-level features in the sentence allows collection of information that could be used to infer more complex ones. These and other features of the disclosed embodiments are described further with reference to FIGS. 1-7 below.
[031] Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments. It is intended that the following detailed description be considered as exemplary only, with the true scope being indicated by the following claims.
[032] Referring now to the drawings, and more particularly to FIG. 1 through 7, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
[033] FIG. 1 illustrates an example network implementation 100 of a system 102 for sequence labeling using a hierarchical capsule based neural network, in accordance with an example embodiment. The hierarchical structure of the disclosed hierarchical capsule based neural network aids in dissecting an input structure of data (for example, sentences, abstracts, paragraphs, and so on) at different levels to allow flow of context between sentences in the data. In various embodiments disclosed herein, the hierarchical neural network includes Bi-LSTMs, a dilated convolution operation, capsules and a Conditional Random Field (CRF) to understand the discourse and/or abstract structure of the data and predict the next probable label by using the label history.
[034] In an embodiment, the system 102 employs the hierarchical capsule based neural network for the purpose of sequence labeling. In an embodiment, the hierarchical capsule based neural network includes a sentence encoding layer, a document encoding layer, a fully connected layer and a conditional random fields (CRF) layer. The sentence encoding layer includes a word embedding layer, a feature extraction layer including a first plurality of Bi-LSTM layers, a primary capsule layer, and convolutional capsule layers. The document encoding layer includes a second plurality of Bi-LSTM layers.
[035] The word embedding layer obtains a fixed-size vector representation of each word of a sentence. The feature extraction layer composed of the first plurality of Bi-LSTMs encodes the whole sentence, and a primary and a convolutional capsule layer then extract the high-level features from the sentence. All the sentence encodings within a dialogue or an abstract are then stacked up together and are passed through the second plurality of Bi-LSTMs (in a second Bi-LSTM layer) to squeeze out the contextual information from the sentences. Thereafter, the fully connected layer calculates likelihood scores and finally, the CRF layer obtains an optimized label sequence for the sentences of the dialogue or the abstract. The architecture of the hierarchical capsule based neural network for sequence labeling is described in detail with reference to FIGS. 1-7.
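Purely by way of illustration, the flow described above may be sketched as follows. PyTorch is assumed; the class name, dimensions, random inputs, and the mean-pooling stand-in for the capsule layers of FIGS. 4-5 are hypothetical placeholders, not the exact implementation.

    # Illustrative sketch only: the capsule layers are abstracted into simple
    # mean pooling so that the hierarchical flow stays visible.
    import torch
    import torch.nn as nn

    class HierarchicalSequenceLabeler(nn.Module):
        def __init__(self, vocab_size, d_word=300, d_sen=200, d_doc=200, num_labels=42):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_word)                  # word embedding layer
            self.word_lstm = nn.LSTM(d_word, d_sen, batch_first=True,
                                     bidirectional=True)                   # feature extraction layer
            self.doc_lstm = nn.LSTM(2 * d_sen, d_doc, batch_first=True,
                                    bidirectional=True)                    # document encoding layer
            self.scorer = nn.Linear(2 * d_doc, num_labels)                 # fully connected layer
            # CRF transition matrix T; the CRF scoring/decoding step is omitted here
            self.transitions = nn.Parameter(torch.zeros(num_labels, num_labels))

        def forward(self, docs):
            # docs: LongTensor of word ids, shape (num_sentences, max_words)
            ctx, _ = self.word_lstm(self.embed(docs))       # per-word context vectors C
            sent_vecs = ctx.mean(dim=1)                     # placeholder for the capsule-based sentence vector
            hidden, _ = self.doc_lstm(sent_vecs.unsqueeze(0))   # contextual info across sentences
            return self.scorer(hidden.squeeze(0))           # likelihood scores o per sentence

    model = HierarchicalSequenceLabeler(vocab_size=10000)
    print(model(torch.randint(0, 10000, (9, 12))).shape)    # torch.Size([9, 42])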
[036] Although the present disclosure is explained considering that the system 102 is implemented on a server, it may be understood that the system 102 may also be implemented in a variety of computing systems 104, such as a laptop computer, a desktop computer, a notebook, a workstation, a cloud-based computing environment and the like. It will be understood that the system 102 may be accessed through one or more devices 106-1, 106-2... 106-N, collectively referred to as devices 106 hereinafter, or applications residing on the devices 106. Examples of the devices 106 may include, but are not limited to, a portable computer, a personal digital assistant, a handheld device, a Smartphone, a tablet computer, a workstation and the like. The devices 106 are communicatively coupled to the system 102 through a network 108.
[037] In an embodiment, the network 108 may be a wireless or a wired network, or a combination thereof. In an example, the network 108 can be implemented as a computer network, as one of the different types of networks, such as a virtual private network (VPN), intranet, local area network (LAN), wide area network (WAN), the internet, and such. The network 108 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and Wireless Application Protocol (WAP), to communicate with each other. Further, the network 108 may include a variety of network devices, including routers, bridges, servers, computing devices, and storage devices. The network devices within the network 108 may interact with the system 102 through communication links.
[038] As discussed above, the system 102 may be implemented in a computing device 104, such as a hand-held device, a laptop or other portable computer, a tablet computer, a mobile phone, a PDA, a smartphone, and a desktop computer. The system 102 may also be implemented in a workstation, a mainframe computer, a server, and a network server. In an embodiment, the system 102 may be coupled to a data repository, for example, a repository 112. The repository 112 may store data processed, received, and generated by the system 102. In an alternate embodiment, the system 102 may include the data repository 112.
[039] The network environment 100 supports various connectivity options such as BLUETOOTH®, USB, ZigBee and other cellular services. The network environment enables connection of devices 106, such as a Smartphone, with the server 104, and accordingly with the database 112, using any communication link including the Internet, WAN, MAN, and so on. In an exemplary embodiment, the system 102 is implemented to operate as a stand-alone device. In another embodiment, the system 102 may be implemented to work as a loosely coupled device to a smart computing environment. The components and functionalities of the system 102 are described further in detail with reference to FIGS. 2A-7.
[040] Referring collectively to FIGS. 2A-6, components and functionalities of the system 102 for sequence labeling using a hierarchical capsule based NN are described in accordance with an example embodiment. For example, FIGS. 2A-2B illustrate a flow diagram for a method for sequence labeling using a hierarchical capsule based NN, in accordance with an example embodiment of the present disclosure. FIG. 3 illustrates an example block diagram of the hierarchical capsule based NN, in accordance with an example embodiment of the present disclosure. FIG. 4 illustrates a sentence encoder of the hierarchical capsule based NN of FIG. 3, in accordance with an example embodiment of the present disclosure. FIG. 5 illustrates a capsule layer of the sentence encoder of the hierarchical capsule based NN of FIG. 3. FIG. 6 illustrates a document encoder and a CRF layer of the hierarchical capsule based NN of FIG. 3, in accordance with an example embodiment of the present disclosure.
[041] Mathematically, sequence labeling can be described as below:
Given an evidence E = (e_1, e_2, ..., e_n) about a particular event (for example, previous utterances in a conversation), a set of class sequence O = (o_1, o_2, ..., o_n) is to be determined that has the highest posterior probability P(O|E) given the evidence E.
[042] Expanding the formula and applying Bayes' rule,

P(O|E) = \frac{P(E|O) \, P(O)}{P(E)}

[043] Herein, P(O) represents the prior probability of the class sequence, and P(E|O) is the likelihood of O given the evidence E; the denominator P(E) is common across all the calculations and can be ignored. An example demonstrating evidences (E) and a label sequence (O) is shown in Table I below:

Table I: Example of evidences (E) and label sequence (O)

Conversation Evidence (E) | Label (O)
Hi, good morning are you | Greeting
I’m fine, thank you. How are you | Greeting
I’m fine too. | Thanking
Can I ask you a question | Y/N Question
sure | Yes answer
When was our deal concluded | Question
Last Thursday | Answer
Oh, okay. Thanks | Thanking
Sure, no problem | Acknowledgment

[044] As is illustrated in Table I, the conversation includes a plurality of sentences such as "Hi, good morning are you", "I’m fine, thank you. How are you", "I’m fine too", "Can I ask you a question", and so on. The disclosed hierarchical capsule based neural network is configured to predict labels such as Greeting, Thanking, Y/N Question, and so on for the sequence of sentences of the conversation. The method for sequence labeling is described further below.

[045] At 202 of method 200, a hierarchical capsule based neural network for sequence labeling is employed. A hierarchical capsule based neural network 300 (illustrated with reference to FIG. 3) includes a sentence encoding layer 302, a document encoding layer 304, a feed forward layer 306 and a CRF layer 308. For the sake of brevity of description, the term ‘sentence encoding layer’ may be used interchangeably with the term ‘sentence encoder’. The sentence encoding layer 302 (illustrated with reference to FIG. 4) includes a word embedding layer 402, a feature extraction layer 404, and a primary capsule layer 406. The document encoding layer 304 (including a second Bi-LSTM layer), the fully connected layer 306 and the CRF layer 308 are illustrated with reference to FIG. 6.
[046] At 204, the method 200 includes determining an initial sentence representation for each of the plurality of sentences associated with the task (for example, the conversation). The initial sentence representation (410) is determined by the word embedding layer 402 of the sentence encoder 400. Each of the sentence representations includes a concatenated embedding vector. The concatenated embedding vector includes a fixed-length vector v_i corresponding to each word w_i of the sentence. The fixed-length vector corresponding to a sentence is representative of the lexical-semantics of the words of the sentence. In an embodiment, the fixed-length vector v_i corresponding to a word w_i is obtained from a ‘weight matrix’ W ∈ R^{d_word × |V|}, where d_word is the vector dimension and |V| is the vocabulary size. Each column j of the weight matrix corresponds to a vector W_j ∈ R^{d_word} for the jth word in the vocabulary. Each v_i represents the lexical-semantics of a word obtained after pre-training on a large corpus through unsupervised training.
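As a purely illustrative sketch of the lookup just described (plain Python/NumPy; the toy vocabulary, dimension and random values are assumptions, not pre-trained embeddings):

    import numpy as np

    d_word, vocab = 4, ["<unk>", "hi", "good", "morning"]
    W = np.random.randn(d_word, len(vocab))        # weight matrix W in R^(d_word x |V|)

    def word_vector(word):
        j = vocab.index(word) if word in vocab else 0
        return W[:, j]                             # column j holds the vector for the j-th word

    sentence = ["hi", "good", "morning"]
    v = [word_vector(w) for w in sentence]         # fixed-length vector v_i per word w_i
    concatenated = np.concatenate(v)               # concatenated embedding vector of the sentence
    print(concatenated.shape)                      # (len(sentence) * d_word,)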
[047] At 206 of method 200, the feature extraction layer 404 composed of a first plurality of Bi-LSTMs encodes contextual semantics between the words within a sentence of the plurality of sentences using the concatenated embedding vector associated with each sentence to obtain a plurality of context vectors. For example, for a sentence of length N, the concatenated embedding vectors may be [v_1; v_2; ...; v_N]. The contextual semantics between words within a sentence is encoded through it. The output from the feature extraction layer is C_i = [\overrightarrow{h_i}; \overleftarrow{h_i}] ∈ R^{2 × d_sen} for a word w_i, where \overrightarrow{h_i} and \overleftarrow{h_i} are the right and left contexts (hidden activations), and d_sen is the number of LSTM units. Finally, for all the N words,

C = [C_1, C_2, ..., C_N] ∈ R^{N × (2 × d_sen)}
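An illustrative sketch of this feature extraction step is given below (PyTorch assumed; N, d_word and d_sen are placeholder sizes and the input is random rather than real embeddings):

    import torch
    import torch.nn as nn

    d_word, d_sen, N = 300, 200, 12                     # N words in the sentence
    bilstm = nn.LSTM(d_word, d_sen, batch_first=True, bidirectional=True)

    embedded = torch.randn(1, N, d_word)                # concatenated embedding vectors [v_1, ..., v_N]
    C, _ = bilstm(embedded)                             # C_i = [h_fwd_i ; h_bwd_i] per word
    print(C.shape)                                      # torch.Size([1, 12, 400]) = (1, N, 2*d_sen)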
[048] At 208 of method 200, the plurality of context vectors (C_i) are convolved while skipping one or more context vectors in between to obtain a capsule map. The capsule map includes a plurality of contextual capsules associated with the plurality of sentences. The plurality of contextual capsules are connected in multiple levels using shared transformation matrices and a routing model, as is described below.
[049] The plurality of context vectors are convolved by the primary capsule layer of the sentence encoder 408. The capsules replace singular scalar outputs by local "capsules" which are vectors of highly informative outputs known as "instantiated parameters". In text processing, these instantiated parameters can be hypothesized as local orders of the words and their semantic representation. In an embodiment, to capture the semantics and cover a large part of a sentence, the primary capsule layer 408 includes a filter (or shared window) W^b which convolves with the adjacent context vectors (C_i, C_{i+1}, ...) as well as with distant context vectors C_{i+dr}, skipping a number of context vectors. The number (or count) of context vectors that are skipped may be referred to as the dilation rate (dr) (marked as label 406 in FIG. 4).
[050] For the context vectors C_i, a shared window with holes W^b ∈ R^{(2 × d_sen) × d}, where d is the capsule dimension, convolves with the C_i's separated at a distance of dr to cover a large part of the sentence. The shared window W^b multiplies the vectors in C with a stride of one to get a capsule p_i,

p_i = g(W^b C_i)

where g is a non-linear squash function.
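One plausible reading of this operation, sketched with a dilated 1-D convolution followed by the squash non-linearity (PyTorch assumed; the kernel size, dilation rate and capsule dimension are illustrative choices, not the exact settings of the disclosed network):

    import torch
    import torch.nn as nn

    def squash(x, dim=-1, eps=1e-8):
        # non-linear squash g: keeps short vectors near 0 and long vectors just under unit length
        n2 = (x ** 2).sum(dim=dim, keepdim=True)
        return (n2 / (1.0 + n2)) * x / torch.sqrt(n2 + eps)

    d_sen, d_caps, dr, N = 200, 16, 3, 12
    Wb = nn.Conv1d(2 * d_sen, d_caps, kernel_size=2, dilation=dr)   # shared window "with holes"

    C = torch.randn(1, N, 2 * d_sen)                 # context vectors from the Bi-LSTM
    p = Wb(C.transpose(1, 2)).transpose(1, 2)        # combines C_i with C_(i+dr), stride one
    capsules = squash(p)                             # primary contextual capsules p_i = g(Wb C_i)
    print(capsules.shape)                            # torch.Size([1, 9, 16]) = (1, N - dr, d_caps)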
[051] The non-linear squash activation keeps the smaller and the higher probabilities close to 0 and 1 respectively. After the convolution operation, a capsule feature map (P) is created:

P = [p_1, p_2, ..., p_C] ∈ R^{N × C × d}

stacked with a total of N × C d-dimensional capsules representing the contextual capsules.

[052] In an embodiment, an iterative dynamic routing model is used to introduce a coupling effect where the agreement between lower level capsules (layer l) and higher level capsules (layer l+1) is maintained. In an example scenario, if the number of contextual capsules with low level features at layer l is "m", and the number of contextual capsules at layer (l+1) is "n", then, for a capsule j at layer (l+1), the output vector can be computed by:

v_j = g\left(\sum_i c_{ij} \, \hat{u}_{j|i}\right); \quad \hat{u}_{j|i} = W^s u_i

where c_{ij} is the coupling coefficient between capsule i of layer l and capsule j of layer (l+1), determined by iterative dynamic routing, and W^s is the shared weight matrix between the layers l and l+1 (FIG. 5). In an embodiment, a softmax function is utilized for the computations. The softmax function is used over all the b's to determine the connection strength between the capsules. The coupling coefficients c_{ij} are calculated iteratively in 'r' rounds by:

c_{ij} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}
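The routing-by-agreement computation reconstructed above may be sketched as follows (PyTorch assumed; m, n, d and r are placeholder values and the child capsules and weights are random):

    import torch

    def squash(x, dim=-1, eps=1e-8):
        n2 = (x ** 2).sum(dim=dim, keepdim=True)
        return (n2 / (1.0 + n2)) * x / torch.sqrt(n2 + eps)

    m, n, d, r = 20, 10, 16, 3                     # child capsules, parent capsules, dim, routing rounds
    u = torch.randn(m, d)                          # lower-level (child) capsules at layer l
    Ws = torch.randn(n, d, d)                      # shared transformation matrices
    u_hat = torch.einsum("nij,mj->mni", Ws, u)     # prediction vectors u_hat_(j|i) = Ws u_i

    b = torch.zeros(m, n)                          # routing logits, initially equal
    for _ in range(r):
        c = torch.softmax(b, dim=1)                # coupling coefficients: softmax of b over parent capsules
        v = squash((c.unsqueeze(-1) * u_hat).sum(dim=0))   # parent outputs v_j = g(sum_i c_ij u_hat_(j|i))
        b = b + (u_hat * v.unsqueeze(0)).sum(dim=-1)       # agreement updates the logits
    print(v.shape)                                 # torch.Size([10, 16]) = (n, d)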
[053] The logits b_{ij}, which are initially the same, determine how strongly capsule j should be coupled with capsule i. Consequently, in spite of using RNNs, which are said to be biased toward the extremes, by combining the dilation operation shown in FIG. 6 inside the Bi-LSTM with dynamic routing between capsules, the disclosed system is able to focus on the middle of the sentence as well.
[054] At 210, the method 200 includes computing, by the Convolutional Capsule Layer 306, a final sentence representation for the plurality of sentences. In the Convolutional Capsule Layer 306, the capsules are connected to lower level capsules, which determine the child-parent relationship, by multiplying the shared transformation matrices followed by the routing algorithm. In an embodiment, the final sentence representation is calculated by determining the coupling strength between child-parent pair contextual capsules \hat{u}_{j|i}:

\hat{u}_{j|i} = W^s u_i

where u_i is the child capsule and W^s is the shared weight between capsules i and j.
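The flatten-and-transform step feeding the final routing pass may be sketched as below (PyTorch assumed; the capsule counts and dimensions are illustrative, and the mean over child predictions merely stands in for the routing pass shown earlier):

    import torch

    num_caps, d, d_sent = 9 * 20, 16, 16            # flattened capsule map, capsule dim, sentence capsule dim
    caps = torch.randn(num_caps, d)                 # capsules flattened into a single layer
    W_FC = torch.randn(1, d_sent, d)                # transformation matrix W_FC toward one parent capsule
    u_hat = torch.einsum("pij,cj->cpi", W_FC, caps) # child-parent predictions, to be routed as above
    s_k = u_hat.mean(dim=0).squeeze(0)              # stand-in for the routed final sentence representation s_k
    print(s_k.shape)                                # torch.Size([16]) = (d_sent,)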

[055] Finally, the coupling strength between the child-parent capsules is determined by the routing algorithm to produce the parent feature map. The capsules are then flattened out into a single layer and then multiplied by a transformation matrix W_FC, followed by the routing algorithm, to compute the final sentence representation (s_k).
[056] After getting all the sentence representations for a dialogue/utterance, the contextual information between the plurality of sentences is captured at 212. The contextual information between the plurality of sentences is captured by using the second Bi-LSTM layer (having the second plurality of Bi-LSTMs 304) that takes a sentence at every time step and produces a sequence of hidden state vectors [h_1, h_2, ..., h_M] corresponding to each of the plurality of sentences (M). At 214, the hidden state vectors are passed through the feed forward layer 306 to output vectors o ∈ R^a, where a is the total number of possible labels for a sentence. The output vectors o provide likelihood scores/probabilities for the possible labels for each sentence of the plurality of sentences.
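An illustrative sketch of the document-level encoding and scoring step (PyTorch assumed; M, d_sent, d_doc and a are placeholder sizes and the stacked sentence representations are random):

    import torch
    import torch.nn as nn

    M, d_sent, d_doc, a = 9, 16, 100, 42                 # sentences, sentence dim, LSTM units, label count
    doc_lstm = nn.LSTM(d_sent, d_doc, batch_first=True, bidirectional=True)
    feed_forward = nn.Linear(2 * d_doc, a)

    s = torch.randn(1, M, d_sent)                        # stacked sentence representations s_1..s_M
    h, _ = doc_lstm(s)                                   # hidden state vectors h_1..h_M
    o = feed_forward(h)                                  # o in R^a: likelihood scores per sentence
    print(o.shape)                                       # torch.Size([1, 9, 42])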
[057] At 216, the CRF layer 308 obtains an optimized label sequence for the plurality of sentences based at least on a sum of possible labels weighted by the likelihood scores. The CRF layer 308 adds some constraints for the final valid prediction labels (or possible labels). For example, in a natural question answering dialogue it would be natural to answer with a Yes-No-Answer if the responder has been asked a Yes-No-Question; similarly, in an abstract, first the Objective is clearly defined before moving on to its Solution. The CRF layer can induce constraints for such patterns to generate a final valid inference based on the training data. To model such dependencies, a transition matrix T ∈ R^{a × a} is used, where a is the number of possible labels. An entry T[a_i, a_j] corresponds to a weight score for transitioning from label i to label j. The score for a label sequence [y_1, y_2, ..., y_M] is calculated by the sum of the labels y_i weighted by the probabilities o_i computed in the previous layer (FIG. 6) and the transition scores of moving from label y_{i-1} to label y_i:

S(y_1, y_2, ..., y_M) = \sum_{i=1}^{M} o_i[y_i] + \sum_{i=2}^{M} T[y_{i-1}, y_i]
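A small worked version of this score, under the reconstruction above (plain PyTorch tensors; the sizes, random scores and candidate sequence are illustrative):

    import torch

    M, a = 5, 4                                   # sentences and possible labels
    o = torch.randn(M, a)                         # likelihood scores from the feed forward layer
    T = torch.randn(a, a)                         # transition matrix: T[i, j] = weight of moving i -> j
    y = [0, 2, 2, 1, 3]                           # a candidate label sequence y_1..y_M

    emission = sum(o[i, y[i]] for i in range(M))
    transition = sum(T[y[i - 1], y[i]] for i in range(1, M))
    score = emission + transition                 # S(y_1..y_M) as in the CRF layer
    print(float(score))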

[058] Finally, taking a softmax over all possible tag sequences yields a probability for the sequence [y_1, y_2, ..., y_M]:

P(y_1, y_2, ..., y_M) = \frac{\exp(S(y_1, y_2, ..., y_M))}{\sum_{\tilde{y} \in Y} \exp(S(\tilde{y}))}

Here, S(\tilde{y}) is the score of a possible sequence \tilde{y} in Y.
[059] During training, the log-probability of the correct labels provided in the training data is maximized, and while decoding, the output sequence can be predicted with the maximum score calculated by the Viterbi Algorithm. An example scenario of sequence labeling by using the proposed capsule based hierarchical neural network is described further below.
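A minimal Viterbi decoding sketch consistent with the score above (plain Python with PyTorch tensors; the sizes and random values are illustrative):

    import torch

    def viterbi(o, T):
        # o: (M, a) likelihood scores, T: (a, a) transition weights
        M, a = o.shape
        score = o[0].clone()                          # best score ending in each label at step 0
        back = []
        for i in range(1, M):
            total = score.unsqueeze(1) + T + o[i].unsqueeze(0)   # (previous label, current label)
            score, idx = total.max(dim=0)
            back.append(idx)
        path = [int(score.argmax())]
        for idx in reversed(back):
            path.append(int(idx[path[-1]]))
        return list(reversed(path)), float(score.max())

    labels, best = viterbi(torch.randn(5, 4), torch.randn(4, 4))
    print(labels, best)                               # maximum-score label sequence and its score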
[060] FIG. 7 is a block diagram of an exemplary computer system 701 for implementing embodiments consistent with the present disclosure. The computer system 701 may be implemented alone or in combination with components of the system 102 (FIG. 1). Variations of computer system 701 may be used for implementing the devices included in this disclosure. Computer system 701 may comprise a central processing unit ("CPU" or "hardware processor") 702. The hardware processor 702 may comprise at least one data processor for executing program components for executing user- or system-generated requests. The processor may include specialized processing units such as integrated system (bus) controllers, memory management control units, floating point units, graphics processing units, digital signal processing units, etc. The processor may include a microprocessor, such as AMD Athlon™, Duron™ or Opteron™, ARM's application, embedded or secure processors, IBM PowerPC™, Intel's Core™, Itanium™, Xeon™, Celeron™ or other line of processors, etc. The processor 702 may be implemented using mainframe, distributed processor, multi-core, parallel, grid, or other architectures. Some embodiments may utilize embedded technologies like application specific integrated circuits (ASICs), digital signal processors (DSPs), Field Programmable Gate Arrays (FPGAs), etc.
[061] Processor 702 may be disposed in communication with one or more input/output (I/O) devices via I/O interface 703. The I/O interface 703 may employ communication protocols/methods such as, without limitation, audio, analog, digital, monoaural, RCA, stereo, IEEE-1394, serial bus, universal serial bus (USB), infrared, PS/2, BNC, coaxial, component, composite, digital visual interface (DVI), high-definition multimedia interface (HDMI), RF antennas, S-Video, VGA, IEEE 802.11 a/b/g/n/x, Bluetooth, cellular (e.g., code-division multiple access (CDMA), high-speed packet access (HSPA+), global system for mobile communications (GSM), long-term evolution (LTE), WiMax, or the like), etc.
[062] Using the I/O interface 703, the computer system 701 may communicate with one or more I/O devices. For example, the input device 704 may be an antenna, keyboard, mouse, joystick, (infrared) remote control, camera, card reader, fax machine, dongle, biometric reader, microphone, touch screen, touchpad, trackball, sensor (e.g., accelerometer, light sensor, GPS, gyroscope, proximity sensor, or the like), stylus, scanner, storage device, transceiver, video device/source, visors, etc.
[063] Output device 705 may be a printer, fax machine, video display (e.g., cathode ray tube (CRT), liquid crystal display (LCD), light-emitting diode (LED), plasma, or the like), audio speaker, etc. In some embodiments, a transceiver 706 may be disposed in connection with the processor 702. The transceiver may facilitate various types of wireless transmission or reception. For example, the transceiver may include an antenna operatively connected to a transceiver chip (e.g., Texas Instruments WiLink WL1283, Broadcom BCM4750IUB8, Infineon Technologies X-Gold 618-PMB9800, or the like), providing IEEE 802.11a/b/g/n, Bluetooth, FM, global positioning system (GPS), 2G/3G HSDPA/HSUPA communications, etc.
[064] In some embodiments, the processor 702 may be disposed in communication with a communication network 708 via a network interface 707. The network interface 707 may communicate with the communication network 708. The network interface may employ connection protocols including, without limitation, direct connect, Ethernet (e.g., twisted pair 10/100/1000 Base T), transmission control protocol/internet protocol (TCP/IP), token ring, IEEE 802.11a/b/g/n/x, etc. The communication network 708 may include, without limitation, a direct interconnection, local area network (LAN), wide area network (WAN), wireless network (e.g., using Wireless Application Protocol), the Internet, etc. Using the network interface 707 and the communication network 708, the computer system 701 may communicate with devices 709 and 710. These devices may include, without limitation, personal computer(s), server(s), fax machines, printers, scanners, various mobile devices such as cellular telephones, smartphones (e.g., Apple iPhone, Blackberry, Android-based phones, etc.), tablet computers, eBook readers (Amazon Kindle, Nook, etc.), laptop computers, notebooks, gaming consoles (Microsoft Xbox, Nintendo DS, Sony PlayStation, etc.), or the like. In some embodiments, the computer system 701 may itself embody one or more of these devices.
[065] In some embodiments, the processor 702 may be disposed in communication with one or more memory devices (e.g., RAM 713, ROM 714, etc.) via a storage interface 712. The storage interface may connect to memory devices including, without limitation, memory drives, removable disc drives, etc., employing connection protocols such as serial advanced technology attachment (SATA), integrated drive electronics (IDE), IEEE-1394, universal serial bus (USB), fiber channel, small computer systems interface (SCSI), etc. The memory drives may further include a drum, magnetic disc drive, magneto-optical drive, optical drive, redundant array of independent discs (RAID), solid-state memory devices, solid-state drives, etc. Variations of memory devices may be used for implementing, for example, any databases utilized in this disclosure.
[066] The memory devices may store a collection of program or database components, including, without limitation, an operating system 716, user interface application 717, user/application data 718 (e.g., any data variables or data records discussed in this disclosure), etc. The operating system 716 may facilitate resource management and operation of the computer system 701. Examples of operating systems include, without limitation, Apple Macintosh OS X, Unix, Unix-like system distributions (e.g., Berkeley Software Distribution (BSD), FreeBSD, NetBSD, OpenBSD, etc.), Linux distributions (e.g., Red Hat, Ubuntu, Kubuntu, etc.), IBM OS/2, Microsoft Windows (XP, Vista/7/8, etc.), Apple iOS, Google Android, Blackberry OS, or the like. User interface 717 may facilitate display, execution, interaction, manipulation, or operation of program components through textual or graphical facilities. For example, user interfaces may provide computer interaction interface elements on a display system operatively connected to the computer system 701, such as cursors, icons, check boxes, menus, scrollers, windows, widgets, etc. Graphical user interfaces (GUIs) may be employed, including, without limitation, Apple Macintosh operating systems' Aqua, IBM OS/2, Microsoft Windows (e.g., Aero, Metro, etc.), Unix X-Windows, web interface libraries (e.g., ActiveX, Java, Javascript, AJAX, HTML, Adobe Flash, etc.), or the like.
[067] In some embodiments, computer system 701 may store user/application data 718, such as the data, variables, records, etc. as described in this disclosure. Such databases may be implemented as fault-tolerant, relational, scalable, secure databases such as Oracle or Sybase. Alternatively, such databases may be implemented using standardized data structures, such as an array, hash, linked list, structured text file (e.g., XML), table, or as object-oriented databases (e.g., using ObjectStore, Poet, Zope, etc.). Such databases may be consolidated or distributed, sometimes among various computer systems discussed above. It is to be understood that the structure and operation of any computer or database component may be combined, consolidated, or distributed in any working combination.
[068] Additionally, in some embodiments, the server, messaging and instructions transmitted or received may emanate from hardware, including operating system, and program code (i.e., application code) residing in a cloud implementation. Further, it should be noted that one or more of the systems and methods provided herein may be suitable for cloud-based implementation. For example, in some embodiments, some or all of the data used in the disclosed methods may be sourced from or stored on any cloud computing platform.
[069] In the example scenario, the hierarchical capsule based neural network, for example the NN 300, is trained using datasets, such as the data sets SwDA Corpus™, PUBMED™, and NICTA-PIBOSO™, and experimental results were obtained, as described herein.
[070] SwDA Corpus™: The Switchboard Corpus contains 1155 human conversations recorded over telephone communication. The Switchboard corpus has a total of more than 2M utterances initially divided into about 220 tags. The SwDA coders' manual suggested clustering of these tags into 42 different tags. These 42 tags aimed at facilitating machine learning over the Dialogue Act (DA) annotated part of the Switchboard corpus. The hierarchical tag-set (220 tags) was further compressed into single atomic labels, which capture the individual utterance's function and also the hierarchical information as captured by the initially designed DAMSL schema (Dialogue Act Markup in Several Layers, which consists of 42 tags). One of the tags, '+', has been treated differently: some approaches concatenate two consecutive user utterances when a '+' tag is present, while some others simply ignore the '+' tag. In the experiments herein, both approaches were adopted, and the results were reported after removing the '+' tag and concatenating the current user's last utterance with the current one. The SwDA corpus has a problem of class distribution, which ranges from a maximum of 36% to a minimum of < 0.1% of the full SwDA corpus (Table II).
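Purely as an illustration of the '+' handling described above (the (speaker, tag, text) tuple format and the toy utterances are assumptions, not the actual corpus format):

    def merge_plus_utterances(utterances):
        """Concatenate a '+' continuation with the same speaker's last utterance."""
        merged = []
        last_by_speaker = {}
        for speaker, tag, text in utterances:
            if tag == "+" and speaker in last_by_speaker:
                idx = last_by_speaker[speaker]
                merged[idx] = (speaker, merged[idx][1], merged[idx][2] + " " + text)
            else:
                last_by_speaker[speaker] = len(merged)
                merged.append((speaker, tag, text))
        return merged

    dialogue = [("A", "sd", "I think the deal"), ("B", "b", "uh-huh"), ("A", "+", "closed last Thursday")]
    print(merge_plus_utterances(dialogue))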
[071] One of the early results of experiments on the Switchboard corpus reported a mixture of neural network models and HMMs. However, there was no clear distinction of the train-dev-test split. To maintain consistency with the results that follow, a common ground split is utilized.

Table II: SwDA class distribution

Dialogue-Act | % of Corpus | Dialogue-Act | % of Corpus
statement-non-opinion | 36% | Collaborative completion | 0.40%
acknowledge | 19% | repeat-phrase | 0.30%
statement-opinion | 13% | open-question | 0.30%
abandoned/uninterpretable | 6% | rhetorical-questions | 0.20%
agree/accept | 5% | hold before answer | 0.20%
Appreciation | 2% | reject | 0.20%
yes-no-question | 2% | negative non-no answers | 0.10%
non-verbal | 2% | signal-non-understanding | 0.10%
yes answers | 1% | other answers | 0.10%
conventional-closing | 1% | conventional-opening | 0.10%
wh-question | 1% | or-clause | 0.10%
no answers | 1% | dispreferred answers | 0.10%
response acknowledgement | 1% | 3rd-party-talk | 0.10%
hedge | 1% | offers, options commits | 0.10%
declarative yes-no-question | 1% | self-talk | 0.10%
other | 1% | down-player | 0.10%
back-channel in question form | 1% | maybe/accept part | < 0.1%
quotation | 0.50% | tag-question | < 0.1%
summarize/reformulate | 0.50% | declarative wh-question | < 0.1%
affirmative non-yes answers | 0.40% | thanking | < 0.1%
action-directive | 0.40% | Apology | < 0.1%

[072] PUBMED: The dataset is a collection of sentences obtained from medical abstracts of randomized controlled trials (RCTs). It is derived from the PubMed™ database of biomedical literature and contains approximately 200,000 abstracts of randomized controlled trials, and a total of up to 2.3M sentences. For training the model, the PUBMED dataset, which is the largest medical abstract dataset, was released in two configurations, with 20K and 200K abstracts of training data, and with each sentence labeled with one of the classes: background, objective, method, result, and conclusion. The same train-dev-test splits were used. An example abstract (PMID:18554189) is shown in Table III.
Table III: An example abstract (PMID:18554189)

Example Label Example Sentence
OBJECTIVE To evaluate human pulp tissue response following direct pulp capping with a self-etching adhesive : Clearfil SE BOND ( SB ) .
METHODS Forty-five sound teeth from 20 subjects were used.
METHODS Forty-one teeth had their pulp mechanically exposed at the base of a Class 1 cavity preparation and were divided into two groups : group 1 , teeth were capped with SB ( n = 21 ) , and group 2 , with calcium hydroxide cement ( CH ) ( n = 20 ).
METHODS Four teeth were maintained intact as an untreated control group.
METHODS After 7, 30 and 90 days, respectively, 15 teeth were extracted and processed for light microscopic examination
METHODS Pulp healing and bacterial microleakage were assessed by haematoxylin and eosin , Masson trichrome and Brown and Brenn stain techniques
METHODS The data were analysed statistically by using the Mann-Whitney U test .
RESULTS After the 7-day observation period , the inflammatory reaction in the SB group was slight and significantly less severe than that of the CH group ( P <0.05 )
RESULTS After the 30 - and 90-day observation periods , the inflammatory reaction was slight in both groups , but specimens with dentine bridge formation in the SB group were significantly less common than those in the CH group ( P <0.05 )
RESULTS Clearfil SB had good biocompatibility with human pulp tissue , but its ability to induce reparative dentine was significantly lower than that of calcium hydroxide

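As an illustration of how such labeled abstracts may be consumed, the sketch below parses the plain-text layout commonly used to distribute the PubMed RCT data (a '###<PMID>' line starting each abstract, followed by one 'LABEL<TAB>sentence' line per sentence). This layout and the function name are assumptions made for this example, not a description of the authors' pipeline.

```python
def read_pubmed_rct(path):
    """Yield (pmid, [(label, sentence), ...]) per abstract.

    Assumes the common PubMed RCT text layout: a '###<PMID>' line starts an
    abstract, each sentence follows as 'LABEL<TAB>sentence', and a blank line
    separates abstracts.
    """
    pmid, sentences = None, []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("###"):
                # New abstract begins; emit the previous one, if any.
                if pmid is not None and sentences:
                    yield pmid, sentences
                pmid, sentences = line[3:], []
            elif line.strip():
                label, sent = line.split("\t", 1)
                sentences.append((label, sent))
        if pmid is not None and sentences:
            yield pmid, sentences
```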
[073] NICTA-PIBOSO: This dataset was released for the ALTA 2012 shared task, whose objective was to build classifiers for automatically labeling sentences with pre-defined categories. The dataset was collected from the domain of Evidence Based Medicine (EBM), and each sentence is labeled with one of the classes: Population, Intervention, Background, Outcome, Study-Design and Other. The dataset has about 1000 abstracts and 11,616 sentences, and can be used for the sequence labeling task. The class distribution for all these scientific abstract datasets is given in Table IV.
Table IV: Class distribution for scientific abstract datasets

Datasets        Classes    Train    Validation    Test
PUBMED 20k      5          15k      2.5k          2.5k
PUBMED 200k     5          190k     2.5k          2.5k
NICTA           6          720      80            200

[074] Training Details: For training the proposed capsule based hierarchical neural network model, the training data as specified in the papers corresponding to the different datasets were used. To initialize the words, 300-dimension GloVe embeddings were used for SwDA and 200-dimension PubMed word2vec embeddings for the scientific abstracts. Bi-LSTMs (for example, the first plurality and the second plurality of Bi-LSTMs) were used for the sentence and document encoders in all the experiments; these take into account the sentences preceding and following the current utterance, while the baseline for SwDA does not. For NICTA, the number of LSTM units used in the sentence and document encoders was kept at 300 (150 each for the left and right contexts); similarly, for PUBMED and SwDA these numbers were kept at 400 and 500 units respectively. The dilation rate dr was in the range [2, 5]. A total of 20 capsules, each with dimension d of 16, were used for all the experiments. Also, the same routing value r of 3 was used across all the experiments, because using a larger value may result in overfitting on the test data.
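The reported settings and the primary capsule operation can be summarized in a short sketch. The code below is an illustrative PyTorch rendering, not the original implementation: the module and argument names are our own, and only the hyperparameter values stated above are taken from the experiments.

```python
import torch
import torch.nn as nn

# Reported hyperparameters (from the experiments above).
CONFIG = {
    "embedding_dim": {"SwDA": 300, "PUBMED": 200, "NICTA": 200},  # GloVe / PubMed word2vec
    "encoder_units": {"NICTA": 300, "PUBMED": 400, "SwDA": 500},  # Bi-LSTM units per encoder
    "dilation_rate": (2, 5),    # dr explored in the range [2, 5]
    "num_capsules": 20,
    "capsule_dim": 16,          # d
    "routing_iterations": 3,    # r; larger values risked overfitting
}

def squash(s, dim=-1, eps=1e-8):
    """Non-linear squash g(s) = (||s||^2 / (1 + ||s||^2)) * s / ||s||."""
    sq_norm = (s * s).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class DilatedPrimaryCapsules(nn.Module):
    """Illustrative primary capsule layer: convolve over the context vectors Ci
    while skipping (dilation - 1) vectors in between (a dilated 1-D convolution),
    then reshape and squash into capsules pi = g(Wb Ci)."""
    def __init__(self, context_dim, num_capsules=20, capsule_dim=16, dilation=2):
        super().__init__()
        self.num_capsules, self.capsule_dim = num_capsules, capsule_dim
        self.conv = nn.Conv1d(context_dim, num_capsules * capsule_dim,
                              kernel_size=3, stride=1, dilation=dilation,
                              padding=dilation)

    def forward(self, context):                      # context: (batch, seq_len, context_dim)
        u = self.conv(context.transpose(1, 2))       # (batch, caps*dim, seq_len)
        u = u.transpose(1, 2)                        # (batch, seq_len, caps*dim)
        u = u.reshape(u.size(0), u.size(1), self.num_capsules, self.capsule_dim)
        return squash(u)                             # capsule map per position
```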
[075] The performance of the proposed architecture is evaluated on these four datasets (PUBMED has two parts, 20k and 200k), and the numbers are reported in Table V. On the SwDA corpus the model outperformed the state-of-the-art by about 2%. For the remaining three scientific abstract datasets, PUBMED 20k, PUBMED 200k and NICTA, the results achieved are comparable with published state-of-the-art models.
Table V: Performance on 4 publicly available datasets

Datasets        Best Reported Accuracy (%)    Proposed System (%)
SwDA            75.8                          77.9
PUBMED 20k      92.6                          92.8
PUBMED 200k     93.9                          94.07
NICTA           84.7                          85.4

[076] As is seen from the above, the proposed capsule based hierarchical neural network model for sequence labeling has achieved state-of-the-art results on datasets from two different domains (Dialogue Systems and Scientific Abstract Classification). Through these results, the efficacy of the proposed system and method on these complex NLP tasks has been demonstrated, by allowing the model to capture a representation of a sentence or a document that is enriched with the contextual information.
[077] The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
[078] Various embodiments disclosed herein provide a method and system for sequence labeling using a capsule based hierarchical neural network. The proposed system uses a layer of Bi-LSTMs to first obtain a sentence representation and then performs dilation on it, along with applying capsules, to get an enriched sentence representation. Finally, after obtaining all such sentence representations from the multiple sentences of a task (such as an abstract or a conversation), the system utilizes a CRF to get the optimum sequence labeling. The embodiments of the present disclosure utilize a hierarchical structure of the capsule based neural network that aids in dissecting the input structure at different levels to allow the flow of context between sentences. Said hierarchical structure facilitates understanding the discourse/abstract structure and predicting the next probable label by using the label history.
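For concreteness, the sketch below shows generic Viterbi decoding of the kind a CRF layer performs over the likelihood (emission) scores from the fully connected layer and the label-to-label transition scores. It is a minimal illustration assuming NumPy, not the authors' CRF implementation.

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring label sequence.

    emissions:   (num_sentences, num_labels) likelihood scores per sentence.
    transitions: (num_labels, num_labels) score of moving from label i to label j.
    """
    n, k = emissions.shape
    score = emissions[0].copy()                  # best score ending in each label
    backptr = np.zeros((n, k), dtype=int)
    for t in range(1, n):
        # Score of extending every previous label to every current label.
        total = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    # Trace back the best path from the highest-scoring final label.
    best = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]

# Example: 3 sentences, 2 labels
# viterbi_decode(np.array([[2., 0.], [0., 1.], [1., 0.]]),
#                np.array([[0.5, -0.5], [-0.5, 0.5]]))
```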
[079] It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs.
[080] The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[081] The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
[082] Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
[083] It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.

We Claim:
1. A processor implemented method, comprising:

employing, via one or more hardware processors, a hierarchical capsule based neural network for sequence labeling, the hierarchical capsule based neural network comprising a sentence encoding layer, a document encoding layer, a fully connected layer and a conditional random fields (CRF) layer, the sentence encoding layer comprising a word embedding layer, a feature extraction layer composed of a first plurality of Bi-LSTMs, a primary capsule layer, and convolutional capsule layers, and the document encoding layer comprising a second plurality of Bi-LSTMs, wherein employing comprises:
determining, by the word embedding layer, initial sentence representations for a plurality of sentences associated with a task, each of the initial sentence representations comprising a concatenated embedding vector;
encoding, by the first plurality of Bi-LSTMs, contextual semantics between words within a sentence of the plurality of sentences using the associated concatenated embedding vector to obtain a plurality of context vectors (Ci);
convolving with the plurality of context vectors (Ci) while skipping one or more context vectors in between, by the primary capsule layer comprising a filter, to obtain a capsule map comprising a plurality of contextual capsules associated with the plurality of sentences, the plurality of contextual capsules connected in multiple levels using shared transformation matrices and a routing model;
computing, by the convolutional capsule layer, a final sentence representation for the plurality of sentences by determining coupling strength between child-parent pair contextual capsules of the plurality of contextual capsules connected in the multiple levels;
obtaining, by a second plurality of Bi-LSTMs, contextual information between the plurality of sentences using the final sentence representations associated with the plurality of sentences, wherein the second plurality of Bi-LSTMs takes sentences at multiple time steps as input and produces a sequence of hidden state vectors corresponding to each of the plurality of sentences;
passing the hidden state vectors through the feed forward layer to output likelihood scores for probable labels for each statement of the plurality of statements; and
determining optimized label sequence for the plurality of sentences by the CRF layer based at least on a sum of probable labels weighted by the likelihood scores.
2. The method as claimed in claim 1, wherein the concatenated embedding vector comprises a fixed-length vector corresponding to each word of the plurality of sentences, a fixed-length vector corresponding to a sentence being representative of lexical-semantics of words of the sentence.

3. The method as claimed in claim 1, wherein a number of the one or more vectors is a dilation rate (dr).

4. The method as claimed in claim 1, wherein a context vector of the plurality of context vectors associated with a word comprises a right context and a left context between the word and adjacent words.

5. The method as claimed in claim 1, wherein the filter (Wb) multiplies the context vectors (Ci) with a stride of one to obtain a capsule pi,

where pi = g(WbCi), and
g is a non-linear squash function.
6. The method as claimed in claim 1, wherein determining the optimized label sequence comprises calculating, by the CRF layer, a probability score for the label sequence associated with the plurality of sentences based on the sum of possible labels weighted by the likelihood scores and transition scores of moving from one label to another label.

7. A system (701) for sequence labeling comprising:

one or more memories (704); and
one or more hardware processors (702), the one or more memories (704) coupled to the one or more hardware processors (702), wherein the one or more hardware processors (702) are configured to execute programmed instructions stored in the one or more memories (704) to:
employ a hierarchical capsule based neural network for sequence labeling, the hierarchical capsule based neural network comprising a sentence encoding layer, a document encoding layer, a fully connected layer and a conditional random fields (CRF) layer, the sentence encoding layer comprising a word embedding layer, a feature extraction layer composed of a first plurality of Bi-LSTMs, a primary capsule layer, and convolutional capsule layers, and the document encoding layer comprising a second plurality of Bi-LSTMs, wherein to employ the hierarchical capsule based neural network, the one or more hardware processors are configured by the instructions to:
determine, by the word embedding layer, initial sentence representations for a plurality of sentences associated with a task, each of the initial sentence representations comprising a concatenated embedding vector;
encode, by the first plurality of Bi-LSTMs, contextual semantics between words within a sentence of the plurality of sentences using the associated concatenated embedding vector to obtain a plurality of context vectors (Ci);
convolve with the plurality of context vectors (Ci) while skipping one or more context vectors in between, by the primary capsule layer comprising a filter, to obtain a capsule map comprising a plurality of contextual capsules associated with the plurality of sentences, the plurality of contextual capsules connected in multiple levels using shared transformation matrices and a routing model;
compute, by the convolutional capsule layer, a final sentence representation for the plurality of sentences by determining coupling strength between child-parent pair contextual capsules of the plurality of contextual capsules connected in the multiple levels;
obtain, by a second plurality of Bi-LSTMs, contextual information between the plurality of sentences using the final sentence representations associated with the plurality of sentences, wherein the second plurality of Bi-LSTMs takes sentences at multiple time steps as input and produces a sequence of hidden state vectors corresponding to each of the plurality of sentences;
pass the hidden state vectors through the feed forward layer to output likelihood scores for probable labels for each statement of the plurality of statements; and
determine optimized label sequence for the plurality of sentences by the CRF layer based at least on a sum of probable labels weighted by the likelihood scores.
8. The system as claimed in claim 7, wherein the concatenated embedding vector comprises a fixed-length vector corresponding to each word of the plurality of sentences, a fixed-length vector corresponding to a sentence being representative of lexical-semantics of words of the sentence.

9. The system as claimed in claim 7, wherein a number of the one or more vectors is a dilation rate (dr).

10. The system as claimed in claim 7, wherein a context vector of the plurality of context vectors associated with a word comprises a right context and a left context between the word and adjacent words.

11. The system as claimed in claim 7, wherein the filter (Wb) multiplies the context vectors (Ci) with a stride of one to obtain a capsule pi,

where pi = g(WbCi), and
g is a non-linear squash function.

12. The system as claimed in claim 7, wherein to determine the optimized label sequence, the one or more hardware processors are configured by the instructions to calculate, by the CRF layer, a probability score for the label sequence associated with the plurality of sentences based on the sum of possible labels weighted by the likelihood scores and transition scores of moving from one label to another label.

Documents

Application Documents

# Name Date
1 201921025909-STATEMENT OF UNDERTAKING (FORM 3) [28-06-2019(online)].pdf 2019-06-28
2 201921025909-REQUEST FOR EXAMINATION (FORM-18) [28-06-2019(online)].pdf 2019-06-28
3 201921025909-FORM 18 [28-06-2019(online)].pdf 2019-06-28
4 201921025909-FORM 1 [28-06-2019(online)].pdf 2019-06-28
5 201921025909-FIGURE OF ABSTRACT [28-06-2019(online)].jpg 2019-06-28
6 201921025909-DRAWINGS [28-06-2019(online)].pdf 2019-06-28
7 201921025909-DECLARATION OF INVENTORSHIP (FORM 5) [28-06-2019(online)].pdf 2019-06-28
8 201921025909-COMPLETE SPECIFICATION [28-06-2019(online)].pdf 2019-06-28
9 Abstract1.jpg 2019-10-05
10 201921025909-Proof of Right (MANDATORY) [12-11-2019(online)].pdf 2019-11-12
11 201921025909-FORM-26 [15-11-2019(online)].pdf 2019-11-15
12 201921025909-ORIGINAL UR 6(1A) FORM 1-141119.pdf 2019-11-16
13 201921025909-ORIGINAL UR 6(1A) FORM 26-181119.pdf 2019-11-20
14 201921025909-Request Letter-Correspondence [13-07-2020(online)].pdf 2020-07-13
15 201921025909-FER.pdf 2021-10-19
16 201921025909-CORRPONDENCE(IPO)-(CERTIFIED COPY OF WIPO DAS)-(21-7-2020).pdf 2021-10-19
17 201921025909-RELEVANT DOCUMENTS [27-10-2021(online)].pdf 2021-10-27
18 201921025909-PETITION UNDER RULE 137 [27-10-2021(online)].pdf 2021-10-27
19 201921025909-OTHERS [27-10-2021(online)].pdf 2021-10-27
20 201921025909-FORM 3 [27-10-2021(online)].pdf 2021-10-27
21 201921025909-FER_SER_REPLY [27-10-2021(online)].pdf 2021-10-27
22 201921025909-DRAWING [27-10-2021(online)].pdf 2021-10-27
23 201921025909-CORRESPONDENCE [27-10-2021(online)].pdf 2021-10-27
24 201921025909-COMPLETE SPECIFICATION [27-10-2021(online)].pdf 2021-10-27
25 201921025909-CLAIMS [27-10-2021(online)].pdf 2021-10-27
26 201921025909-ABSTRACT [27-10-2021(online)].pdf 2021-10-27
27 201921025909-US(14)-HearingNotice-(HearingDate-01-03-2024).pdf 2024-02-05
28 201921025909-FORM-26 [08-02-2024(online)].pdf 2024-02-08
29 201921025909-Correspondence to notify the Controller [16-02-2024(online)].pdf 2024-02-16
30 201921025909-Written submissions and relevant documents [12-03-2024(online)].pdf 2024-03-12
31 201921025909-PatentCertificate13-05-2024.pdf 2024-05-13
32 201921025909-IntimationOfGrant13-05-2024.pdf 2024-05-13

Search Strategy

1 search201912105909E_13-05-2021.pdf
2 D2E_13-05-2021.pdf
3 D1E_13-05-2021.pdf

ERegister / Renewals

3rd: 24 Jul 2024

From 28/06/2021 - To 28/06/2022

4th: 24 Jul 2024

From 28/06/2022 - To 28/06/2023

5th: 24 Jul 2024

From 28/06/2023 - To 28/06/2024

6th: 24 Jul 2024

From 28/06/2024 - To 28/06/2025

7th: 09 May 2025

From 28/06/2025 - To 28/06/2026