Abstract: A method (300) and system (100) of determining test procedures using large language models (LLMs) is disclosed. A processor (104) receives a plurality of domain-based documents corresponding to a test product to be tested. One or more user requirements are determined corresponding to at least one feature of the test product from one of the plurality of domain-based documents. A plurality of domain-specific keywords is determined from the one or more user requirements by prompting a first LLM based on a first prompt. Contextual data is determined by extracting a portion of text data from the plurality of domain-based documents. A knowledge dataset is determined by prompting a second LLM based on a second prompt and the contextual data. One or more test procedures for one or more test cases for testing the at least one feature of the test product are determined by prompting a third LLM based on a third prompt and the knowledge dataset. [To be published with FIG. 1]
1. A method (300) of determining test procedures using large language models (LLMs), the
method comprising:
receiving (302), by a processor (104), a plurality of domain-based documents
corresponding to a test product to be tested;
determining (304), by the processor (104), one or more user requirements
corresponding to at least one feature of the test product from one of the plurality of domain-based documents;
determining (306), by the processor (104), a plurality of domain-specific keywords
from the one or more user requirements by prompting a first LLM based on a first prompt;
determining (308), by the processor (104), contextual data by extracting a portion of
text data from the plurality of domain-based documents,
wherein the portion of text data comprises one or more of the plurality of
domain-specific keywords;
determining (310), by the processor (104), a knowledge dataset by prompting a second
LLM based on a second prompt and the contextual data,
wherein the knowledge dataset comprises a set of questions and a set of answers
to each of the set of questions based on the contextual data and the text data of the plurality of
domain-based documents, and
wherein the second prompt is engineered to prompt the second LLM to list the
set of questions and the set of answers to each of the set of questions based on the
contextual data and the text data of the plurality of domain-based documents; and
determining (314), by the processor (104), one or more test procedures for one or more
test cases for testing the at least one feature of the test product by prompting a third LLM based
on a third prompt and the knowledge dataset,
wherein the third prompt comprises the one or more test cases for testing the at
least one feature of the test product.
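The four-LLM pipeline recited in claim 1 can be sketched as plain functions. This is a minimal illustration only: the helper names, the prompt wording, and the deterministic `stub_llm` stand-in are assumptions for demonstration, not the claimed implementation; a real system would call an actual LLM in place of `stub_llm`.

```python
# Sketch of the claim-1 pipeline; all names and prompts are illustrative.

def extract_keywords(llm, first_prompt, requirement):
    """First LLM (step 306): domain-specific keywords from a user requirement."""
    return [k.strip() for k in llm(f"{first_prompt}\n{requirement}").split(",")]

def extract_context(keywords, documents):
    """Step 308: keep only text portions containing at least one keyword."""
    return [line for doc in documents for line in doc.splitlines()
            if any(k in line for k in keywords)]

def build_knowledge_dataset(llm, second_prompt, context):
    """Second LLM (step 310): question/answer pairs grounded in the context."""
    return llm(second_prompt + "\n" + "\n".join(context))

def derive_test_procedures(llm, third_prompt, knowledge):
    """Third LLM (step 314): test procedures for the test cases in the third prompt."""
    return llm(f"{third_prompt}\n{knowledge}")

def stub_llm(prompt):
    """Deterministic stand-in for a real model, for demonstration only."""
    if "keywords" in prompt:
        return "battery, charger"
    if "question" in prompt:
        return "Q: What is the cut-off voltage? A: 4.2 V"
    return "1. Connect the charger; 2. Verify the 4.2 V cut-off"

docs = ["The battery charger shall cut off at 4.2 V.\nRevision history."]
kws = extract_keywords(stub_llm, "List the domain keywords:", docs[0])
ctx = extract_context(kws, docs)
kb = build_knowledge_dataset(stub_llm, "List questions and answers:", ctx)
procs = derive_test_procedures(stub_llm, "Test case: verify the cut-off.", kb)
```

With the stub, the keywords select the one relevant sentence as contextual data, the second call yields a Q&A pair, and the third call yields a procedure string.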
2. The method (300) as claimed in claim 1, wherein the third prompt is determined as an output
generated by a fourth LLM queried based on a fourth prompt,
wherein the fourth prompt is engineered to output the one or more test procedures for the
one or more test cases based on the knowledge dataset and the one or more user requirements,
and
wherein the third prompt is engineered to prompt the third LLM to output the one or
more test procedures.
3. The method (300) as claimed in claim 2, wherein each of the one or more test procedures
comprises at least one pre-condition, a set of action steps to be performed for testing the test
product, and at least one expected outcome for the at least one pre-condition.
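The per-procedure structure of claim 3 can be modeled as a simple record type. The class and field names below are illustrative assumptions; only the three constituents (pre-conditions, action steps, expected outcomes) come from the claim.

```python
from dataclasses import dataclass

@dataclass
class TestProcedure:
    """One test procedure per claim 3; field names are illustrative."""
    pre_conditions: list      # at least one pre-condition
    action_steps: list        # ordered steps performed on the test product
    expected_outcomes: list   # at least one expected outcome per pre-condition

proc = TestProcedure(
    pre_conditions=["Battery discharged below 3.0 V"],
    action_steps=["Connect the charger", "Wait until charging stops"],
    expected_outcomes=["Charging terminates at 4.2 V"],
)
```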
4. The method (300) as claimed in claim 1, wherein the contextual data is determined based on
determination of a positional relation between each of the plurality of domain-specific
keywords and the text data,
wherein the positional relation is determined based on a lookup of each of the plurality
of domain-specific keywords in the text data of the plurality of domain-based documents.
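The claim-4 lookup of each keyword in the text data could be realized as a substring search that records the positions of each occurrence together with a surrounding window of text. The function name and the window radius below are assumptions for illustration.

```python
def keyword_windows(text, keywords, radius=40):
    """For each keyword, collect the text window around every occurrence
    found by plain substring lookup (one reading of the claim-4
    positional relation between keywords and text data)."""
    windows = {}
    for kw in keywords:
        spans, start = [], 0
        while (pos := text.find(kw, start)) != -1:
            spans.append(text[max(0, pos - radius): pos + len(kw) + radius])
            start = pos + len(kw)
        windows[kw] = spans
    return windows
```

A keyword with no occurrence simply maps to an empty list, so absent keywords contribute nothing to the contextual data.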
5. The method (300) as claimed in claim 1, wherein the first prompt is engineered to prompt
the first LLM to output the plurality of domain-specific keywords by determining a set of nouns
based on the one or more user requirements corresponding to the test product to be tested.
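Claim 5 says the first prompt asks the first LLM for the nouns in the user requirement. One way such a prompt template might read is sketched below; the exact wording is an assumption, not the patent's actual prompt.

```python
# Illustrative first-prompt template per claim 5; wording is an assumption.
FIRST_PROMPT = (
    "From the user requirement below, list the nouns that name "
    "domain-specific components, signals, or conditions, as a "
    "comma-separated list of keywords.\n\n"
    "Requirement: {requirement}"
)

filled = FIRST_PROMPT.format(
    requirement="The battery charger shall cut off at 4.2 V."
)
```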
6. The method (300) as claimed in claim 2, wherein the fourth prompt is engineered to prompt
the fourth LLM to list the one or more test cases corresponding to the one or more user
requirements.
7. A system (100) of determining test procedures using large language models (LLMs),
comprising:
a processor (104); and
a memory (106) communicably coupled to the processor (104), wherein the memory
(106) stores processor-executable instructions, which, on execution, cause the processor (104)
to:
receive a plurality of domain-based documents corresponding to a test product to be
tested;
determine one or more user requirements corresponding to at least one feature of the
test product from one of the plurality of domain-based documents;
determine a plurality of domain-specific keywords from the one or more user
requirements by prompting a first LLM based on a first prompt;
determine contextual data by extracting a portion of text data from the plurality of
domain-based documents,
wherein the portion of text data comprises one or more of the plurality of
domain-specific keywords;
determine a knowledge dataset by prompting a second LLM based on a second prompt
and the contextual data,
wherein the knowledge dataset comprises a set of questions and a set of answers
to each of the set of questions based on the contextual data and the text data of the
plurality of domain-based documents, and
wherein the second prompt is engineered to prompt the second LLM to list the
set of questions and the set of answers to each of the set of questions based on the
contextual data and the text data of the plurality of domain-based documents; and
determine one or more test procedures for one or more test cases for testing the at least
one feature of the test product by prompting a third LLM based on a third prompt and the
knowledge dataset,
wherein the third prompt comprises the one or more test cases for testing the at
least one feature of the test product.
8. The system (100) as claimed in claim 7, wherein the third prompt is determined as an output
generated by a fourth LLM queried based on a fourth prompt,
wherein the fourth prompt is engineered to output the one or more test procedures for
the one or more test cases based on the knowledge dataset and the one or more user
requirements, and
wherein the third prompt is engineered to prompt the third LLM to output the one or
more test procedures.
9. The system (100) as claimed in claim 8, wherein each of the one or more test procedures
comprises at least one pre-condition, a set of action steps to be performed for testing the test
product, and at least one expected outcome for the at least one pre-condition.
10. The system (100) as claimed in claim 7, wherein the contextual data is determined based
on determination of a positional relation between each of the plurality of domain-specific keywords and the text data,
wherein the positional relation is determined based on a lookup of each of the plurality
of domain-specific keywords in the text data of the plurality of domain-based documents.
| # | Name | Date |
|---|---|---|
| 1 | 202441030121-STATEMENT OF UNDERTAKING (FORM 3) [12-04-2024(online)].pdf | 2024-04-12 |
| 2 | 202441030121-REQUEST FOR EXAMINATION (FORM-18) [12-04-2024(online)].pdf | 2024-04-12 |
| 3 | 202441030121-PROOF OF RIGHT [12-04-2024(online)].pdf | 2024-04-12 |
| 4 | 202441030121-POWER OF AUTHORITY [12-04-2024(online)].pdf | 2024-04-12 |
| 5 | 202441030121-FORM 18 [12-04-2024(online)].pdf | 2024-04-12 |
| 6 | 202441030121-FORM 1 [12-04-2024(online)].pdf | 2024-04-12 |
| 7 | 202441030121-DRAWINGS [12-04-2024(online)].pdf | 2024-04-12 |
| 8 | 202441030121-DECLARATION OF INVENTORSHIP (FORM 5) [12-04-2024(online)].pdf | 2024-04-12 |
| 9 | 202441030121-COMPLETE SPECIFICATION [12-04-2024(online)].pdf | 2024-04-12 |
| 10 | 202441030121-Form 1 (Submitted on date of filing) [04-06-2024(online)].pdf | 2024-06-04 |
| 11 | 202441030121-Covering Letter [04-06-2024(online)].pdf | 2024-06-04 |
| 12 | 202441030121-FORM 3 [24-07-2024(online)].pdf | 2024-07-24 |