Abstract: METHOD AND SYSTEM FOR SANITIZATION OF SENSITIVE DATA. This disclosure relates to a method and system for sanitization of sensitive data. The method includes analyzing (301) sensitive data present within a page of an application based on a deterministic algorithm. Further, the method includes classifying (302) the sensitive data into a high risk sensitive data and a low risk sensitive data based on an ML classification algorithm. For the one or more sensitive data classified as the high risk sensitive data, the method further includes performing (303) a destructive sanitization on each of the high risk sensitive data. For the one or more sensitive data classified as the high risk sensitive data, the method further includes performing (304) a non-destructive sanitization on each of the high risk sensitive data. Figure No. 2
Technical Field
[001] This disclosure relates generally to data security and privacy, and more
particularly to a method and a system for sanitization of sensitive data.
Background
[002] In today’s digital era, the collection and processing of sensitive data within applications have become essential in various industries such as finance, healthcare, and e-commerce. Sensitive data, including personally identifiable information (PII), financial records, and confidential documents, needs to be protected from unauthorized access, data breaches, and privacy violations. With the rise of data protection regulations and increasing cybersecurity threats, the need for data sanitization techniques has become even more important.
[003] Traditional methods of data sanitization often involve manual identification and removal of sensitive information. However, this process may be time-consuming, prone to errors, and lacking in consistency. Additionally, due to the nature of applications and the abundance of data sources available today, manual methods alone are insufficient in addressing the complexities surrounding data privacy.
[004] To address these challenges, there is a need for an automated and adaptive approach to sensitive data sanitization that may be capable of accurately identifying, categorizing, and replacing sensitive data with synthetically generated data within applications, while maintaining the functionality and structure of the applications.
SUMMARY
[005] In one embodiment, a method for sanitization of sensitive data is disclosed. In one
example, the method may include analyzing one or more sensitive data present within a page of
an application based on a deterministic algorithm. Further, the method may include classifying the
one or more sensitive data into a high risk sensitive data and a low risk sensitive data based on a
Machine Learning (ML) classification algorithm. For the one or more sensitive data classified as
the high risk sensitive data, the method may further include performing a destructive sanitization
on each of the high risk sensitive data. The destructive sanitization may include generating
synthetic data corresponding to the high risk sensitive data and replacing the high risk sensitive
data with the synthetic data. For the one or more sensitive data classified as the high risk sensitive
data, the method may further include performing a non-destructive sanitization on each of the high
risk sensitive data. The non-destructive sanitization may include encrypting the high risk sensitive data through an encryption key, generating synthetic data corresponding to the
high risk sensitive data, and replacing the high risk sensitive data encrypted within the page with
the synthetic data.
[007] It is to be understood that both the foregoing general description and the following
detailed description are exemplary and explanatory only and are not restrictive of the invention, as
claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[008] The accompanying drawings, which are incorporated in and constitute a part of
this disclosure, illustrate exemplary embodiments and, together with the description, explain the
disclosed principles.
[009] FIG. 1 is a block diagram of an environment for sanitization of sensitive data, in
accordance with an exemplary embodiment of the present disclosure;
[010] FIG. 2 is a block diagram of a computing device for sanitization of sensitive data,
in accordance with an exemplary embodiment of the present disclosure;
[011] FIG. 3 is a flow diagram of an exemplary process for sanitization of sensitive
data, in accordance with an exemplary embodiment of the present disclosure;
[012] FIG. 4 is a flow diagram of an exemplary process for performing destructive
sanitization on sensitive data, in accordance with an exemplary embodiment of the present
disclosure;
[013] FIG. 5 is a flow diagram of an exemplary process for performing non-destructive
sanitization on sensitive data, in accordance with an exemplary embodiment of the present
disclosure;
[014] FIGS. 6A – 6B illustrate a functional block diagram of an exemplary process for
sanitization of sensitive data, in accordance with an exemplary embodiment of the present
disclosure;
[015] FIG. 7 is a flow diagram of an exemplary process for determining risk profiles
associated with one or more sensitive data, in accordance with an exemplary embodiment of the
present disclosure;
[016] FIG. 8 is a flow diagram of an exemplary process for performing an exact
matching analysis, in accordance with an exemplary embodiment of the present disclosure;
[017] FIG. 9 is a flow diagram of an exemplary process for performing a similarity
matching analysis, in accordance with an exemplary embodiment of the present disclosure;
[018] FIG. 10 is a flow diagram of an exemplary process for performing a probability
score analysis, in accordance with an exemplary embodiment of the present disclosure;
[019] FIGS. 11A – 11B illustrate a functional block diagram of an exemplary process
for risk classification, in accordance with an exemplary embodiment of the present disclosure;
[020] FIG. 12 is a block diagram of an exemplary process for sanitization of sensitive
data without risk classification, in accordance with an exemplary embodiment of the present
disclosure;
[021] FIG. 13 is a block diagram of an exemplary process for de-sanitization of non-destructive sensitive data, in accordance with an exemplary embodiment of the present disclosure;
[022] FIG. 14A illustrates an exemplary checkout page of an e-commerce application,
in accordance with an exemplary embodiment of the present disclosure;
[023] FIG. 14B illustrates an exemplary checkout page of an e-commerce application
with classification of high risk data, in accordance with an exemplary embodiment of the present
disclosure; and
[024] FIG. 14C is an exemplary checkout page of an e-commerce application with
sanitized data, in accordance with an exemplary embodiment of the present disclosure.
DETAILED DESCRIPTION
[025] Exemplary embodiments are described with reference to the accompanying
drawings. Wherever convenient, the same reference numbers are used throughout the drawings to
refer to the same or like parts. While examples and features of disclosed principles are described
herein, modifications, adaptations, and other implementations are possible without departing from
the spirit and scope of the disclosed embodiments. It is intended that the following detailed
description be considered as exemplary only, with the true scope and spirit being indicated by the
following claims.
[026] FIG. 1 is a diagram that illustrates an environment 100 of a system for sanitization of sensitive data, in accordance with an exemplary embodiment of the present disclosure. The environment 100 may include a computing device 101 and a user device 102. The computing
device 101 and the user device 102 may be communicatively coupled with each other via a
communication network 103. Examples of the communication network 103 may include, but are
not limited to, a wireless fidelity (Wi-Fi) network, a light fidelity (Li-Fi) network, a local area
network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a satellite
network, the Internet, a fiber optic network, a coaxial cable network, an infrared (IR) network, a
radio frequency (RF) network, and a combination thereof.
[027] The computing device 101 may be responsible for sanitization of sensitive data.
In particular, the computing device 101 may be configured to perform one of a destructive
sanitization or a non-destructive sanitization on the sensitive data (preferably on a high risk
sensitive data). It is to be noted that the selection between the destructive sanitization and the non-destructive sanitization may be based on contextual factors, risk profiles, and data sensitivity
evaluations.
[028] In the destructive sanitization approach, synthetic data corresponding to the
sensitive data may be generated. The generated synthetic data is carefully designed to mimic the
original sensitive data’s data type, format, and structure. Subsequently, the original sensitive data may be replaced with the synthetic data. This approach ensures that the application’s functionality remains
unharmed while protecting actual sensitive data from exposure.
[029] Additionally, in the non-destructive sanitization approach, the sensitive data may
be encrypted using a secure encryption key. Once encrypted, synthetic data may be generated to
replace the encrypted sensitive data. Similar to the destructive approach, this technique maintains
the structure and integrity while safeguarding the sensitive data through encryption.
[030] To initiate the data sanitization process, the communication network 103 may
facilitate the computing device 101 in accessing a specific page, which may be a web page, from
an application residing in the user device 102. Examples of the user device 102 may include a
smartphone, a tablet, a laptop, a desktop, a notebook, a mobile phone, an application server, or the
like. The accessed page within the application may serve as a starting point for data sanitization
operations.
[031] The accessed page may take various forms, such as, but not limited to, a
homepage, a product details page, a login page, or a checkout page of the application. The
application may correspond to diverse domains including but not limited to, retail, e-commerce,
online advertising, social media, telecommunications, insurance, automotive industry, financial
services, travel, transportation, logistics, real estate, public and social sectors, sports, energy,
mining, healthcare, education, or consumer packaged goods.
[032] The computing device 101 may access the page of the application to analyze one
or more sensitive data present within the page. The one or more sensitive data may include a variety
of elements, such as, but not limited to, variable names, variable parameters, variable contents, file names, and text-based file content present within the page. The analysis may be performed using a
deterministic algorithm. The deterministic algorithm may include a predetermined set of rules and
instructions designed to identify specific patterns and attributes within the data. In a more
elaborative way, the deterministic algorithm plays an essential role in identifying sensitive data elements within the page. It is responsible for recognizing patterns, variable names, parameters,
and other text-based content that may indicate the presence of sensitive information. The output
of the deterministic algorithm may serve as input for the subsequent classification step, where
machine learning models take over to determine the risk profile.
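By way of a non-limiting illustration, the following sketch shows one possible form of such a deterministic, rule-based scan in Python. The pattern names and regular expressions are hypothetical examples only and do not limit the disclosed deterministic algorithm.

    import re

    # Hypothetical rule set; the actual deterministic algorithm may use any
    # predetermined set of rules and instructions.
    CANDIDATE_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "ssn_like_variable": re.compile(r"\bSSN[_-]?\d+\b", re.IGNORECASE),
    }

    def analyze_page(page_text):
        """Return (pattern_name, matched_text, position) tuples found in the page."""
        findings = []
        for name, pattern in CANDIDATE_PATTERNS.items():
            for match in pattern.finditer(page_text):
                findings.append((name, match.group(), match.start()))
        return findings

The output of such a scan would then be handed to the ML classification step described below.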
[033] By way of an example, if the accessed page is a product details page in an e-commerce application, the one or more sensitive data may include product names, customer names, transaction details, or other pieces of information that require protection. In this scenario,
the deterministic algorithm may systematically scan and identify these sensitive data within the
page, forming the basis for further sanitization actions. The sensitive data analysis carried out by
the computing device 101 serves as the initial step in the data sanitization process.
[034] Once the one or more sensitive data is analysed within the accessed page, the
computing device 101 may further perform classification of the one or more sensitive data. The
classification may be performed based on a Machine Learning (ML) classification algorithm that
may segregate the one or more sensitive data into categories of a high risk and a low risk.
[035] For the low risk sensitive data, no action may be taken. In the scenario where the
one or more sensitive data is classified as high risk sensitive data, the sanitization process may
proceed to the next stage. Here, the computing device 101 may offer two sanitization approaches:
a destructive sanitization or a non-destructive sanitization. The choice between these two
approaches may depend on several factors, including a nature of the sensitive data and a desired
level of data protection.
[036] For instances where the high risk sensitive data is selected for destructive
sanitization, the computing device 101 may generate synthetic data. The synthetic data may
replicate the characteristics of the original sensitive data. This synthetic data may then be seamlessly substituted in place of the original high risk sensitive data within the page.
[037] Alternatively, in scenarios involving non-destructive sanitization of the high risk
sensitive data, the computing device 101 may employ encryption as a protecting measure. The
high risk sensitive data may be encrypted using an encryption key, and synthetic data may be
generated to match an encrypted format. Subsequently, the original high risk sensitive data is
replaced with the encrypted synthetic data within the page. This approach maintains data
confidentiality without altering the application's operational aspects.
[038] The result of these sanitization actions may be a page that includes either synthetic data or encrypted synthetic data, depending on the selected approach. This sanitized page
may then be rendered to a user, thereby ensuring that the high risk sensitive data remains secure
while delivering an optimal user experience.
[039] For the sake of explanation, consider a scenario of an e-commerce application that
handles customer orders. Within this application, there is a page that displays the order history of
each customer. This order history may include sensitive information such as customer names, addresses, and order details. To ensure data privacy, the application may employ the sanitization
process described earlier.
[040] Suppose the deterministic algorithm identifies a piece of sensitive data on this
page, which is a customer's full name: “John Smith”. This sensitive data may be classified as high
risk due to its potential impact if exposed. The computing device 101 needs to determine whether to perform destructive or non-destructive sanitization.
[041] In this case, let's consider the non-destructive sanitization approach. The
computing device 101 generates synthetic data that replicates the characteristics of the original
sensitive data. In this context, the synthetic data may be a fabricated name like “Jane Doe”. The
synthetic data maintains the same data type (text), format (name), and structure (first name and
last name) as the original sensitive data.
[042] So, when the user accesses the sanitized page, instead of seeing the real customer
name “John Smith”, they may see the synthetic name “Jane Doe” in its place. This ensures that the
user experience remains unaffected, as the application functions normally while safeguarding the
actual sensitive data.
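By way of a non-limiting illustration, the following sketch shows one simple way such a structure-preserving synthetic name could be generated in Python. The word lists are hypothetical placeholders; any synthetic data generation technique may be used.

    import random

    # Hypothetical word lists used only for this illustration.
    FIRST_NAMES = ["Jane", "Alex", "Maria", "Ravi"]
    LAST_NAMES = ["Doe", "Lee", "Garcia", "Patel"]

    def synthetic_name(original_name):
        """Generate a fabricated name with the same structure as the original."""
        parts = original_name.split()
        first = random.choice(FIRST_NAMES)
        last = random.choice(LAST_NAMES)
        # Keep the first-name/last-name structure when the original has it.
        return f"{first} {last}" if len(parts) >= 2 else first

    # synthetic_name("John Smith") may return, for example, "Jane Doe".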
[043] FIG. 2 is a block diagram of a computing device 101 for sanitization of sensitive
data, in accordance with an exemplary embodiment of the present disclosure. FIG. 2 is explained
in conjunction with elements from FIG. 1. The computing device 101 may include a processing
circuitry 201 and a memory 202 communicatively coupled to the processing circuitry 201 via a
communication bus 203. The memory 202 may be a non-volatile memory or a volatile memory.
Examples of non-volatile memory may include, but are not limited to, a flash memory, a Read
Only Memory (ROM), a Programmable ROM (PROM), Erasable PROM (EPROM), and
Electrically EPROM (EEPROM) memory. Examples of volatile memory may include, but are not
limited to, Dynamic Random Access Memory (DRAM), and Static Random-Access Memory
(SRAM).
[044] The memory 202 may store processor instructions. The processor instructions,
when executed by the processing circuitry 201, may cause the processing circuitry 201 to
implement one or more embodiments of the present disclosure such as, but not limited to, analyze
one or more sensitive data, classify the one or more sensitive data into a high risk sensitive data
and a low risk sensitive data, and perform a destructive sanitization or a non-destructive
sanitization on each of the high risk sensitive data. The memory 202 may also store various data
(e.g., one or more sensitive data within the page, ML model parameters, sanitization data, synthetic
data, data associated with predefined patterns, contextual information for ML models, etc.) that
may be captured, processed, and/or required by the processing circuitry 201 of the computing
device 101 to provide sanitization of the sensitive data. The memory 202 may include various
modules, i.e., an analyzation module 204, a classification module 205, a destructive sanitization module 206, a non-destructive sanitization module 207, and a database 208, that enable the
computing device 101 to perform sanitization of the sensitive data.
[045] In order to perform sanitization of the sensitive data, initially, the analyzation
module 204 may analyze contents of a page within an application based on a deterministic
algorithm. More specifically, by employing the deterministic algorithm, the analyzation module
204 may identify and extract one or more sensitive data, such as variable names, parameters,
contents, file names, and text-based file content from the page that may indicate the presence of
sensitive information.
[046] The classification module 205 may be configured to classify a risk associated
with each of the one or more sensitive data. The classification module 205 may employ a ML
classification algorithm to determine a risk profile of the one or more sensitive data. In particular,
the ML classification algorithm may include a contextual-based ML model and a pattern matching
ML model that may consider various factors, such as exact matching, similarity matching, and
probability scores to categorize the one or more sensitive data into a low risk sensitive data and a
high risk sensitive data.
[047] To further elaborate, the contextual based ML model may be designed to be
trained on a dataset of text-based predefined data that is directly associated with the sensitive data
being analyzed. This dataset may include contextual information that may help the ML model to
understand and recognize patterns within the sensitive data. By learning from this dataset, the
contextual-based ML model gains the ability to identify and classify sensitive data based on its
contextual usage and relevance.
[048] Besides the contextual based ML model, the pattern matching ML model also
plays an important role in the classification process. This model may be trained on a dataset of
predefined patterns that are linked to the sensitive data. These predefined patterns may capture
common structures, formats, or characteristics of sensitive data that the model may recognize. By learning from this dataset, the pattern matching ML model becomes capable of identifying sensitive data through pattern recognition techniques.
[049] Once the pattern matching ML model is trained, it may perform an exact matching
analysis and a similarity matching analysis. These analyses are part of the classification process,
where the ML models assess the similarity between the sensitive data and the predefined patterns.
[050] In the exact matching analysis, the one or more sensitive data may be compared
with the dataset of predefined patterns to determine a similarity score for each of the one or more
sensitive data. The goal is to determine if there is an exact match between the one or more sensitive
data and the predefined pattern. Upon comparison, if the similarity score is determined to be 1, it
means that the sensitive data exactly matches a predefined pattern. Consequently, the one or more
sensitive data may be tagged as “exactly matched sensitive data”.
[051] In similarity matching analysis, a comparison may be made between the one or
more sensitive data and the dataset of predefined patterns to determine a similarity score for each
of the one or more sensitive data. However, this time the focus is on assessing a degree of
similarity. If the similarity score is less than 1, it indicates that there is some degree of similarity
between the one or more sensitive data and a predefined pattern. As a result, the sensitive data may
be tagged as “similar matched sensitive data”.
[052] After performing the exact matching analysis and the similarity matching
analysis, a probability score analysis may be performed to determine a risk associated with each
of the one or more sensitive data. In the probability score analysis, a probability score may be determined for each of the one or more sensitive data. This probability score represents a likelihood of the
sensitive data being high risk or low risk. If the probability score is determined to be greater than
a predefined threshold, the one or more sensitive data is tagged as “high risk sensitive data”.
Conversely, if the probability score is determined to be less than the predefined threshold, the one
or more sensitive data may be tagged as “low risk sensitive data”.
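By way of a non-limiting illustration, the sketch below shows one possible realization of this three-stage tagging in Python. The pattern list, the similarity measure (here, a simple sequence ratio), the probability model, and the threshold value are all assumed for illustration and are not prescribed by the disclosure.

    from difflib import SequenceMatcher

    PREDEFINED_PATTERNS = ["ssn", "credit_card_number", "customer_name"]  # hypothetical
    PREDEFINED_THRESHOLD = 0.7  # assumed threshold value

    def similarity_score(value, pattern):
        # One possible similarity measure; an exact match yields 1.0.
        return SequenceMatcher(None, value.lower(), pattern.lower()).ratio()

    def classify(value, probability_model):
        best_score = max(similarity_score(value, p) for p in PREDEFINED_PATTERNS)
        if best_score == 1.0:
            match_tag = "exactly matched sensitive data"
        else:
            match_tag = "similar matched sensitive data"
        # probability_model is any callable returning the likelihood of high risk.
        probability = probability_model(value)
        risk_tag = ("high risk sensitive data" if probability > PREDEFINED_THRESHOLD
                    else "low risk sensitive data")
        return match_tag, risk_tag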
[053] The destructive sanitization module 206 comes into play when high risk sensitive
data is identified. In the case of destructive sanitization, it generates synthetic data that replicates
the characteristics of the original sensitive data. This synthetic data is then used to replace the
original high risk sensitive data within the page.
[054] Similar to the destructive sanitization module 206, the non-destructive
sanitization module 207 may operate when high risk sensitive data is identified. However, instead of replacing the data, the non-destructive sanitization module 207 may first encrypt the high risk
sensitive data using an encryption key and generate corresponding synthetic data. The encrypted
sensitive data may be then replaced with the synthetic data within the page.
[055] The database 208 may store a variety of data, including, but not limited to,
sanitization data, predefined patterns, ML model parameters, contextual information for ML
model, and other relevant information. This stored information may help the other modules in
making informed decisions during the sanitization process.
[056] It should be noted that all such aforementioned modules 204 – 207 may be represented as a single module or a combination of different modules. Further, as will be appreciated by those skilled in the art, each of the modules 204 – 207 may reside, in whole or in parts, on one device or multiple devices in communication with each other. In some embodiments, each of the modules 204 – 207 may be implemented as a dedicated hardware circuit comprising custom
application-specific integrated circuit (ASIC) or gate arrays, off-the-shelf semiconductors such as
logic chips, transistors, or other discrete components. Each of the modules 204 – 207 may also be
implemented in a programmable hardware device such as a field programmable gate array
(FPGA), programmable array logic, programmable logic device, and so forth. Alternatively, each of the modules 204 – 207 may be implemented in software for execution by various types of
processors (e.g., the processing circuitry 201). An identified module of executable code may, for
instance, include one or more physical or logical blocks of computer instructions, which may, for
instance, be organized as an object, procedure, function, or other construct. Nevertheless, the
executables of an identified module or component need not be physically located together, but may
include disparate instructions stored in different locations which, when joined logically together,
include the module and achieve the stated purpose of the module. Indeed, a module of executable
code could be a single instruction, or many instructions, and may even be distributed over several
different code segments, among different applications, and across several memory devices.
[057] As will be appreciated by one skilled in the art, a variety of processes may be
employed for sanitization of sensitive data. For example, the exemplary computing device 101
may provide for sanitization of the sensitive data by the processes discussed herein. In particular,
as will be appreciated by those of ordinary skill in the art, control logic and/or automated routines
for performing the techniques and steps described herein may be implemented by the computing
device 101 either by hardware, software, or combinations of hardware and software. For example, suitable code may be accessed and executed by the one or more processors on the computing
device 101 to perform some or all of the techniques described herein. Similarly, application
specific integrated circuits (ASICs) configured to perform some, or all of the processes described
herein may be included in the one or more processors on the computing device 101.
[058] FIG. 3 is a flow diagram of an exemplary process 300 for sanitization of sensitive
data depicted via a flowchart, in accordance with an exemplary embodiment of the present
disclosure. FIG. 3 is explained in conjunction with elements from FIGS. 1 and 2. In an
embodiment, the process 300 may be implemented by the computing device 101.
[059] The process 300 may include analyzing one or more sensitive data present within
a page of an application based on a deterministic algorithm, at step 301. The sensitive data may
include various types of information such as variable names, variable parameters, variable
contents, file names, and text-based file content present on the page. The primary purpose of this
analysis is to identify sensitive data that requires protection.
[060] After the initial analysis, the process 300 may further include classifying the one
or more sensitive data into a high risk sensitive data and a low risk sensitive data based on a
Machine Learning (ML) classification algorithm, at step 302.
[061] In an embodiment, the ML classification algorithm may utilize a contextual based
ML model and a pattern matching ML model for classifying the one or more sensitive data. The
contextual based ML model may be trained on a dataset of text-based predefined data associated
with the one or more sensitive data. It may use the contextual information to make classification
decisions.
[062] Additionally, the pattern matching ML model may be trained on a dataset of
predefined patterns linked to the sensitive data. It may focus on identifying specific patterns in the
data.
[063] For the one or more sensitive data classified as the high risk sensitive data, the
process 300 may further include performing a destructive sanitization on each of the high risk
sensitive data, at step 303. In this step, synthetic data that replicates the characteristics of the
original sensitive data may be generated. This ensures that the high risk sensitive data is eliminated
from the page while maintaining its structure and format. A method of performing destructive
sanitization is further explained in conjunction with FIG. 4.
[064] For the one or more sensitive data classified as the high risk sensitive data, the
process 300 may further include performing a non-destructive sanitization on each of the high risk
sensitive data, at step 304. In this case, the high risk sensitive data is encrypted using an encryption
key. This method allows for data protection without altering the original format. A method of
performing non-destructive sanitization is further explained in conjunction with FIG. 5.
[065] FIG. 4 is a flow diagram of an exemplary process 400 for performing destructive
sanitization on sensitive data depicted via a flowchart, in accordance with an exemplary
embodiment of the present disclosure. FIG. 4 is explained in conjunction with elements from FIGS.
1, 2, and 3. In an embodiment, the process 400 may be implemented by the computing device 101.
[066] As explained earlier with reference to FIG. 3, for the one or more sensitive data
classified as the high risk sensitive data, a destructive sanitization may be performed on each of
the high risk sensitive data, at step 303.
[067] Therefore, to perform destructive sanitization, the process 400 may include
generating synthetic data corresponding to the high risk sensitive data, at step 401. The synthetic
data is artificially created data that replicates the characteristics, format, and structure of the
original sensitive data. This synthetic data is designed to closely resemble the high risk sensitive
data it is replacing.
[068] Further, the process 400 may include replacing the high risk sensitive data with
the synthetic data, at step 402. This replacement may occur within the context where the high risk
sensitive data was initially found. The purpose of this step is to ensure that the sensitive data is
effectively removed or masked, thereby reducing the risk associated with its exposure.
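By way of a non-limiting illustration, the destructive path of steps 401 – 402 may be sketched in Python as follows. The simple generator mapping is a hypothetical stand-in for the synthetic data generation described above; note that the original values are not retained anywhere.

    import random

    def generate_synthetic(original_value, data_kind):
        """Produce a replacement that mimics the type and format of the original."""
        if data_kind == "name":
            return random.choice(["Jane Doe", "Alex Lee"])
        if data_kind == "card":
            return "**** **** **** " + str(random.randint(1000, 9999))
        return "REDACTED"

    def destructive_sanitize(page_text, findings):
        """findings: iterable of (original_value, data_kind) tuples for high risk data."""
        for original, kind in findings:
            page_text = page_text.replace(original, generate_synthetic(original, kind))
        return page_text  # the original high risk values are discarded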
[069] FIG. 5 is a flow diagram of an exemplary process 500 for performing non-destructive sanitization on sensitive data depicted via a flowchart, in accordance with an exemplary
embodiment of the present disclosure. FIG. 5 is explained in conjunction with elements from FIGS.
1, 2, 3, and 4. In an embodiment, the process 500 may be implemented by the computing device
101.
[070] As explained earlier with reference to FIG. 3, for the one or more sensitive data
classified as the high risk sensitive data, a non-destructive sanitization may be performed on each
of the high risk sensitive data, at step 304. Therefore, to perform non-destructive sanitization, the
process 500 may include encrypting the high risk sensitive data through an encryption key, at step
501. This encryption is a protective measure to ensure that the data remains confidential even during the sanitization process. It involves encoding the high risk sensitive data in a way that may
only be decrypted by someone with the appropriate encryption key.
[071] Further, the process 500 may include generating synthetic data corresponding to
the high risk sensitive data, at step 502. This synthetic data may be designed to closely mimic the
original sensitive data in terms of data type, format, and structure. It may be created to serve as a
substitute for the high risk sensitive data.
[072] Further, the process 500 may include replacing the high risk sensitive data
encrypted within the page with the synthetic data, at step 503. Importantly, this replacement occurs
within the page where the original sensitive data is located. This ensures that the high risk sensitive
data is effectively concealed, and the user interacts with synthetic data rather than the original
sensitive data or content.
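By way of a non-limiting illustration, the non-destructive path of steps 501 – 503 may be sketched as follows. The third-party "cryptography" package (Fernet) is used here only as one assumed choice of symmetric cipher, and the returned mapping is one possible way of remembering which encrypted value each synthetic value stands for; the disclosure does not mandate either choice.

    from cryptography.fernet import Fernet

    def non_destructive_sanitize(page_text, findings, generate_synthetic):
        """findings: iterable of (original_value, data_kind) tuples for high risk data."""
        key = Fernet.generate_key()        # encryption key, to be stored securely
        cipher = Fernet(key)
        target_map = {}                    # synthetic value -> encrypted original value
        for original, kind in findings:
            synthetic = generate_synthetic(original, kind)
            target_map[synthetic] = cipher.encrypt(original.encode())
            page_text = page_text.replace(original, synthetic)
        return page_text, key, target_map  # key and map enable later de-sanitization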
[073] FIGS. 6A – 6B illustrate a functional block diagram of an exemplary process 600
for sanitization of sensitive data, in accordance with an exemplary embodiment of the present
disclosure. FIG. 6 is explained in conjunction with elements from FIGS. 1, 2, 3, 4, and 5. The
process begins with an initialization step at block 601. This is the starting point where the
sanitization process is initiated.
[074] Next, at block 602, the code or component undergoes processing through a
deterministic algorithm. At block 603, the deterministic algorithm identifies sensitive parameters or
data within the code or component, which may include various elements like variable names,
variable parameters, variable contents, file names, and text-based file content. These elements may
represent the information that needs to be evaluated for risk.
[075] Once the sensitive parameters or data have been identified, they are subjected to
a risk classification algorithm at block 604. The risk classification algorithm is responsible for
categorizing the identified sensitive parameters or data into different risk levels. Specifically, it
classifies them as high risk, low risk, or as unidentified data. The high risk sensitive parameters or
data may follow specific patterns, such as “Infosys.com”, “Infosys_hero_image”, etc., while the low risk sensitive parameters or data may have patterns like “var17_a1”, etc.
[076] For data classified as low risk, no further action is taken in block 605. This
indicates that low risk data is left untouched and retained in its original form. For the unidentified
data, human intervention may be required for the classification. This is explained in greater detail
in conjunction with FIGS. 11A – 11B.
[077] When high risk sensitive parameters or data are identified, the process presents
two options: destructive sanitization and non-destructive sanitization. In the destructive
sanitization process, synthetic data is initially generated to replicate the characteristics of original
high risk sensitive data, at block 606. Subsequently, this synthetic data is used to replace the
original high risk sensitive data within the code or component, at block 607.
[078] In the non-destructive sanitization process, several steps are involved. Initially,
synthetic data is generated, at block 608, and an encryption key is created, at block 609. The
encryption key 610 may be stored in an encryption key database 611 for later references.
[079] Further, each high risk sensitive data, for example, target data is individually
encapsulated and encrypted using the encryption key, at block 612. Simultaneously, an in-line
target map is generated, at block 613. This map is a record or reference that keeps track of the high
risk sensitive data that has been encrypted and replaced with synthetic data within the code or
component.
[080] In particular, the target map is a sort of roadmap or guide that notes which portions
of the code or component have undergone encryption and substitution. It may help to maintain a
clear link between the original high risk sensitive data and the corresponding synthetic data. This
map may be created to ensure that, at a later stage, the system may efficiently and accurately
reverse the process if necessary or maintain a record of the changes made.
[081] At block 614, the encrypted data is replaced with the synthetic data, while line-level metadata obtained from block 613 is inserted. Line-level metadata can include additional information or markers associated with specific lines or sections of the code or component. The
metadata may serve various purposes, such as providing context or additional details about the
encrypted and replaced data. It may help in tracking and understanding what changes were made
to the code or component during the sanitization process.
[082] Particularly, the insertion of line-level metadata may ensure that the changes made
to the code or component are well documented and that there is a clear reference to the original
data that are replaced. This documentation is valuable for both reviewing purposes and for
maintaining transparency in the code sanitization process.
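By way of a non-limiting illustration, one possible shape for an entry of the in-line target map with its line-level metadata (blocks 612 – 614) is sketched below in Python. The field names are hypothetical and are used only to show the kind of information such a record may carry.

    # One hypothetical target map entry documenting a single substitution.
    target_map_entry = {
        "line": 42,                           # line of the code or component that was changed
        "original_ciphertext": b"gAAAA...",   # encrypted high risk sensitive data
        "synthetic_value": "Jane Doe",        # synthetic data inserted in its place
        "encryption_key_id": "key-2023-06",   # reference into the encryption key database
    }

    # A list of such entries documents every substitution made during sanitization
    # and allows the process to be reversed accurately at de-sanitization time.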
[083] Finally, the block 615 signifies the completion of the code or component
sanitization process. At this stage, the code or component has undergone the necessary sanitization
measures, and it is now ready for use, and the process stops at block 616.
[084] FIG. 7 is a flow diagram of an exemplary process 700 for determining risk
profiles associated with one or more sensitive data depicted via a flowchart, in accordance with an
exemplary embodiment of the present disclosure. FIG. 7 is explained in conjunction with elements
from FIGS. 1, 2, 3, 4, 5, and 6. In an embodiment, the process 700 may be implemented by the
computing device 101. In order to determine risk profiles associated with the one or more sensitive
data, the process 700 may perform exact matching, similarity matching, and probability score
analyses.
[085] At step 701, the process 700 may include performing an exact matching analysis
to determine a similarity for each of the one or more sensitive data present within the page. In a
more elaborative way, the exact matching analysis may be used to precisely identify sensitive data
that matches predefined patterns or criteria. When data matches these patterns exactly, it is
considered high risk because it is a direct match to known sensitive information.
[086] For instance, if a variable name exactly matches a pattern associated with social
security numbers (e.g., “SSN_123456789”), it is crucial to identify it as a high risk sensitive data
immediately.
[087] At step 702, the process 700 may further include performing a similarity matching
analysis to determine a similarity for each of the one or more sensitive data present within the
page. The sensitive data may not always match predefined patterns exactly. There may be slight
variations, or deviations from the expected format. Therefore, to detect such variations, the
similarity matching analysis may be used.
[088] By way of an example, consider a variable that includes a social security number,
but is labeled as “SocSecNum” instead of “SSN.” The similarity matching may detect this as a
high risk data because it closely resembles the expected format.
[089] At step 703, the process 700 may further include performing a probability score
analysis to determine a risk associated with each of the one or more sensitive data. The probability
score analysis plays an important role in classifying the one or more sensitive data into a high risk
or low risk. This is further explained in conjunction with FIG. 10.
[090] FIG. 8 is a flow diagram of an exemplary process 800 for performing an exact
matching analysis depicted via a flowchart, in accordance with an exemplary embodiment of the
present disclosure. FIG. 8 is explained in conjunction with elements from FIGS. 1, 2, 3, 4, 5, 6,
and 7. In an embodiment, the process 800 may be implemented by the computing device 101.
[091] In order to perform exact matching analysis, initially, at step 801, the process 800
may include comparing the one or more sensitive data with the dataset of predefined patterns. This
dataset, also known as the “pattern repository,” may act as a reference including specific patterns
or templates against which the sensitive data is compared. This initial comparison may serve as
the foundation for further analysis.
[092] At step 802, the process 800 may further include determining a similarity score
for each of the one or more sensitive data based on the comparison. This similarity score may be
determined based on the extent of resemblance between the sensitive data and the predefined
patterns in the dataset. It may quantify how closely the sensitive data aligns with these patterns.
[093] Finally, at step 803, the process 800 may evaluate the similarity score determined
in the previous step. If the similarity score is determined to be exactly 1, it may signify that the
sensitive data is an exact match with one of the predefined patterns in the dataset. In this case, the
process 800 may tag the one or more sensitive data points as “exactly matched sensitive data”.
[094] FIG. 9 is a flow diagram of an exemplary process 900 for performing a similarity
matching analysis depicted via a flowchart, in accordance with an exemplary embodiment of the
present disclosure. FIG. 9 is explained in conjunction with elements from FIGS. 1, 2, 3, 4, 5, 6, 7,
and 8. In an embodiment, the process 900 may be implemented by the computing device 101.
[095] In order to perform similarity matching analysis, initially, at step 901, the process
900 may include comparing the one or more sensitive data with the dataset of predefined patterns.
At step 902, the process 900 may further include determining a similarity score for each of the one
or more sensitive data based on the comparison. This similarity score may quantify how closely the
sensitive data aligns with the predefined patterns stored in the dataset.
[096] Finally, at step 903, the process 900 may evaluate the determined similarity score.
In particular, if the similarity score falls below the value of 1 (indicating that the sensitive data has
some degree of similarity with the predefined patterns), the process 900 may proceed to tag the
one or more sensitive data as “similar matched sensitive data”.
[097] FIG. 10 is a flow diagram of an exemplary process 1000 for performing a
probability score analysis depicted via a flowchart, in accordance with an exemplary embodiment
of the present disclosure. FIG. 10 is explained in conjunction with elements from FIGS. 1, 2, 3, 4,
5, 6, 7, 8, and 9. In an embodiment, the process 1000 may be implemented by the computing device 101.
[098] In order to perform probability score analysis, initially, at step 1001, the process
1000 may include determining a probability score for each of the one or more sensitive data. The
probability score may be determined by a ML model. The ML model may be trained on a dataset
of known sensitive data and its corresponding risk levels. The ML model learns to associate certain
features of the sensitive data with its risk level.
[099] When a new piece of data is encountered, the ML model may analyze its features and determine a probability score for that data. The probability score is a measure of how likely
the data is to be sensitive.
[0100] The probability score may serve as an essential basis for subsequent decision-making steps in the sanitization process. Specifically, it determines whether a particular data point
may be categorized as high risk or low risk, which in turn informs the sanitization approach to be
applied to that data.
[0101] At step 1002, the process 1000 may further include tagging the one or more
sensitive data as the high risk sensitive data if the probability score is determined to be greater than
a predefined threshold. This means the process 1000 has determined that this data is likely to be
sensitive and may be treated with caution.
[0102] At step 1003, the process 1000 may further include tagging the one or more
sensitive data as the low risk sensitive data if the probability score is determined to be less than
the predefined threshold. This indicates that the data is less likely to be sensitive or poses a lower
risk to security or privacy.
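By way of a non-limiting illustration, the sketch below shows one possible probability score model: a scikit-learn logistic regression over simple character features. The training values, the feature choice, and the threshold are all hypothetical; any trained ML model producing a probability of sensitivity may be used.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    PREDEFINED_THRESHOLD = 0.7  # assumed threshold value

    # Hypothetical labelled examples: 1 = high risk, 0 = low risk.
    train_values = ["SSN_123456789", "customer_card_no", "var17_a1", "loop_index"]
    train_labels = [1, 1, 0, 0]

    model = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    model.fit(train_values, train_labels)

    def risk_tag(value):
        probability = model.predict_proba([value])[0][1]  # likelihood of high risk
        return ("high risk sensitive data" if probability > PREDEFINED_THRESHOLD
                else "low risk sensitive data")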
[0103] FIGS. 11A – 11B illustrate a functional block diagram of an exemplary process
1100 for risk classification, in accordance with an exemplary embodiment of the present
disclosure. FIG. 11 is explained in conjunction with elements from FIGS. 1, 2, 3, 4, 5, 6, 7, 8, 9,
and 10. The risk classification process is initiated at block 1101.
[0104] The process 1100 of risk classification may be divided into two branches: branch
1 and branch 2. When the process 1100 follows branch 1, the sensitive parameters or data may be
processed in isolation, at step 1102. This means that the sensitive data is processed independently
without considering the context in which it appears.
[0105] For the sensitive parameters or data to be processed in isolation, a trained ML
model (such as, a pattern matching ML model) performs a word or pattern matching analysis, at
step 1104. The ML model has learned patterns from a dataset of predefined patterns stored in an ML database (ML DB 1105) that are associated with the one or more sensitive data. The pattern
matching analysis may include the exact matching analysis and the similarity matching analysis. ML
model prompts of block 1106 may be used to activate the trained ML model for performing the
pattern matching analysis. These prompts are designed to instruct the ML model on what kind of
analysis to conduct on the sensitive data. In addition to the pattern matching, a probability score
analysis is performed as explained earlier. It assigns a probability score to each of the one or more
sensitive data to determine risk profile. The process of performing the exact matching analysis,
similarity matching analysis, and the probability score analysis has already been explained in conjunction with
FIGS. 8, 9, and 10.
[0106] When the process 1100 follows branch 2, the sensitive parameters or data may be processed with consideration of its context, at step 1103. For the sensitive parameters or data to be
processed in context, a contextual analysis may be performed through a contextual based trained
ML model, at block 1107. In the contextual analysis, the contextual-based ML model classifies
the one or more sensitive data according to its contextual significance. This classification helps in
determining the risk profile of the sensitive data. For instance, if the sensitive data is contextually
linked to a critical function within the application, it may be classified as high risk.
[0107] To train the contextual-based ML model, a dataset of text-based predefined data
stored in the ML database (ML DB 1108) is used. This dataset may include information related to
the context in which sensitive data appears. To initiate the contextual analysis, ML model prompts
(as mentioned in block 1106) are used. These prompts provide instructions to the contextual-based
ML model to consider the context in which sensitive data is found within the application.
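By way of a non-limiting illustration, one possible form of such an ML model prompt is sketched below. The template wording is hypothetical; the prompt simply packages the value together with its surrounding context before it is passed to the contextual-based trained ML model.

    # Hypothetical prompt template for the contextual-based ML model.
    CONTEXT_PROMPT_TEMPLATE = (
        "You are a risk classifier. Given the value '{value}' and the context in "
        "which it appears:\n{context}\n"
        "Answer with exactly one of: high risk, low risk, undefined."
    )

    def build_context_prompt(value, context_lines):
        """Combine the sensitive value with its surrounding page or code context."""
        return CONTEXT_PROMPT_TEMPLATE.format(value=value,
                                              context="\n".join(context_lines))

    # An "undefined" answer would route the data to the human decision step
    # described below, and the human's decision can be fed back into the prompts.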
[0108] The results from both branch 1 and branch 2 are combined and processed by the
ML classification algorithm, at block 1110. The classification algorithm may determine the risk
profile associated with each of the one or more sensitive data, at block 1111. The risk profile may
be categorized as high risk, low risk, or undefined risk based on the analysis performed. For
sensitive data with a clear risk profile (such as high risk or low risk), the process 1100 ends at
block 1112.
[0109] When the risk is undefined, indicating uncertainty, the process initiates a human
decision step, at block 1113. In this case, human intervention is needed to classify the one or more
sensitive data, at block 1114. A human decision is made to determine whether the data is high risk,
low risk, or something else. The decision made by humans is fed back into the ML prompts of
block 1106 and block 1109 using a content prompt. This feedback helps improve the ML models
for future analyses.
[0110] FIG. 12 is a block diagram of an exemplary process 1200 for sanitization of
sensitive data without risk classification, in accordance with an exemplary embodiment of the
present disclosure. FIG. 12 is explained in conjunction with elements from FIGS. 1, 2, 3, 4, 5, 6,
7, 8, 9, 10, and 11. The process begins with an initialization step at block 1201. This is the starting
point where the sanitization process is initiated.
[0111] Further, at block 1202, the code or component undergoes processing through a
deterministic algorithm. At block 1203, the deterministic algorithm identifies sensitive parameters
or data within the code or component, which may include various elements like variable names,
variable parameters, variable contents, file names, and text-based file content. These elements may
represent the information that needs to be evaluated for possible risk.
[0112] Once the sensitive parameters or data have been identified, a generative
sanitization process may be initiated, at block 1204. The sanitization process may present two
options: a destructive sanitization and a non-destructive sanitization. It should be noted that
unlike the process 600 of FIG. 6, the present process 1200 may not consider the risk profile before performing the sanitization process. In particular, the destructive sanitization and the non-destructive sanitization may be performed without classifying the sensitive data into a high risk or
low risk.
[0113] In some situations, the process may not involve classifying sensitive data into high
risk or low risk categories before applying sanitization. Instead, it may directly proceed with either
destructive or non-destructive sanitization without a prior risk assessment. This means that the data
is sanitized without first determining whether it is high risk or low risk sensitive data.
[0114] In the destructive sanitization option, synthetic data is initially generated to
replicate the characteristics of original sensitive data, at block 1205. Subsequently, this synthetic
data is used to replace the original sensitive data within the code or component, at block 1206.
[0115] In the non-destructive sanitization option, initially, synthetic data is generated, at
block 1207, and an encryption key is created, at block 1208. The encryption key (at block 1209)
may be stored in an encryption key database (at block 1210) for further references.
[0116] Further, each of the sensitive data, for example, target data is individually
encapsulated and encrypted using the encryption key, at block 1211. Simultaneously, an in-line
target map is generated, at block 1212. This map is a record or reference that keeps track of the
sensitive data that has been encrypted and replaced with synthetic data within the code or
component.
[0117] In particular, the target map is a sort of roadmap or guide that notes which portions
of the code or component have undergone encryption and substitution. It may help to maintain a
clear link between the original sensitive data and the corresponding synthetic data. This map may
be created to ensure that, at a later stage, the system may efficiently and accurately reverse the
process if necessary or maintain a record of the changes made.
[0118] At block 1213, the encrypted data is replaced with the synthetic data, while line-level metadata obtained from block 1212 is inserted. Line-level metadata may include additional information or markers associated with specific lines or sections of the code or component. The
metadata may serve various purposes, such as providing context or additional details about the
encrypted and replaced data. It may help in tracking and understanding what changes were made
to the code or component during the sanitization process.
[0119] Particularly, the insertion of line-level metadata may ensure that the changes made
to the code or component are well documented and that there is a clear reference to the original
sensitive data that are replaced. This documentation is valuable for both reviewing purposes and
for maintaining transparency in the code sanitization process.
[0120] Finally, the block 1214 signifies the completion of the code or component
sanitization process. At this stage, the code or component has undergone the necessary sanitization
measures, and it is now ready for use, and the process stops at block 1215.
[0121] FIG. 13 is a block diagram of an exemplary process 1300 for de-sanitization of
non-destructive sensitive data, in accordance with an exemplary embodiment of the present
disclosure. FIG. 13 is explained in conjunction with elements from FIGS. 1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, and 12. The de-sanitization process is initiated at block 1301. The de-sanitization process
begins at this point, where the process 1300 is ready to reverse the non-destructive sanitization that
was previously applied to sensitive data.
[0122] To start the de-sanitization of non-destructive sensitive data, the process 1300
retrieves the encrypted key, at block 1302. This key is essential for decrypting the sensitive data
that was previously secured. The encrypted key 1303 is obtained from an encryption key database
(encryption key DB 1304). Simultaneously, the process 1300 combines the retrieved encrypted
key of block 1302 with a target map of block 1305. The target map essentially contains in-line
metadata associated with the sensitive data to be de-sanitized. This combination is an essential step
for correctly de-sanitizing the data.
[0123] At block 1307, the combined data is pre-processed. The pre-processing may include preparing the data for the de-sanitization process. Further, the process 1300 identifies target data by injecting the line-level metadata retrieved from the target map of block 1305. This metadata may help to determine which parts of the data were originally sensitive and
needed to be restored.
[0124] Once the target data is identified, it is individually de-encapsulated and decrypted
using the encrypted key, at block 1308. This step essentially reverses the encryption applied during
non-destructive sanitization, returning the data to its original form.
[0125] After de-encapsulation and decryption, the synthetic data that was temporarily in
place is now replaced with the sensitive data in its original, decrypted format, at block 1310. With
the sensitive data successfully restored, the de-sanitization process is considered complete, at block
1311.
[0126] Finally, the de-sanitized code or component is now ready for use, at block 1312.
It has been effectively restored to its original state and may be utilized in the application or any
other system.
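By way of a non-limiting illustration, the de-sanitization of blocks 1302 – 1310 may be sketched as follows, continuing the earlier non-destructive sketch. Fernet is again used only as an assumed cipher, and the target map is the synthetic-to-ciphertext mapping produced during sanitization.

    from cryptography.fernet import Fernet

    def de_sanitize(page_text, key, target_map):
        """Restore the original sensitive data using the stored key and target map."""
        cipher = Fernet(key)
        for synthetic, ciphertext in target_map.items():
            original = cipher.decrypt(ciphertext).decode()
            page_text = page_text.replace(synthetic, original)
        return page_text  # sensitive data restored to its original, decrypted form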
[0127] FIG. 14A is an exemplary checkout page 1400A of an e-commerce application,
in accordance with an exemplary embodiment of the present disclosure. FIG. 14A is explained in
conjunction with elements from FIGS. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, and 13. The checkout
page 1400A is where the user enters their payment information and shipping information to
complete their purchase. The page 1400A may include elements, for example, but not limited
to, customer name, customer address, order details, payment information, shipping information,
and subtotal cost of the items being purchased.
[0128] The data sanitization process may identify the sensitive data elements in the
checkout page 1400A, such as a credit card number and expiration date, and then take steps to
protect those elements. The specific steps that are taken may depend on a type of sensitive data
and a risk level of the data.
[0129] For example, if the sensitive data is the credit card number, the data sanitization
process may encrypt the credit card number or replace it with a synthetic number. If the sensitive
data is a password, then the data sanitization process may hash the password or replace it with a
random string of characters.
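By way of a non-limiting illustration, two such element-specific treatments are sketched below in Python: hashing a password with a random salt, and substituting a synthetic card number in the masked format shown on the checkout page. The salt handling is deliberately simplified and is assumed for illustration only.

    import hashlib
    import random
    import secrets

    def sanitize_password(password):
        """Replace a password with a salted hash rather than the clear text."""
        salt = secrets.token_hex(8)
        digest = hashlib.sha256((salt + password).encode()).hexdigest()
        return salt + ":" + digest

    def sanitize_card_number(card_number):
        """Substitute a synthetic card number that keeps the masked page format."""
        return "**** **** **** " + str(random.randint(1000, 9999))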
[0130] The data sanitization process may be applied to all of the elements as shown in
the checkout page 1400A. This may ensure that all of the sensitive data in the checkout web page
1400A is protected. This is further explained in conjunction with FIG. 14B.
[0131] FIG. 14B is an exemplary checkout page 1400B of an e-commerce application
with classification of high risk data, in accordance with an exemplary embodiment of the present
disclosure. FIG. 14B is explained in conjunction with elements from FIGS. 1, 2, 3, 4, 5, 6, 7, 8, 9,
10, 11, 12, 13, and 14A. In this scenario of checkout page 1400B, the classification algorithm is
actively applied to identify high-risk sensitive data elements within the page 1400B.
[0132] For example, the classification algorithm may tag certain elements within the page
1400B as high-risk sensitive data. As illustrated in present FIG. 14B, the order date “29 JUNE
2023”, and the customer’s name “Seb Weston-Lewis” are tagged as the high risk sensitive data.
Similarly, other elements like the address of the customer “1000 Collins Ave”, contact number
“7981634015”, credit card number “**** **** **** 9925”, and the like are also tagged as the high risk sensitive data.
[0133] Therefore, the sanitization process may be employed on these high risk sensitive data, whereby the high risk sensitive data is replaced by synthetic data. This ensures that the most
sensitive information on the checkout page is thoroughly protected and secure.
[0134] FIG. 14C is an exemplary checkout page 1400C of an e-commerce application
with sanitized data, in accordance with an exemplary embodiment of the present disclosure. FIG.
14C is explained in conjunction with elements from FIGS. 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
14A, and 14B. At this stage, the data sanitization process is complete, and the rendered checkout
page 1400C is now presented to the user. In this final stage, the original data within the page has
been replaced with corresponding synthetic data. The generated synthetic data resembles the one or more sensitive data of the page in terms of data type, format, and structure.
[0135] For example, as illustrated in the present FIG. 14C, the original order date “29 JUNE
2023” is replaced with the synthetically generated date “31 March 2023”, and the original customer’s
name “Seb Weston-Lewis” is replaced with the synthetically generated customer name “Jon Doe”.
Similarly, the original address of the customer “1000 Collins Ave” is replaced with the synthetically
generated address “1234 Belmond Ave”, the original contact number “7981634015” is replaced with
the synthetically generated contact number “72654653426”, the original credit card number
“**** **** **** 9925” is replaced with the synthetically generated credit card number
“**** **** **** 1234”, and so on. This process is consistent across the various elements on the page.
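A minimal sketch of generating replacements that preserve data type, format, and structure is shown below. It works character by character (digits stay digits, letters stay letters, separators are kept), which is only an assumption for illustration; a production generator would more plausibly draw realistic values such as “Jon Doe” from curated name and address lists.

```python
import random
import string


def synthesize_like(value: str) -> str:
    """Generate a synthetic value mirroring the original's structure:
    digits stay digits, letters stay letters, separators are preserved."""
    out = []
    for ch in value:
        if ch.isdigit():
            out.append(random.choice(string.digits))
        elif ch.isalpha():
            pool = string.ascii_uppercase if ch.isupper() else string.ascii_lowercase
            out.append(random.choice(pool))
        else:
            out.append(ch)  # keep spaces, slashes, asterisks, hyphens as-is
    return "".join(out)


original = {
    "order_date": "29 JUNE 2023",
    "customer_name": "Seb Weston-Lewis",
    "customer_address": "1000 Collins Ave",
    "contact_number": "7981634015",
    "credit_card": "**** **** **** 9925",
}
sanitized = {field: synthesize_like(value) for field, value in original.items()}
print(sanitized)  # same shapes as the originals, different values
```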
[0136] In essence, this final checkout page, 1400C, demonstrates the successful
application of the data sanitization process. Sensitive information has been effectively protected
by replacing it with synthetic data, ensuring that the user may interact with the application securely
while safeguarding their confidential information.
[0137] As will also be appreciated, the above-described techniques may take the form of
computer- or controller-implemented processes and apparatuses for practicing those processes. The
disclosure can also be embodied in the form of computer program code containing instructions
embodied in tangible media, such as floppy diskettes, solid state drives, CD-ROMs, hard drives,
or any other computer-readable storage medium, wherein, when the computer program code is
loaded into and executed by a computer or controller, the computer becomes an apparatus for
practicing the invention. The disclosure may also be embodied in the form of computer program
code or a signal, for example, whether stored in a storage medium, loaded into and/or executed by a
computer or controller, or transmitted over some transmission medium, such as over electrical
wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the
computer program code is loaded into and executed by a computer, the computer becomes an
apparatus for practicing the invention. When implemented on a general-purpose microprocessor,
the computer program code segments configure the microprocessor to create specific logic circuits.
[0138] Thus, the disclosed method and system seek to overcome the technical problem of
sanitization of sensitive data by first analyzing one or more sensitive data present within a page of
an application based on a deterministic algorithm. The method and system further classify
the one or more sensitive data into a high risk sensitive data and a low risk sensitive data based on
a ML classification algorithm. For the one or more sensitive data classified as the high risk
sensitive data, the method and system may perform a destructive sanitization on each of the high
risk sensitive data. The destructive sanitization may include generating synthetic data
corresponding to the high risk sensitive data and replacing the high risk sensitive data with the
synthetic data. For the one or more sensitive data classified as the high risk sensitive data, the
method and system may further perform a non-destructive sanitization on each of the high risk
sensitive data. The non-destructive sanitization may include encrypting the high risk sensitive data
through an encryption key, generating synthetic data corresponding to the high risk sensitive data,
and replacing the high risk sensitive data encrypted within the page with the synthetic data.
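A hedged end-to-end sketch of this flow (analyze, classify, then sanitize destructively or non-destructively) is given below. The helper functions are toy stand-ins rather than the disclosed algorithms, and the Fernet cipher from the third-party cryptography package is assumed as a placeholder for the unspecified encryption step.

```python
import random
import string

from cryptography.fernet import Fernet  # assumed third-party dependency


def synthesize_like(value: str) -> str:
    """Format-preserving synthetic replacement (digits -> digits, letters -> letters)."""
    return "".join(random.choice(string.digits) if c.isdigit()
                   else random.choice(string.ascii_letters) if c.isalpha()
                   else c for c in value)


def classify_risk(field: str) -> str:
    """Toy stand-in for the ML classifier: identity and payment fields are high risk."""
    high_risk_fields = {"customer_name", "customer_address", "credit_card",
                        "contact_number", "order_date"}
    return "high" if field in high_risk_fields else "low"


def sanitize_page(page: dict, destructive: bool = True):
    """Analyze -> classify -> sanitize. Returns the sanitized page plus a vault
    of encrypted originals when non-destructive sanitization is chosen."""
    cipher = Fernet(Fernet.generate_key())
    sanitized, vault = dict(page), {}
    for field, value in page.items():
        if classify_risk(field) != "high":
            continue
        if not destructive:
            vault[field] = cipher.encrypt(str(value).encode())  # recoverable copy
        sanitized[field] = synthesize_like(str(value))          # what the page shows
    return sanitized, vault


page = {"customer_name": "Seb Weston-Lewis",
        "credit_card": "**** **** **** 9925",
        "subtotal": "49.99"}
print(sanitize_page(page, destructive=False)[0])
```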
[0139] As will be appreciated by those skilled in the art, the techniques described in the
various embodiments discussed above are not routine, or conventional, or well understood in the
art. The techniques discussed above provide various advantages that significantly enhance data
security and privacy in the context of sensitive information handling within applications,
particularly in scenarios such as e-commerce transactions and data processing.
[0140] First and foremost, these techniques enable a proactive approach to data
sanitization, offering a multi-faceted strategy to identify and protect sensitive data dynamically.
By employing deterministic algorithms and advanced ML classification algorithms, this approach
not only accurately identifies sensitive data but also assesses its risk profile, distinguishing
between high-risk and low-risk data.
[0141] Furthermore, the ability to perform both destructive and non-destructive
sanitization processes ensures flexibility in data protection. High-risk sensitive data may be
replaced with synthetic data or securely encrypted, thereby minimizing the risk of data exposure
without compromising functionality.
[0142] The incorporation of exact matching, similarity matching, and probability score
analysis, facilitated by contextual-based and pattern matching ML models, further refines the risk
assessment process. This enables real-time decision-making regarding the protection of sensitive
data, ensuring that the most appropriate and effective sanitization methods are applied.
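The sketch below illustrates one way such signals could be blended into a single risk score; the use of difflib for similarity matching, the weights, and the threshold are all assumptions of this example rather than values taken from the disclosure.

```python
from difflib import SequenceMatcher

KNOWN_SENSITIVE_LABELS = {"credit card number", "customer name", "contact number"}


def risk_score(field_label: str, model_probability: float) -> float:
    """Blend exact matching, similarity matching, and a model probability score.
    Weights are illustrative only."""
    label = field_label.lower().strip()
    exact = 1.0 if label in KNOWN_SENSITIVE_LABELS else 0.0
    similarity = max(SequenceMatcher(None, label, known).ratio()
                     for known in KNOWN_SENSITIVE_LABELS)
    return 0.4 * exact + 0.3 * similarity + 0.3 * model_probability


def is_high_risk(field_label: str, model_probability: float,
                 threshold: float = 0.5) -> bool:
    return risk_score(field_label, model_probability) >= threshold


print(is_high_risk("Credit card number", model_probability=0.92))  # likely True
print(is_high_risk("Gift wrap option", model_probability=0.05))    # likely False
```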
[0143] These techniques also offer scalability and adaptability, making them suitable for
various sectors, including retail, finance, healthcare, and more. They may be seamlessly integrated
into web and mobile applications, enhancing the overall security posture of these platforms.
Further, the techniques discussed above may be applied in multiple sectors, for example, the retail
sector, e-commerce sector, online advertising sector, social media sector, telecommunications
sector, insurance sector, automotive industry, financial services, travel sector, transportation
sector, logistics sector, real estate sector, public and social sector, sports sector, energy sector,
mining sector, healthcare sector, education sector, or consumer packaged goods sector. Moreover,
the techniques discussed above may be implemented on one of a consumer website, an enterprise
website, a consumer web application, an enterprise web application, or an in-store display
application to provide a decision experience, a transactional experience, an educational experience,
a browsing and consumption experience, or an assistive experience.
[0144] In light of the above-mentioned advantages and the technical advancements
provided by the disclosed method and system, the claimed steps as discussed above are not routine,
conventional, or well understood in the art, as the claimed steps enable solutions to the existing
problems in conventional technologies. Further, the claimed steps clearly bring an improvement
in the functioning of the device itself as the claimed steps provide a technical solution to a technical
problem.
[0145] The specification has described a method and system for sanitization of sensitive
data. The illustrated steps are set out to explain the exemplary embodiments shown, and it should
be anticipated that ongoing technological development will change the manner in which particular
functions are performed. These examples are presented herein for purposes of illustration, and not
limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined
herein for the convenience of the description. Alternative boundaries can be defined so long as the
specified functions and relationships thereof are appropriately performed. Alternatives (including
equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to
persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall
within the scope and spirit of the disclosed embodiments.
[0146] Furthermore, one or more computer-readable storage media may be utilized in
implementing embodiments consistent with the present disclosure. A computer-readable storage
medium refers to any type of physical memory on which information or data readable by a
processor may be stored. Thus, a computer-readable storage medium may store instructions for
execution by one or more processors, including instructions for causing the processor(s) to perform
steps or stages consistent with the embodiments described herein. The term “computer-readable
medium” should be understood to include tangible items and exclude carrier waves and transient
signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only
memory (ROM), volatile memory, nonvolatile memory, hard drives, CD-ROMs, DVDs, flash
drives, disks, and any other known physical storage media.
[0147] It is intended that the disclosure and examples be considered as exemplary only,
with a true scope and spirit of disclosed embodiments being indicated by the following claims.