Abstract: A system(100) for searching and retrieval of data in a digital document from a physical document and vice-versa. The present invention includes a wearable device(104) and a computing device(102). The wearable device(104) includes a camera(106), a networking module(112), a microphone(116), and a wearable device processor(114). The camera(106) captures a textual image of the physical book. The computing device processor(118) executes picture identifier computer readable instructions that recognize text and pictures to search the location of the textual image in a digital document version of the same physical book and in a digital audio file. The computing device processor(118) executes page estimation computer readable instructions that estimate and indicate the page number in the physical book. The computing device processor(118) further executes computer readable instructions for research that capture phrases and questions from segments of the textual images marked by the user and initiate research using internet resources for finding answers to the corresponding questions.
The present invention relates to a system for searching and retrieval of data in a digital document from a physical document and vice-versa. More particularly, the present invention relates to a system and method for selective searching and retrieval of data in a digital document from a physical document and vice-versa.
BACKGROUND OF THE INVENTION
Background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
With the rapid development of the mobile Internet and the rapid spread of mobile devices, cross-platform online education, digital books, online radio, audio books, and online storytelling have become more common. The entire Internet industry offers real-time courses, quality courses, digital books, online radio, audio books, and online storytelling. The physical book and the physical classroom seem irrelevant these days. But a physical book has its own aura, the essence of reading it physically. There are also cases where a person cannot carry a book; in such cases, he can read the digital book on a smartphone or tablet. Often, it is necessary to shift from the physical book to the digital book and vice-versa in between reading. Presently, there are few ways to shift between a physical book and a digital book.
US7421155B2 discloses a facility for storing a text capture data structure for a particular user. The data structure comprises a number of entries. Each entry corresponds to a text capture operation performed by the user from a rendered document. Each entry contains information specifying the text captured in the text capture operation.
In the prior art, with reference to shifting from a physical book to a digital book and vice-versa in between reading, the existing inventions are unable to solve these problems. The present invention is capable of overcoming all drawbacks of the existing inventions; hence, there is a need for the present invention.
OBJECTIVE OF THE INVENTION
The main objective of the present invention is to provide a system for searching and retrieval of data in a digital document from a physical document and vice-versa.
Another objective of the present invention is to provide an effective teaching method and system for conducting teaching activities using Internet technologies, especially wireless communication networks.
Yet another objective of the present invention is to provide a system and method for assisting extraction of content from a physical book and searching information based on the extracted content.
Yet another objective of the present invention is to provide an immersive learning experience by searching for answers to questions extracted from the image of the physical book.
Yet another objective of the present invention is to provide a system and method for selecting specific content from an image of a physical book.
Further objectives, advantages, and features of the present invention will become apparent from the detailed description provided hereinbelow, in which various embodiments of the disclosed invention are illustrated by way of example.
SUMMARY OF THE INVENTION
The present invention relates to a system for searching and retrieval of data in a digital document from a physical document and vice-versa. The present invention includes a wearable device and a computing device. The wearable device is worn by the user. In an embodiment, the wearable device is selected from spectacles, a helmet, a VR headset, a smart watch, and a smartphone. The wearable device includes a camera, a networking module, a microphone, and a wearable device processor. The camera is affixed to the wearable device. The camera captures a textual image of the physical book in response to user control. The microphone captures voice commands of the user. The wearable device processor controls all components of the wearable device, namely the camera, the microphone, and the networking module. The computing device includes a computing device processor and a memory. In an embodiment, the computing device is selected from a desktop computer, laptop, tablet, and smartphone. The computing device processor segments the textual images captured by the camera from the physical book. The computing device processor executes picture identifier computer readable instructions that recognize text and pictures to search the location of the textual image in a digital document version of the same physical book and in a digital audio file. The computing device processor executes page estimation computer readable instructions that estimate and indicate the page number in the physical book based on the page number in the digital document version of the same physical book. The computing device processor further executes computer readable instructions for research that capture phrases and questions from segments of the textual images marked by the user and initiate research using internet resources for finding answers to the corresponding questions. The memory is configured to store information from the textual images. The memory stores information on all the commands and gestures performed by the user.
The memory stores information from the textual images that are segmented by the computing device processor in response to user control and also stores the picture identifier computer readable instructions, the page estimation computer readable instructions, and the computer readable instructions for research. Herein, the networking module wirelessly connects the wearable device to the computing device. Herein, the wearable device processor recognizes the voice command of the user and sends the voice command to the computing device to perform the action given by the user.
The main advantage of the present invention is that it provides a system for searching and retrieval of data in a digital document from a physical document and vice-versa.
Another advantage of the present invention is that it provides an effective teaching method and system for conducting teaching activities using Internet technologies, especially wireless communication networks.
Yet another advantage of the present invention is that it provides a system and method for assisting extraction of content from a physical book and searching information based on the extracted content.
Yet another advantage of the present invention is that it provides an immersive learning experience by searching for answers to questions extracted from the image of the physical book.
Yet another advantage of the present invention is that it provides a system and method for selecting specific content from an image of a physical book.
Further objectives, advantages, and features of the present invention will become apparent from the detailed description provided hereinbelow, in which various embodiments of the disclosed invention are illustrated by way of example.
DETAILED DESCRIPTION OF THE INVENTION
While this invention is susceptible to embodiment in many different forms, there is shown in the drawings and will herein be described in detail specific embodiments, with the understanding that the present disclosure of such embodiments is to be considered as an example of the principles and not intended to limit the invention to the specific embodiments shown and described. In the description below, like reference numerals are used to describe the same, similar or corresponding parts in the several views of the drawings. This detailed description defines the meaning of the terms used herein and specifically describes embodiments in order for those skilled in the art to practice the invention.
Definition
The terms "a" or "an", as used herein, are defined as one or as more than one. The term "plurality", as used herein, is defined as two or more than two. The term "another", as used herein, is defined as at least a second or more. The terms "including" and/or "having", as used herein, are defined as comprising (i.e., open language). The term "coupled", as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. The term "comprising" is not intended to limit inventions to only claiming the present invention with such comprising language. Any invention using the term "comprising" could be separated into one or more claims using "consisting" or "consisting of" claim language and is so intended. The term "comprising" is used interchangeably with the terms "having" or "containing". Reference throughout this document to "one embodiment", "certain embodiments", "an embodiment", "another embodiment", and "yet another embodiment" or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of such phrases in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. The term "or" as used herein is to be interpreted as an inclusive "or" meaning any one or any combination. Therefore, "A, B or C" means any of the following: "A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
As used herein, the term "one or more" generally refers to, but not limited to, singular as well as the plural form of the term.
The drawings featured in the figures are for the purpose of illustrating certain convenient embodiments of the present invention and are not to be considered as limitation thereto. Term "means" preceding a present participle of an operation indicates a desired function for which there is one or more embodiments, i.e., one or more methods, devices, or apparatuses for achieving the desired function and that one skilled in the art could select from these or their equivalent in view of the disclosure herein and use of the term "means" is not intended to be limiting.
Fig. 1 illustrates a system(100) for searching and retrieval of data in a digital document from a physical document and vice-versa. The system(100) includes a wearable device(104) and a computing device(102). The wearable device(104) includes a camera(106), a networking module(112), a microphone(116), and a wearable device processor(114). The camera(106) is affixed to the wearable device(104). The computing device(102) includes a computing device processor(118) and a memory(120). The networking module(112) wirelessly connects the wearable device(104) to the computing device(102). The present invention relates to a system for searching and retrieval of data in a digital document from a physical document and vice-versa. The present invention includes a wearable device and a computing device. The wearable device is worn by the user. In an embodiment, the wearable device is selected from spectacles, a helmet, a VR headset, a smart watch, and a smartphone. The wearable device includes a camera, a networking module, a microphone, and a wearable device processor. The camera is affixed to the wearable device. The camera captures a textual image of the physical book in response to user control. The microphone captures voice commands of the user. The wearable device processor controls all components of the wearable device, namely the camera, the microphone, and the networking module. The computing device includes a computing device processor and a memory. In an embodiment, the computing device is selected from a desktop computer, laptop, tablet, and smartphone. The computing device processor segments the textual images captured by the camera from the physical book. The computing device processor executes picture identifier computer readable instructions that recognize text and pictures to search the location of the textual image in a digital document version of the same physical book and in a digital audio file. The computing device processor executes page estimation computer readable instructions that estimate and indicate the page number in the physical book based on the page number in the digital document version of the same physical book. The computing device processor further executes computer readable instructions for research that capture phrases and questions from segments of the textual images marked by the user and initiate research using internet resources for finding answers to the corresponding questions. The memory is configured to store information from the textual images. The memory stores information on all the commands and gestures performed by the user. The memory stores information from the textual images that are segmented by the computing device processor in response to user control and also stores the picture identifier computer readable instructions, the page estimation computer readable instructions, and the computer readable instructions for research. Herein, the networking module wirelessly connects the wearable device to the computing device. Herein, the wearable device processor recognizes the voice command of the user and sends the voice command to the computing device to perform the action given by the user.
In an embodiment, the computing device processor executes picture identifier computer readable instructions that use the image processing algorithm to perform segmentation of a particular location, marked by the user using a finger, pen, stylus, or circle, from the rest of the document and capture the exact location.
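The segmentation step above is described only functionally in the specification. As a minimal illustrative sketch (not the claimed image processing algorithm), one may assume the user's mark has already been separated into a binary ink mask; the marked region is then recovered as a bounding box so the enclosed area can be cropped for text recognition. The mask and page representations here are simplified, hypothetical inputs:

```python
def bounding_box(mask):
    """Return (x_min, y_min, x_max, y_max) of all marked pixels in a 2-D 0/1 mask."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return None  # no mark found on the page
    return min(xs), min(ys), max(xs), max(ys)

def crop(page, box):
    """Crop the page (a 2-D list of pixels or characters) to the marked region."""
    x0, y0, x1, y1 = box
    return [row[x0:x1 + 1] for row in page[y0:y1 + 1]]

# Hypothetical usage: a small mask where the user circled the centre of a page.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
page = [list("abcd"), list("efgh"), list("ijkl"), list("mnop")]
region = crop(page, bounding_box(mask))
```

A real implementation would operate on camera frames and separate the ink stroke from printed text first; this sketch only shows the geometric selection that follows.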
In an embodiment of the present invention, a system for searching and retrieval of data in a digital document from a physical document and vice-versa is provided. The present invention includes one or more wearable devices and one or more computing devices. The one or more wearable devices are worn by the user. In an embodiment, the wearable device is selected from spectacles, a helmet, a VR headset, a smart watch, and a smartphone. The one or more wearable devices include a camera, a networking module, one or more microphones, and a wearable device processor. The camera is affixed to the one or more wearable devices. The camera captures a textual image of the physical book in response to user control. The one or more microphones capture voice commands of the user. The wearable device processor controls all components of the one or more wearable devices, namely the camera, the one or more microphones, and the networking module. The one or more computing devices include a computing device processor and a memory. In an embodiment, the one or more computing devices include but are not limited to a desktop computer, laptop, tablet, and smartphone. The computing device processor segments the textual images captured by the camera from the physical book. The computing device processor executes picture identifier computer readable instructions that recognize text and pictures to search the location of the textual image in a digital document version of the same physical book and in a digital audio file. The computing device processor executes page estimation computer readable instructions that estimate and indicate the page number in the physical book based on the page number in the digital document version of the same physical book.
The computing device processor further executes computer readable instructions for research that capture phrases and questions from segments of the textual images marked by the user and initiate research using internet resources for finding answers to the corresponding questions. The memory is configured to store information from the textual images. The memory stores information on all the commands and gestures performed by the user. The memory stores information from the textual images that are segmented by the computing device processor in response to user control and also stores the picture identifier computer readable instructions, the page estimation computer readable instructions, and the computer readable instructions for research. Herein, the networking module wirelessly connects the one or more wearable devices to the computing device. Herein, the wearable device processor recognizes the voice command of the user and sends the voice command to the computing device to perform the action given by the user.
In an embodiment, the computing device processor executes picture identifier computer readable instructions that use the image processing algorithm to perform segmentation of a particular location, marked by the user using a finger, pen, stylus, or circle, from the rest of the document and capture the exact location.
In an embodiment, the present invention relates to a method for searching and retrieval of data in the digital document version of the same physical book, the method includes:
the user connects a computing device to the wearable device through the wireless network with the help of the networking module;
the user sends a voice command with the help of the microphone to a camera of the wearable device to capture one or more textual images from a physical book;
the wearable device processor sends the captured image to the computing device processor;
the computing device processor segments the at least one textual image from the rest of the document by executing picture identifier computer readable instructions that use the image processing algorithm;
the information is retrieved from the at least one segmented textual image and is stored in the memory of the computing device;
on a command given by the user, the computing device processor executes the picture identifier computer readable instructions that search the location of the textual image in the digital document version of the same physical book and in the digital audio file;
thus, the computing device processor takes the user to the exact location in the digital document version of the same physical book where the user has left off reading in the physical book.
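The search step in the method above can be sketched as a fuzzy match of the recognized snippet against the pages of the digital version, so the reader can be taken to where reading stopped. This is an illustrative stand-in: the specification does not name a matching algorithm, the OCR stage is assumed to have already produced the snippet, and `locate_snippet` with its sample pages is hypothetical:

```python
import difflib  # standard-library fuzzy string matching

def locate_snippet(snippet, pages):
    """Return the 1-based page number whose text best matches the OCR'd snippet."""
    best_page, best_score = None, 0.0
    for number, text in enumerate(pages, start=1):
        # ratio() rewards long common subsequences; OCR noise lowers but
        # rarely destroys the score for the true page.
        score = difflib.SequenceMatcher(None, snippet.lower(), text.lower()).ratio()
        if score > best_score:
            best_page, best_score = number, score
    return best_page

# Hypothetical digital book as a list of page texts.
pages = [
    "It was the best of times",
    "The quick brown fox jumps",
    "Call me Ishmael tonight",
]
```

A production system would index the full book text rather than scanning every page, but the comparison logic is the same.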
In an embodiment, the present invention relates to a method of conducting action based on the captured image, the method includes:
a method of locating text in the digital document version of the same physical book, the method having
using a speech command, the user points to a particular location on the image with a finger, stylus, pen, or pencil,
the picture identifier computer readable instructions perform the segmentation of the finger, stylus, pen, or pencil to capture the exact location in the image, and
thus, the computing device processor takes the user to the exact location in the digital document version of the same physical book; and
a method of conducting search based on the captured images, the method having
a sequence of images is captured using the wearable device,
in the sequence of images, the user draws symbols that are interpreted as different commands,
the user makes a circle around a text, and the drawing of the circle is interpreted as a command to search for an answer, and
based on the command, the computing device processor searches for an answer on the internet and also performs certain actions;
wherein, along with the above actions, the gesture command and speech command are also given; this speech command is recognized and converted into an action, manifested by the gesture.
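The gesture-and-speech interpretation above can be sketched as a small dispatch table: drawn symbols (here already classified into labels such as "circle") map to actions, and an accompanying speech command refines the action. The gesture labels, action names, and the `interpret` function are hypothetical; the specification fixes no particular vocabulary:

```python
# Hypothetical mapping from classified gesture labels to system actions.
GESTURE_ACTIONS = {
    "circle": "search_answer",      # circling text is the claimed search command
    "underline": "highlight_text",  # assumed additional gesture, for illustration
}

def interpret(gesture, speech=None):
    """Resolve a drawn gesture, optionally refined by a speech command, into one action."""
    action = GESTURE_ACTIONS.get(gesture, "no_op")
    if speech:
        # The speech command is recognized and attached to the gesture's action.
        action = f"{action}:{speech.strip().lower().replace(' ', '_')}"
    return action
```

For example, circling a question while saying nothing yields the plain answer search, while adding a spoken qualifier narrows the action.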
In an embodiment, the present invention relates to a method for estimating the page number in the physical book from digital media, the method comprising:
the user sends a voice command through the wearable device processor to the camera of the wearable device to capture a plurality of pages from the physical book;
the computing device processor executes page estimation computer readable instructions that estimate the content that would fit on one page of the physical book based on the content of the digital document version of the same physical book;
when the user has scrolled over a plurality of pages in the digital book or digital media, an estimate is generated regarding the location of the particular page in the physical book, and with the estimated location of the page, the user is able to open the particular page in the physical book while wearing the wearable device; the wearable device again captures the page information to cross-check whether the estimate is correct;
in case the estimated location of the page is correct, the wearable device processor sends an augmented signal to the user indicating the exact location of the point left on the page by the user;
in case the estimated location of the page is not correct, the wearable device processor sends a command to the camera of the wearable device to capture the image of the wrong page; and
the computing device processor executes page estimation computer readable instructions that provide a suggestion to the user to turn after or before the wrong page in the physical book, and as soon as the right page is reached, an augmented signal is given by the wearable device to the user. Herein, the page estimation computer readable instructions are a neural network model that self-learns from errors and increases the accuracy of estimation.
In an embodiment, the present invention relates to a method for searching and retrieval of data in the digital document version of the same physical book, the method includes:
the user connects a computing device to the wearable device through the wireless network with the help of the networking module;
the user sends a voice command with the help of the one or more microphones to a camera of the wearable device to capture one or more textual images from a physical book;
the wearable device processor sends the captured image to the computing device processor;
the computing device processor segments the at least one textual image from the rest of the document by executing picture identifier computer readable instructions that use the image processing algorithm;
the information is retrieved from the at least one segmented textual image and is stored in the memory of the computing device;
on a command given by the user, the computing device processor executes the picture identifier computer readable instructions that search the location of the textual image in the digital document version of the same physical book and in the digital audio file;
thus, the computing device processor takes the user to the exact location in the digital document version of the same physical book where the user has left off reading in the physical book.
In an embodiment, the present invention relates to a method of conducting action based on the captured image, the method includes:
a method of locating text in the digital document version of the same physical book, the method having
using a speech command, the user points to a particular location on the image with a finger, stylus, pen, or pencil,
the picture identifier computer readable instructions perform the segmentation of the finger, stylus, pen, or pencil to capture the exact location in the image, and
thus, the computing device processor takes the user to the exact location in the digital document version of the same physical book; and
a method of conducting search based on the captured images, the method having
a sequence of images is captured using the one or more wearable devices,
in the sequence of images, the user draws symbols that are interpreted as different commands,
the user makes a circle around a text, and the drawing of the circle is interpreted as a command to search for an answer, and
based on the command, the computing device processor searches for an answer on the internet and also performs certain actions;
wherein, along with the above actions, the gesture command and speech command are also given; this speech command is recognized and converted into an action, manifested by the gesture.
In an embodiment, the present invention relates to a method for estimating the page number in the physical book from digital media, the method comprising:
the user sends a voice command through the wearable device processor to the camera of the wearable device to capture a plurality of pages from the physical book;
the computing device processor executes page estimation computer readable instructions that estimate the content that would fit on one page of the physical book based on the content of the digital document version of the same physical book;
when the user has scrolled over a plurality of pages in the digital book or digital media, an estimate is generated regarding the location of the particular page in the physical book, and with the estimated location of the page, the user is able to open the particular page in the physical book while wearing the wearable device; the wearable device again captures the page information to cross-check whether the estimate is correct;
in case the estimated location of the page is correct, the wearable device processor sends an augmented signal to the user indicating the exact location of the point left on the page by the user;
in case the estimated location of the page is not correct, the wearable device processor sends a command to the camera of the wearable device to capture the image of the wrong page; and
the computing device processor executes page estimation computer readable instructions that provide a suggestion to the user to turn after or before the wrong page in the physical book, and as soon as the right page is reached, an augmented signal is given by the wearable device to the user. Herein, the page estimation computer readable instructions are a neural network model that self-learns from errors and increases the accuracy of estimation.
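The page-estimation steps above can be sketched with simple arithmetic: the physical page is estimated from the reader's position in the digital text, assuming a roughly constant amount of content per printed page, and a cross-check suggests which way to turn. The specification instead describes a self-learning neural network model; the constant `chars_per_page` calibration here is a hypothetical simplification:

```python
def estimate_physical_page(digital_offset, chars_per_page=1800):
    """Map a character offset in the digital text to a 1-based physical page number.

    chars_per_page is an assumed calibration value (roughly one printed page
    of prose), not a figure taken from the specification.
    """
    return digital_offset // chars_per_page + 1

def turn_suggestion(estimated, actual):
    """After capturing the opened page, tell the reader which way to turn."""
    if actual == estimated:
        return "correct"
    # The reader opened a page before the target: turn forward; otherwise back.
    return "turn forward" if actual < estimated else "turn back"
```

The claimed neural network would replace the fixed divisor with a learned mapping that improves as wrong-page corrections accumulate; the control flow around it stays the same.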
Further objectives, advantages, and features of the present invention will become apparent from the detailed description provided herein below, in which various embodiments of the disclosed present invention are illustrated by way of example and appropriate reference to accompanying drawings. Those skilled in the art to which the present invention pertains may make modifications resulting in other embodiments employing principles of the present invention without departing from its spirit or characteristics, particularly upon considering the foregoing teachings. Accordingly, the described embodiments are to be considered in all respects only as illustrative, and not restrictive, and the scope of the present invention is, therefore, indicated by the appended claims rather than by the foregoing description or drawings. Consequently, while the present invention has been described with reference to particular embodiments, modifications of structure, sequence, materials and the like apparent to those skilled in the art still fall within the scope of the invention as claimed by the applicant.
We Claim
1. A system(100) for searching and retrieval of data in a digital document from a physical document and vice-versa, the said system(100) comprises:
an at least one wearable device(104), the at least one wearable device(104) is worn by the user, the wearable device(104) having
a camera(106), the camera(106) is affixed to the at least one wearable device(104), the camera(106) captures a textual image of the physical document in response to user control,
a networking module(112),
an at least one microphone(116), the at least one microphone(116) captures voice commands of the user,
a wearable device processor(114), the wearable device processor(114) controls all components of the at least one wearable device(104), that are the camera(106), the at least one microphone(116) and the networking module(112);
an at least one computing device(102), the at least one computing device(102) having
a computing device processor(118), the computing device processor(118) segments the textual images captured by the camera(106) from the physical book, and the computing device processor(118) executes picture identifier computer readable instructions that recognize text and pictures to search the location of the textual image in a digital document version of the same physical book and in a digital audio file, and the computing device processor(118)
executes page estimation computer readable instructions that estimate and indicate the page number in the physical book based on the page number in the digital document version of the same physical book, and again the computing device processor(118) executes computer readable instructions for research that capture phrases and questions from segments of the textual images marked by the user and initiate research using Internet resources for finding answers to corresponding questions,
a memory(120), the memory(120) is configured to store information from the textual images, store information of all the commands and gestures performed by the user, store information from the textual images that are segmented by the computing device processor(118) in response to user control, and also store the picture identifier computer readable instructions, the page estimation computer readable instructions and the computer readable instructions for research;
wherein, the networking module(112) wirelessly connects the at least one wearable device(104) to the computing device(102);
wherein, the wearable device processor(114) recognizes the voice command of the user and sends the voice command to the computing device(102) to perform the action given by the user.
2. The computing device processor(118) as claimed in claim 1, wherein the computing device processor(118) executes picture identifier computer readable instructions that use the image processing algorithm to perform segmentation of a particular location, marked by the user using a finger, pen, stylus, or circle, from the rest of the document and capture the exact location.
3. The wearable device(104) as claimed in claim 1, wherein the wearable device(104) is selected from spectacles, a helmet, a VR headset, a smart watch, and a smartphone.
4. The at least one computing device(102) as claimed in claim 1, wherein the at least one computing device(102) is selected from a desktop computer, laptop, tablet, and smartphone.
5. The system(100) as claimed in claim 1, wherein a method for searching and retrieval of data in the digital document version of the same physical book comprises:
the user connects the computing device(102) to the wearable device(104) through the wireless network with the help of the networking module(112);
the user sends a voice command, with the help of the at least one microphone(116), to the camera(106) of the wearable device(104) to capture at least one textual image from the physical book;
the wearable device processor(114) sends the captured image to the computing device processor(118);
the computing device processor(118) segments the at least one textual image from the rest of the document by executing the picture identifier computer readable instructions that use the image processing algorithm;
the information is retrieved from the at least one segmented textual image and is stored in the memory of the computing device(102);
on a command given by the user, the computing device processor(118) executes the picture identifier computer readable instructions that search for the location of the textual image in the digital document version of the same physical book and the digital audio file; and
thus, the computing device processor(118) takes the user to the exact location in the digital document version of the same physical book where the user has left off reading in the physical book.
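For illustration only, the location-search step of claim 5 could be sketched as a fuzzy text match between the OCR-captured snippet and the full digital text; the function name, sliding-window step, and 0.6 acceptance threshold are assumptions for this sketch, not part of the specification:

```python
import difflib

def locate_snippet(snippet: str, digital_text: str) -> int:
    """Return the character offset in `digital_text` that best matches
    the OCR-captured `snippet`, tolerating small OCR errors.
    Returns -1 if no sufficiently similar region is found."""
    window = len(snippet)
    step = max(1, window // 4)
    best_offset, best_ratio = -1, 0.0
    # Slide a window over the digital text and score each candidate
    # with a fuzzy similarity ratio in [0.0, 1.0].
    for offset in range(0, max(1, len(digital_text) - window + 1), step):
        candidate = digital_text[offset:offset + window]
        ratio = difflib.SequenceMatcher(None, snippet, candidate).ratio()
        if ratio > best_ratio:
            best_offset, best_ratio = offset, ratio
    return best_offset if best_ratio > 0.6 else -1

text = ("It was the best of times, it was the worst of times, "
        "it was the age of wisdom.")
# The snippet contains OCR-style noise ("0" for "o", "1" for "i")
# yet still resolves to the right neighbourhood of the text.
print(locate_snippet("the w0rst of t1mes", text))
```

A production system would run this against the book's full text and map the winning character offset back to a page and paragraph in the digital edition.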
6. The method as claimed in claim 5, wherein a method of conducting an action based on the captured image comprises:
a method of locating text in the digital document version of the same physical book, the method comprising:
using a speech command, the user points to a particular location in the image with a finger, stylus, pen, or pencil;
the picture identifier computer readable instructions perform segmentation of the finger, stylus, pen, or pencil to capture the exact location in the image; and
thus, the computing device processor(118) takes the user to the exact location in the digital document version of the same physical book;
and a method of conducting a search based on the captured images, the method comprising:
a sequence of images is captured using the at least one wearable device(104);
in the sequence of images, the user draws symbols that are interpreted as different commands;
the user makes a circle around a text, and the drawing of the circle is interpreted as a command to search for an answer; and
based on the command, the computing device processor(118) searches for the answer on the internet and also performs certain actions;
wherein, along with the above actions, a gesture command and a speech command are also given; the speech command is recognized and converted into an action manifested by the gesture.
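The gesture-plus-speech interpretation of claim 6 might, under assumed symbol names, look like the following minimal sketch; the gesture labels and action strings are illustrative placeholders, not taken from the specification:

```python
# Hypothetical mapping from a recognized gesture symbol to a command;
# the symbol names below are illustrative only.
GESTURE_COMMANDS = {
    "circle": "search_answer",   # circling text triggers an internet search
    "underline": "highlight",
    "arrow": "jump_to_location",
}

def interpret(gesture: str, speech: str = "") -> dict:
    """Combine a recognized gesture with an optional speech command:
    the speech command qualifies the action manifested by the gesture."""
    action = GESTURE_COMMANDS.get(gesture, "none")
    if speech:
        # The spoken phrase refines the gesture-derived action,
        # e.g. "search_answer:define_this".
        action = f"{action}:{speech.strip().lower().replace(' ', '_')}"
    return {"gesture": gesture, "action": action}

print(interpret("circle", "define this"))
```

In a real pipeline the `gesture` argument would come from a vision model segmenting the drawn circle, and `speech` from the microphone(116)'s speech recognizer.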
7. The system(100) as claimed in claim 1, wherein a method for estimating the page number from digital media comprises:
the user sends a voice command through the wearable device processor(114) to the camera(106) of the wearable device(104) to capture a plurality of pages from the physical book;
the computing device processor(118) executes the page estimation computer readable instructions that estimate the amount of content that would fit on one page of the physical book based on the content of the digital document version of the same physical book;
once the user has scrolled over a plurality of pages in the digital book or digital media, an estimate is generated regarding the location of the corresponding page in the physical book; with the estimated location of the page, the user is able to open the particular page in the physical book while wearing the wearable device(104), and the wearable device(104) again captures the page information to cross-check whether the estimate is correct;
in case the estimated location of the page is correct, the wearable device processor(114) sends an augmented signal to the user indicating the exact location of the point left on the page by the user;
in case the estimated location of the page is not correct, the wearable device processor(114) sends a command to the camera(106) of the wearable device(104) to capture the image of the wrong page; and
the computing device processor(118) executes the page estimation computer readable instructions that provide a suggestion to the user to turn pages after or before the wrong page in the physical book, and as soon as the right page is reached, an augmented signal is given by the wearable device(104) to the user.
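In its simplest form, the estimate of claim 7 — how much digital content fits on one physical page — reduces to integer division over a characters-per-page figure; the value of 1800 characters per page below is an assumed number for illustration:

```python
def estimate_physical_page(digital_offset: int, chars_per_physical_page: int) -> int:
    """Estimate the physical page number (1-based) containing the
    character at `digital_offset` in the digital text, assuming a
    roughly constant amount of text per physical page."""
    return digital_offset // chars_per_physical_page + 1

# Assumed figure: roughly 1800 characters fit on one physical page.
print(estimate_physical_page(digital_offset=45_000,
                             chars_per_physical_page=1800))  # -> 26
```

The cross-check step of the claim would then compare this estimate against the page actually captured by the camera(106) and suggest turning forward or backward by the difference.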
8. The method as claimed in claim 7, wherein the page estimation computer readable instructions comprise a neural network model that self-learns from its errors and increases the accuracy of the estimation.
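Claim 8 specifies a neural network; as a much-simplified stand-in, a running bias correction updated from each observed error illustrates the same feedback loop (the class name, learning rate, and characters-per-page figure are assumptions of this sketch):

```python
class SelfCorrectingPageEstimator:
    """Simplified stand-in for the self-learning model of claim 8:
    a running bias (in pages) is updated from each observed error."""

    def __init__(self, chars_per_page: float, learning_rate: float = 0.5):
        self.chars_per_page = chars_per_page
        self.bias = 0.0                 # learned correction, in pages
        self.learning_rate = learning_rate

    def estimate(self, digital_offset: int) -> int:
        raw = digital_offset / self.chars_per_page + 1
        return max(1, round(raw + self.bias))

    def feedback(self, estimated_page: int, actual_page: int) -> None:
        # Nudge the bias by a fraction of the observed error, so that
        # repeated corrections drive future estimates toward the truth.
        error = actual_page - estimated_page
        self.bias += self.learning_rate * error

est = SelfCorrectingPageEstimator(chars_per_page=1800)
p = est.estimate(45_000)            # initial estimate
est.feedback(p, actual_page=28)     # user actually found the text on page 28
print(est.estimate(45_000))         # subsequent estimates shift toward 28
```

A neural model would generalize this by conditioning the correction on features such as font size, illustrations, and chapter breaks rather than a single global bias.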
| # | Name | Date |
|---|---|---|
| 1 | 202211011352-STATEMENT OF UNDERTAKING (FORM 3) [02-03-2022(online)].pdf | 2022-03-02 |
| 2 | 202211011352-REQUEST FOR EXAMINATION (FORM-18) [02-03-2022(online)].pdf | 2022-03-02 |
| 3 | 202211011352-PROOF OF RIGHT [02-03-2022(online)].pdf | 2022-03-02 |
| 4 | 202211011352-POWER OF AUTHORITY [02-03-2022(online)].pdf | 2022-03-02 |
| 5 | 202211011352-FORM 18 [02-03-2022(online)].pdf | 2022-03-02 |
| 6 | 202211011352-FORM 1 [02-03-2022(online)].pdf | 2022-03-02 |
| 7 | 202211011352-DRAWINGS [02-03-2022(online)].pdf | 2022-03-02 |
| 8 | 202211011352-DECLARATION OF INVENTORSHIP (FORM 5) [02-03-2022(online)].pdf | 2022-03-02 |
| 9 | 202211011352-COMPLETE SPECIFICATION [02-03-2022(online)].pdf | 2022-03-02 |
| 10 | 202211011352-FER.pdf | 2025-03-13 |
| 1 | SearchStrategyE_18-07-2024.pdf | |