
Technology Based Learning System For Visually Impaired Persons

Abstract: According to an aspect of the present disclosure, a tool kit is disclosed. The tool kit contains a sheet containing a tactile drawing, where at least one of (i) the tactile drawing and (ii) one or more parts of the tactile drawing, is indicated with a corresponding identifier in braille characters. The tool kit also contains a reader device containing a processing unit, a memory, an input unit containing braille keys where each of a first set of the braille keys is assigned with a corresponding braille character, and an output unit containing a speaker. The reader device is configured to receive an input via the first set of the braille keys, where at least a part of the input corresponds to an identifier, and provide an audio output based on the part of the input via the speaker. Figure 1


Patent Information

Filing Date: 31 May 2024
Publication Number: 23/2025
Publication Type: INA
Invention Field: ELECTRONICS

Applicants

Vembi Technologies Private limited
HD230, WeWork Embassy TechVillage, Block L, Devarabisanahalli, Outer Ring Rd, Bellandur, Bengaluru, Karnataka 560103, India

Inventors

1. Rajagopal N
49/1 Rainbow drive layout, Sarjapur Road, Bengaluru, Karnataka, India – 560035
2. Kartik P
Jain International Residential School, Jakkasandra post, Kanakapura Taluk, Ramanagara district, Karnataka, India – 562112
3. Supriya Dey
83-302 L&T SOUTH CITY, OFF BANNERGHATTA ROAD, Bengaluru, Karnataka, India - 560076

Specification

Priority Claim
[001] The instant patent application is related to and claims priority from the co-pending India provisional patent application entitled, “TECHNOLOGY-BASED LEARNING SYSTEM FOR VISUALLY IMPAIRED PERSONS”, Application No: 202441042512, Filed: 31-May-2024, attorney docket number: VEMB-301-INPR, which is incorporated in its entirety herewith.

Background of the Disclosure
[002] Technical Field
[003] The present disclosure relates to visual impairment, and more specifically to a technology-based learning system for visually impaired persons.
[004] Related Art
[005] Visual impairment refers to partial or total loss of vision of a person, impacting the person’s ability to see clearly.
[006] Technologies such as braille readers have been provided to aid learning of persons with visual impairment. As well known in the relevant arts, a braille reader is an electro-mechanical device containing an array of braille cells, each braille cell presenting a specific braille character.
[007] However, the inventors have realized that the state of the art does not adequately extend such technologies to comprehensively address the various learning needs of persons with visual impairment. For instance, the state of the art does not have efficient mechanisms for aiding persons with visual impairment in understanding drawings.
[008] Aspects of the present disclosure are directed to technological improvements that aid the persons with visual impairment in learning.

Brief Description of the Drawings
[009] Example embodiments of the present disclosure will be described with reference to the accompanying drawings briefly described below.
[010] Figure 1 is a diagram illustrating an example tool kit, according to an aspect of the present disclosure.
[011] Figure 2 is a flowchart illustrating the manner in which reader device of the present disclosure provides an audio output in response to inputs received via braille keys, according to an aspect of the present disclosure.
[012] Figure 3 depicts an example data structure implemented at reader device of the present disclosure.
[013] Figure 4 is a sequence diagram illustrating the interactions between reader device and a server system of the present disclosure.
[014] Figure 5 is a block diagram illustrating the details of a digital processing system in which various aspects of the present disclosure are operative by execution of appropriate executable modules.
[015] In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements. The drawing in which an element first appears is indicated by the leftmost digit(s) in the corresponding reference number.

Detailed Description of the Embodiments of the Disclosure
[016] 1. Overview
[017] Aspects of the present disclosure are directed to a technology-based learning system for visually impaired persons.
[018] According to an aspect of the present disclosure, a tool kit is disclosed. The tool kit contains a sheet containing a tactile drawing, where at least one of (i) the tactile drawing and (ii) one or more parts of the tactile drawing, is indicated with a corresponding identifier in braille characters. The tool kit also contains a reader device containing a processing unit, a memory, an input unit containing braille keys where each of a first set of the braille keys is assigned with a corresponding braille character, and an output unit containing a speaker. The reader device is configured to receive an input via the first set of the braille keys, where at least a part of the input corresponds to an identifier, and provide an audio output based on the part of the input via the speaker.
[019] In an embodiment, the reader device is further configured to identify a file based on the part of the input, and the providing contains outputting contents of the file as the audio output.
[020] In another embodiment, the reader device is further configured to determine that the file is not present in the memory and send a request to a server system communicatively coupled with the reader device for the file, and receive the file in response to the request.
[021] In yet another embodiment, the reader device is further configured to send the request using a uniform resource identifier (URI) of the server system, where device settings of the reader device are pre-configured to contain the URI.
[022] In yet another embodiment, the reader device supports a remote diagnosis mode, where the reader device is accessed from a remote device for diagnosis.
[023] According to another aspect of the present disclosure, a method performed at a reader device is disclosed. The method contains receiving an input, via a plurality of braille keys, where at least a part of the input corresponds to an identifier of at least one of (i) a tactile drawing and (ii) one or more parts of the tactile drawing. The method also contains providing, via a speaker, an audio output based on the at least a part of the input.
[024] In an embodiment, the method further contains identifying a file based on the at least a part of the input, and the providing contains outputting contents of the file as the audio output.
[025] In another embodiment, the method further contains determining that the file is not present in a memory of the reader device and sending a request to a server system communicatively coupled with the reader device for the file, and receiving the file in response to the request.
[026] In yet another embodiment, the method further contains sending the request using a uniform resource identifier (URI) of the server system, where device settings of the reader device are pre-configured to contain the URI.
[027] According to yet another aspect of the present disclosure, a reader device is disclosed. The reader device contains a processing unit, a memory, an input unit containing braille keys where each of a first set of the braille keys is assigned with a corresponding braille character, and an output unit containing a speaker. The reader device is configured to receive an input via the first set of the braille keys, where at least a part of the input corresponds to an identifier of at least one of (i) a tactile drawing and (ii) one or more parts of said tactile drawing. The reader device is further configured to provide, via the speaker, an audio output based on the at least a part of the input.
[028] Several aspects of the present disclosure are described below with reference to examples for illustration. However, one skilled in the relevant art will recognize that the disclosure can be practiced without one or more of the specific details or with other methods, components, materials and so forth. In other instances, well-known structures, materials, or operations are not shown in detail to avoid obscuring the features of the disclosure. Furthermore, the features/aspects described can be practiced in various combinations, though only some of the combinations are described herein for conciseness.
[029] 2. Example Toolkit
[030] Figure 1 is a diagram illustrating an example tool kit 100, according to an aspect of the present disclosure. Tool kit 100 is shown containing tactile drawing sheet 110 and reader device 150. Merely for illustration, only representative number/type of elements/components is shown in Figure 1. Many environments often contain many more elements, both in number and type, depending on the purpose for which the environment is designed. Each element of Figure 1 is described below in further detail.
[031] Tactile drawing sheet 110 is a sheet on which tactile drawings can be created. Tactile drawing sheet 110 can be a paper, leather sheet, plastic sheet, resin sheet, metallic sheet etc., as well-known in the relevant arts. Tactile drawing sheet 110 is shown containing an example tactile drawing 120. Drawing identifier 125 (in braille characters) represents an identifier of tactile drawing 120, and part identifiers 131-135 (in braille characters) represent corresponding identifiers of the parts of tactile drawing 120.
[032] Though drawing identifier 125 and part identifiers 131-135 are being shown as numbers (in braille characters) in Figure 1, it is understood that drawing identifier 125 and part identifiers 131-135 can be any characters. Also, though Figure 1 shows only one tactile drawing sheet 110, it is understood that aspects of the present disclosure are equally applicable to multiple drawing sheets or a book containing multiple drawings. It may also be understood that additional identifiers may be used to represent a page number, book number, etc., as would be apparent to a skilled practitioner from the present disclosure.
[033] Reader device 150 represents a digital processing system capable of receiving inputs corresponding to braille characters and providing audio outputs based on the inputs. Reader device 150 is shown containing power button 155, speaker 160, remote diagnosis button 165, and braille keys 171-182. Merely for illustration, only representative number/type of elements/components is shown in reader device 150. Many environments often contain many more elements, both in number and type, depending on the purpose for which the environment is designed. Each element of reader device 150 is described below in further detail.
[034] Braille keys 171-182 form a part of the input unit of reader device 150, and are used to provide inputs to reader device 150. Braille keys 171-182 are marked/embossed with corresponding braille characters or functions. Braille keys 171-179 and 180 are character keys, assigned respectively with numbers 1-9 and 0 (in braille). Braille keys 181 and 182 are function keys, assigned with corresponding functions ‘Enter’ and ‘Clear’ (‘Backspace’) respectively. In an embodiment, reader device 150 is configured to voice out the input received (via speaker 160), so that a user can either confirm the input by pressing braille key 181 (‘Enter’ key) or clear some or all of the input by pressing braille key 182 (‘Clear’ key).
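In an example implementation, the input entry and confirmation behavior described above may be sketched as follows (illustrative only; the key labels, the `voice_out` callback, and the buffer handling are hypothetical and not part of the disclosure):

```python
# Minimal sketch of an input loop of an example reader device: digit keys
# are buffered and voiced back; 'ENTER' confirms the buffer and 'CLEAR'
# removes the last character. All names here are illustrative only.
def collect_identifier(key_events, voice_out):
    """key_events: iterable of key labels such as '0'..'9', 'ENTER', 'CLEAR'.
    voice_out: callback used to speak each keystroke back to the user.
    Returns the confirmed identifier string, or None if never confirmed."""
    buffer = []
    for key in key_events:
        if key == 'ENTER':
            confirmed = ''.join(buffer)
            voice_out(confirmed)        # voice out the full input on confirmation
            return confirmed
        if key == 'CLEAR':
            if buffer:
                buffer.pop()            # clear the last character of the input
            voice_out('cleared')
            continue
        buffer.append(key)
        voice_out(key)                  # voice out each keystroke as it is entered
    return None
```

For example, the key sequence ‘0’, ‘6’, ‘1’, ‘Enter’ yields the identifier string ‘061’, with each keystroke voiced back before confirmation.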
[035] Though Figure 1 is shown containing the character keys assigned with numbers alone, it is understood that reader device 150 may also contain keys assigned with letters, punctuations etc. Also, though Figure 1 is shown containing only ‘Enter’ and ‘Clear’ function keys, reader device 150 may also contain other function keys. The assignment of the characters, functions etc., to braille keys 171-182 can be made using techniques known in the relevant art. Also, reader device 150 is suitably configured to interpret inputs received via braille keys 171-182, using techniques known in the relevant arts.
[036] Speaker 160 forms a part of output unit of reader device 150. Speaker 160 is a device that converts electrical signals into audible sound waves (audio output). Speaker 160 can be implemented using techniques known in the relevant art. In an embodiment, the audio output may be configured to be in a language of user’s choice.
[037] Remote diagnosis button 165 is a physical button that facilitates enabling/disabling of remote diagnosis mode for reader device 150. In remote diagnosis mode, reader device 150 may be accessed by a support device (not shown in the Figures) for operations including (i) reading all folders/files in reader device 150, (ii) writing into folders/files of reader device 150, (iii) issuing debug commands, (iv) monitoring of debug messages for errors, failures, heap/stack errors etc., and (v) issuing commands to format the memory of reader device 150 and download system files from a server system (not shown in Figure 1). Remote diagnosis may be implemented using techniques known in the relevant art. In an example implementation, reader device 150 uses a ‘WebSocket’ mechanism well-known in the relevant arts to communicate with the support device, so as to enable the support device to perform the remote diagnosis.
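The remote diagnosis operations (i)-(v) above may be illustrated with a simple command dispatcher (a sketch with hypothetical command names and an in-memory stand-in for the device storage; an actual implementation would carry such commands over the ‘WebSocket’ connection):

```python
# Sketch of a command dispatcher for the remote diagnosis mode. The command
# names ('list', 'read', 'write', 'format') and the in-memory "file system"
# are hypothetical stand-ins for the operations described above.
def make_diagnosis_handler(files):
    """files: dict mapping file name -> contents (stand-in for device storage)."""
    def handle(command):
        op, _, arg = command.partition(' ')
        if op == 'list':                     # (i) read all folders/files
            return sorted(files)
        if op == 'read':                     # (i) read a file's contents
            return files.get(arg, '<missing>')
        if op == 'write':                    # (ii) write into a file
            name, _, data = arg.partition(' ')
            files[name] = data
            return 'ok'
        if op == 'format':                   # (v) format the device memory
            files.clear()
            return 'formatted'
        return '<unknown command>'           # debug commands (iii)-(iv) omitted here
    return handle
```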
[038] Though Figure 1 depicts a designated button (remote diagnosis button 165) for enabling/disabling the remote diagnosis mode, in alternative implementations, such a feature may be implemented using any other key/combination of keys as will be apparent to a skilled practitioner.
[039] Power button 155 is a physical button that facilitates reader device 150 to be switched ON or OFF. Power button 155 can be implemented using techniques known in the relevant art.
[040] Reader device 150 may also communicate with the server system (not shown in Figure 1) via a network (also not shown in Figure 1), and may receive files (such as configuration files, data files etc.) from the server system. In an embodiment, the configuration files (that facilitate changes in the device settings) are pushed to reader device 150. Such pushing of configuration files is helpful in that the persons with visual impairment do not have to go through the hassles of device settings at their end.
[041] In an example implementation, reader device 150 communicates with the server system and the support device via a hotspot of another device. However, in alternative implementations, reader device 150 may use any other communication techniques, as will be apparent to a skilled practitioner. In an example implementation, the device settings of reader device 150 are pre-configured with such hotspot (or wireless) settings.
[042] In an embodiment, reader device 150 communicates with the server system using a client-server protocol known in the relevant arts, such as API calls/WiFi protocols. However, in alternative embodiments, reader device 150 may communicate with the server system using any other communication technique/protocol known in the relevant arts.
[043] Also, in an embodiment, reader device 150 communicates with the server system using a uniform resource identifier (URI) of the server system. In an example implementation, the device settings of reader device 150 are pre-configured to contain the URI. However, in alternative embodiments, reader device 150 may communicate with the server system using any other techniques known in the relevant arts.
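The use of a pre-configured URI may be illustrated as follows (a sketch; the settings keys, the server URL, and the query parameters are hypothetical, and the request construction is illustrative only):

```python
# Sketch of forming a file request from pre-configured device settings.
# The settings keys, URL and query parameters below are hypothetical.
from urllib.parse import urlencode

SETTINGS = {
    'server_uri': 'https://server.example/files',  # pre-configured URI (illustrative)
    'serial_number': 'RD-0001',                    # unique serial number of the device
    'language': 'en',                              # language preference of the user
}

def build_file_request(drawing_id, part_id, settings=SETTINGS):
    query = urlencode({
        'drawing': drawing_id,
        'part': part_id,
        'serial': settings['serial_number'],  # lets the server index the device settings
        'lang': settings['language'],         # language preference sent with the request
    })
    return settings['server_uri'] + '?' + query
```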
[044] Though not shown in Figure 1, reader device 150 may also contain a volume controller, charging unit, USB port etc. In an example implementation, reader device 150 is a battery powered device, and can be charged from an external device using the USB port. However, in alternative implementations, reader device 150 may be charged using any other techniques known in the relevant arts. Also, in an embodiment, reader device 150 may indicate (with sounds, for example, a beep sound) when the battery charge is below or above certain thresholds. Such indications may be implemented using techniques known in the relevant art.
[045] In an example implementation, reader device 150 is a compact digital processing system with dimensions of 17 cm x 6 cm x 2.5 cm and weighing approximately 170 grams, thus being easily portable. However, in alternative implementations, the dimensions may be different.
[046] In an embodiment, reader device 150 is assigned with a unique serial number, and the server system stores the settings of reader device 150, indexed with the unique serial number.
[047] The description is continued below with respect to the manner in which reader device 150 provides an audio output in response to inputs received via braille keys 171-182.
[048] 3. Flowchart
[049] Figure 2 is a flowchart illustrating the manner in which reader device 150 provides an audio output in response to inputs received via braille keys, according to an aspect of the present disclosure. The flowchart is described with respect to reader device 150 of Figure 1. It is understood that the steps of the flowchart may be performed by a processing unit (such as CPU) of reader device 150. However, many of the features can be implemented in other elements/systems and/or other environments also without departing from the scope and spirit of several aspects of the present disclosure, as will be apparent to one skilled in the relevant arts by reading the disclosure provided herein.
[050] In addition, some of the steps may be performed in a different sequence than that depicted below, as suited to the specific environment, as will be apparent to one skilled in the relevant arts. Many of such implementations are contemplated to be covered by several aspects of the present disclosure.
[051] The flow chart begins in step 201, in which control immediately passes to step 220.
[052] In step 220, reader device 150 receives an input via braille keys. In an embodiment, at least a part of the input corresponds to an identifier, where the identifier can be drawing identifier 125, one of part identifiers 131-135, or a combination thereof. However, in alternative embodiments, the identifier can also include a page identifier, a book identifier etc., as would be apparent from the present disclosure. Control passes to step 240.
[053] In step 240, reader device 150 identifies a file based on at least the part of the input corresponding to the identifier. In an embodiment, the file is an audio file containing contents related to description of the drawing or the part of the drawing identified by the identifier. Techniques as will be apparent to a skilled practitioner may be used for identifying the file. For example, reader device 150 may use a data structure (such as a Look-up table (LUT)) in identifying the file. In an embodiment, the identification of the file also includes determining that the file is not present in a memory of reader device 150, and sending a request to the server system for the file. Control passes to step 260.
[054] In step 260, reader device 150 provides an audio output via speaker 160. The audio output is based on the contents of the identified file. In an embodiment, providing the audio output includes outputting the contents related to the description of the drawing or the part of the drawing identified by the identifier as the audio output. In an embodiment, reader device 150 outputs the audio in a language of the user’s choice. In an embodiment, providing the audio output also includes receiving the file from the server system, and outputting the audio output.
[055] The flowchart ends in step 299.
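The steps of the flowchart (steps 220, 240 and 260) may be sketched as follows (illustrative only; the look-up table and the fetch/play callbacks are hypothetical stand-ins):

```python
# Minimal sketch of steps 220-260: receive an identifier, look up the
# corresponding audio file, fetch it from the server if absent, and play
# it. The lookup table and the callbacks are illustrative stand-ins.
def respond_to_input(identifier, lut, fetch_from_server, play_audio):
    location = lut.get(identifier)            # step 240: identify the file
    if location is None:                      # file not present in memory
        location = fetch_from_server(identifier)
        lut[identifier] = location            # cache for subsequent requests
    play_audio(location)                      # step 260: provide audio output
    return location
```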
[056] Thus, the flowchart of Figure 2 operates to reduce fatigue (for visually impaired persons) associated with learning or understanding drawings by touch/perception. Specifically, some drawings may be complex in nature, and to learn or understand such drawings merely by touch may be time consuming and may cause fatigue in the visually impaired persons. By providing the contents related to description of the drawing or the part of the drawing as an audio output, the fatigue may be reduced.
[057] Also, the flowchart of Figure 2 operates to provide an interactive means, wherein the user has the option of entering the identifier of his choice, and listening to the audio corresponding to that identifier alone. Such a solution may save time for the user, as he need not listen to the portions that he does not wish to hear.
[058] Further, the flowchart of Figure 2 operates to provide economic significance, as the cost associated with printing of tactile sheets with description of the drawings/drawing parts may be avoided. Further, the claimed tool kit has economic significance in that the same reader device 150 can be used by users of multiple languages by changing the language settings.
[059] The description is continued below with respect to working of the example tool kit.
[060] 4. Working of the Example Tool Kit
[061] A user with visual impairment may perceive (by touch) tactile drawing 120 on tactile drawing sheet 110, and may provide the identifier corresponding to the drawing or a part of the drawing for which the user wishes to know details/description, as an input to reader device 150. Reader device 150 receives the input and identifies an audio file based on the input, by examining a data structure that maps identifiers to memory locations where corresponding audio files are saved. The data structure may be implemented using techniques known in the relevant arts.
[062] In an embodiment, the data structure is implemented as LUT 300, an example of which is illustrated in Figure 3. As illustrated in Figure 3, column 302 (“Drawing Identifier”) specifies identifier of a drawing, column 304 (“Part Identifier”) specifies identifier of part of a drawing, and column 306 (“Location of File”) specifies location of corresponding audio file in a memory of reader device 150.
[063] Thus, row 310 specifies “061” as drawing identifier (column 302), “1” as part identifier, and “Location 1” as the location where the corresponding audio file is saved. Similarly, the other rows of LUT 300 specify the locations where the corresponding audio files are saved.
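In an example implementation, LUT 300 may be represented as a mapping keyed by the (drawing identifier, part identifier) pair (a sketch; the entries beyond row 310 and the location strings are illustrative):

```python
# Illustrative in-memory form of LUT 300 (Figure 3): each (drawing
# identifier, part identifier) pair maps to the location of its audio file.
LUT_300 = {
    ('061', '1'): 'Location 1',   # row 310 of Figure 3
    ('061', '2'): 'Location 2',   # further rows; locations illustrative
    ('061', '3'): 'Location 3',
}

def locate_audio_file(drawing_id, part_id, lut=LUT_300):
    """Return the stored file location, or None when no entry exists
    (in which case the device may request the file from the server)."""
    return lut.get((drawing_id, part_id))
```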
[064] Though Figure 3 illustrates LUT 300 as having columns based on the drawing identifier and the part identifier, in alternative implementations, LUT 300 may have only one of the drawing identifier and the part identifier, or LUT 300 may have additional identifiers, etc. The aspects of the present disclosure are applicable in such alternative implementations as well.
[065] Reader device 150 identifies the location of the audio file based on the identifier in the input received, and provides the contents of the audio file as an audio output.
[066] If reader device 150 determines that LUT 300 does not have an entry corresponding to an identifier (which means the corresponding audio file is not present in reader device 150), reader device 150 sends a request to the server system for the audio file corresponding to the identifier. In an embodiment, reader device 150 may send such request only upon receiving a confirmation input from the user. In an example implementation, upon determining that LUT 300 does not have the entry corresponding to the identifier, reader device 150 plays a voice message informing the user that the corresponding file is not present in reader device 150, and prompts the user to give confirmation (by pressing ‘Enter’ key) to send the request to the server system for the file.
[067] The server system, upon receiving the request, examines its own data structure (similar to LUT 300), and provides the corresponding audio file to reader device 150. If an audio file requested by reader device 150 is not available at the server system, the server system causes the audio file to be created using techniques known in the relevant arts. In an example implementation, the server system causes a text file to be converted into an audio file using a Text-to-speech (TTS) software known in the relevant arts, and then provides the audio file to reader device 150. Also, upon receiving the request, the server system causes translation of an audio file, if the language of the audio file is different from the language preference of the user associated with reader device 150 from which the request is received. In an embodiment, the server system stores the language preferences of the users. Techniques known in the relevant arts may be used for storing such preferences. Also, reader device 150 may include the language preference in the request. Techniques known in the relevant arts may be used for including such details in the requests.
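The server-side handling described above may be sketched as follows (illustrative only; the audio/text stores and the text_to_speech callback are hypothetical stand-ins for a real storage layer and TTS component):

```python
# Sketch of the server-side handling: return the stored audio file if
# available, otherwise synthesize one from the corresponding text file.
# The stores and the text_to_speech callback are hypothetical stand-ins.
def serve_audio_request(key, audio_store, text_store, text_to_speech):
    audio = audio_store.get(key)
    if audio is None:                   # no audio file available at the server
        text = text_store[key]          # source text for the drawing/part
        audio = text_to_speech(text)    # cause the audio file to be created via TTS
        audio_store[key] = audio        # keep the created file for future requests
    return audio
```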
[068] Reader device 150 saves the received audio file in the memory (of reader device 150), and provides the contents of the received audio file as the output. Reader device 150 may also update LUT 300 accordingly.
[069] Figure 4 is a sequence diagram illustrating the interactions between reader device 150 and server system 490 of the present disclosure.
[070] A user with visual impairment perceives (by touch) tactile drawing 120 on tactile drawing sheet 110, and wishes to know details about the part of tactile drawing 120 identified by part identifier 131. The user provides drawing identifier 125 (corresponding to number ‘061’ in braille) by pressing braille keys 180, 176 and 171 in that sequence, followed by part identifier 131 (corresponding to number ‘1’ in braille) by pressing braille key 171, and then followed by braille key 181 (‘Enter’ key). Reader device 150 receives the input (corresponding to drawing identifier 125 and part identifier 131) in step 421. Reader device 150 examines LUT 300 and identifies “Location 1” as the location of the corresponding audio file. In step 422, reader device 150 provides the contents of the audio file at “Location 1” as the output.
[071] In step 423, the user provides drawing identifier 125 (corresponding to number ‘061’ in braille), followed by part identifier 134 (corresponding to number ‘4’ in braille) by pressing braille key 174, and then followed by braille key 181 (‘Enter’ key), and reader device 150 receives the input (corresponding to drawing identifier 125 and part identifier 134). Reader device 150 examines LUT 300 and identifies that LUT 300 does not have a corresponding entry. In step 424, reader device 150 sends a request to server system 490 for the audio file corresponding to drawing identifier 125 and part identifier 134. Server system 490 examines its own data structure based on drawing identifier 125 and part identifier 134, and sends the corresponding audio file to reader device 150 in step 425. In step 426, reader device 150 provides the audio output.
[072] Thus, the tool kit of the present disclosure operates to reduce fatigue associated with learning or understanding drawings by touch/perception, provides an interactive means wherein the user has the option of entering the identifier of his choice and listening to the audio corresponding to that identifier alone, and has economic significance.
[073] It should be further appreciated that the features described above can be implemented in various embodiments as a desired combination of one or more of hardware, software, and firmware. The description is continued with respect to an embodiment in which various features are operative when the software instructions described above are executed.
[074] 5. Digital Processing System
[075] Figure 5 is a block diagram illustrating the details of digital processing system 500 in which various aspects of the present disclosure are operative by execution of appropriate executable modules. Digital processing system 500 may correspond to reader device 150 and/or server system 490.
[076] Digital processing system 500 may contain one or more processors such as a central processing unit (CPU) 510, random access memory (RAM) 520, secondary memory 530, graphics controller 560, display unit 570, speaker 575, network interface 580, and input interface 590. All the components except display unit 570 and speaker 575 may communicate with each other over communication path 550, which may contain several buses as is well-known in the relevant arts. Merely for illustration, only representative number/type of elements/components is shown in Figure 5. Many environments often contain many more elements, both in number and type, depending on the purpose for which the environment is designed. It is understood that reader device 150 and/or server system 490 may not have all the components of Figure 5 in some embodiments. For example, reader device 150 may be implemented without graphics controller 560 and display unit 570. Also, in reader device 150 input interface 590 may correspond to a keypad/keyboard containing braille keys. The components of Figure 5 are described below in further detail.
[077] CPU 510 may execute instructions stored in RAM 520 to provide several features of the present disclosure. CPU 510 may contain multiple processing units, with each processing unit potentially being designed for a specific task. Alternatively, CPU 510 may contain only a single general-purpose processing unit. In addition, CPU 510 may be supported by CAM (content addressable memory) structures for examination of complex patterns.
[078] RAM 520 may receive instructions from secondary memory 530 using communication path 550. RAM 520 is shown currently containing software instructions constituting shared environment 525 and/or other user programs 526. In addition to shared environment 525, RAM 520 may contain other software programs such as device drivers, virtual machines, etc., which provide a (common) run time environment for execution of other/user programs.
[079] Graphics controller 560 generates display signals (e.g., in RGB format) to display unit 570 based on data/instructions received from CPU 510. Display unit 570 contains a display screen to display the images defined by the display signals. Speaker 575 converts electrical signals into audible sound waves based on data/instructions received from CPU 510. Input interface 590 may correspond to a keyboard and a pointing device (e.g., touch-pad, mouse) and may be used to provide inputs. Network interface 580 provides connectivity to a network (e.g., using Internet Protocol), and may be used to communicate with other systems (of Figure 4) connected to the networks.
[080] Secondary memory 530 may contain hard drive 535, flash memory 536, and removable storage drive 537. Secondary memory 530 may store the data and software instructions (for example, for implementing the various features of the present disclosure as shown in Figure 2 and 4 etc.), which enable digital processing system 500 to provide several features in accordance with the present disclosure. The code/instructions stored in secondary memory 530 may either be copied to RAM 520 prior to execution by CPU 510 for higher execution speeds, or may be directly executed by CPU 510.
[081] Some or all of the data and instructions may be provided on removable storage unit 540, and the data and instructions may be read and provided by removable storage drive 537 to CPU 510. Removable storage unit 540 may be implemented using medium and storage format compatible with removable storage drive 537 such that removable storage drive 537 can read the data and instructions. Thus, removable storage unit 540 includes a computer readable (storage) medium having stored therein computer software and/or data. However, the computer (or machine, in general) readable medium can be in other forms (e.g., non-removable, random access, etc.).
[082] In this document, the term “computer program product” is used to generally refer to removable storage unit 540 or hard disk installed in hard drive 535. These computer program products are means for providing software to digital processing system 500. CPU 510 may retrieve the software instructions, and execute the instructions to provide various features of the present disclosure described above.
[083] The term “storage media/medium” as used herein refers to any non-transitory media that store data and/or instructions that cause a machine to operate in a specific fashion. Such storage media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical disks, magnetic disks, or solid-state drives, such as secondary memory 530. Volatile media includes dynamic memory, such as RAM 520. Common forms of storage media include, for example, a floppy disk, a flexible disk, hard disk, solid-state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, or any other memory chip or cartridge.
[084] Storage media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between storage media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 550. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.
[085] Reference throughout this specification to “one embodiment”, “an embodiment”, or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases “in one embodiment”, “in an embodiment” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.
[086] Further, the described features, structures, or characteristics of the disclosure may be combined in any suitable manner in one or more embodiments. In the above description, numerous specific details are set forth to provide a thorough understanding of embodiments of the disclosure.
[087] It should be understood that the figures and/or screen shots illustrated in the attachments highlighting the functionality and advantages of the present disclosure are presented for example purposes only. The present disclosure is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown in the accompanying figures.
[088] 6. Conclusion
[089] While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
[090] It should be understood that the figures illustrated in the attachments highlighting the functionality and advantages of the present disclosure are presented for example purposes only. The present disclosure is sufficiently flexible and configurable, such that it may be utilized in ways other than that shown in the accompanying figures.
[091] Further, the purpose of the following Abstract is to enable the Patent Office and the public generally, and especially the scientists, engineers and practitioners in the art who are not familiar with patent or legal terms or phraseology, to determine quickly from a cursory inspection the nature and essence of the technical disclosure of the application. The Abstract is not intended to be limiting as to the scope of the present disclosure in any way.
CLAIMS:
1. A tool kit comprising:
a sheet containing a tactile drawing, wherein at least one of (i) said tactile drawing and (ii) one or more parts of said tactile drawing, is indicated with a corresponding identifier in braille characters;
a reader device comprising:
a processing unit;
a memory;
an input unit comprising a plurality of braille keys, wherein each of a first set of said plurality of braille keys is assigned with a corresponding braille character; and
an output unit comprising a speaker;
wherein said reader device is configured to:
receive an input via said first set of said plurality of braille keys, wherein at least a part of said input corresponds to an identifier; and
provide, via said speaker, an audio output based on said at least a part of said input.

2. The tool kit of claim 1, wherein said reader device is further configured to identify a file based on said at least a part of said input, and
wherein said providing comprises outputting contents of said file as said audio output.

3. The tool kit of claim 2, wherein said reader device is further configured to:
determine that said file is not present in said memory and send a request to a server system communicatively coupled with said reader device for said file; and
receive said file in response to said request.

4. The tool kit of claim 3, wherein said reader device is further configured to send said request using a uniform resource identifier (URI) of said server system,
wherein device settings of said reader device are pre-configured to comprise said URI.

5. The tool kit of claim 1, wherein said reader device supports a remote diagnosis mode, wherein said reader device is accessed from a remote device for diagnosis.

6. A method being performed at a reader device, the method comprising:
receiving an input, via a plurality of braille keys, wherein at least a part of said input corresponds to an identifier of at least one of (i) a tactile drawing and (ii) one or more parts of said tactile drawing; and
providing, via a speaker, an audio output based on said at least a part of said input.

7. The method of claim 6, further comprising identifying a file based on said at least a part of said input, and
wherein said providing comprises outputting contents of said file as said audio output.

8. The method of claim 7, further comprising:
determining that said file is not present in a memory of said reader device and sending a request to a server system communicatively coupled with said reader device for said file; and
receiving said file in response to said request.

9. The method of claim 8, further comprising:
sending said request using a uniform resource identifier (URI) of said server system,
wherein device settings of said reader device are pre-configured to comprise said URI.

10. A reader device comprising:
a processing unit;
a memory;
an input unit comprising a plurality of braille keys, wherein each of a first set of said plurality of braille keys is assigned with a corresponding braille character; and
an output unit comprising a speaker;
wherein said reader device is configured to:
receive an input via said first set of said plurality of braille keys, where at least a part of said input corresponds to an identifier of at least one of (i) a tactile drawing and (ii) one or more parts of said tactile drawing; and
provide, via said speaker, an audio output based on said at least a part of said input.
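The flow recited in claims 1 through 4 — decoding braille-key input into an identifier, identifying the corresponding audio file, fetching it from a pre-configured server when it is not in local memory, and playing it through the speaker — can be illustrated with a minimal sketch. All names here (`BRAILLE_CHORDS`, `ReaderDevice`, the `/files/` URL layout, the `.mp3` extension) are hypothetical assumptions for illustration, not part of the disclosed implementation.

```python
import os
import urllib.request

# Hypothetical mapping of braille dot chords to characters
# (dots 1, 1-2, and 1-4 are the standard cells for a, b, c).
BRAILLE_CHORDS = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
}

def decode_input(chords):
    """Decode a sequence of braille key chords into an identifier string."""
    return "".join(BRAILLE_CHORDS.get(frozenset(c), "?") for c in chords)

def play(path):
    """Placeholder for platform-specific audio output via the speaker."""
    print(f"playing {path}")

class ReaderDevice:
    def __init__(self, storage_dir, server_uri):
        self.storage_dir = storage_dir  # local memory holding audio files
        self.server_uri = server_uri    # pre-configured in device settings

    def audio_file_for(self, identifier):
        """Identify the file for an identifier; if it is not present
        locally, request it from the server (claims 2 and 3)."""
        path = os.path.join(self.storage_dir, identifier + ".mp3")
        if not os.path.exists(path):
            urllib.request.urlretrieve(
                f"{self.server_uri}/files/{identifier}.mp3", path)
        return path

    def handle_input(self, chords):
        """End-to-end flow of claim 1: decode input, resolve file, play."""
        identifier = decode_input(chords)
        play(self.audio_file_for(identifier))
```

The server round-trip is sketched as a simple HTTP fetch keyed on the identifier; the specification leaves the transport and file format open, so any protocol reachable via the pre-configured URI would fit the claimed behavior.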

Documents

Application Documents

# Name Date
1 202441042512-PROVISIONAL SPECIFICATION [31-05-2024(online)].pdf 2024-05-31
2 202441042512-FORM FOR STARTUP [31-05-2024(online)].pdf 2024-05-31
3 202441042512-FORM FOR SMALL ENTITY(FORM-28) [31-05-2024(online)].pdf 2024-05-31
4 202441042512-FORM 1 [31-05-2024(online)].pdf 2024-05-31
5 202441042512-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [31-05-2024(online)].pdf 2024-05-31
6 202441042512-Request Letter-Correspondence [01-10-2024(online)].pdf 2024-10-01
7 202441042512-Power of Attorney [01-10-2024(online)].pdf 2024-10-01
8 202441042512-FORM28 [01-10-2024(online)].pdf 2024-10-01
9 202441042512-FORM-26 [01-10-2024(online)].pdf 2024-10-01
10 202441042512-Form 1 (Submitted on date of filing) [01-10-2024(online)].pdf 2024-10-01
11 202441042512-Covering Letter [01-10-2024(online)].pdf 2024-10-01
12 202441042512-FORM 3 [25-11-2024(online)].pdf 2024-11-25
13 202441042512-Proof of Right [10-01-2025(online)].pdf 2025-01-10
14 202441042512-DRAWING [29-05-2025(online)].pdf 2025-05-29
15 202441042512-CORRESPONDENCE-OTHERS [29-05-2025(online)].pdf 2025-05-29
16 202441042512-COMPLETE SPECIFICATION [29-05-2025(online)].pdf 2025-05-29
17 202441042512-STARTUP [03-06-2025(online)].pdf 2025-06-03
18 202441042512-FORM28 [03-06-2025(online)].pdf 2025-06-03
19 202441042512-FORM-9 [03-06-2025(online)].pdf 2025-06-03
20 202441042512-FORM 18A [03-06-2025(online)].pdf 2025-06-03
21 202441042512-Proof of Right [12-08-2025(online)].pdf 2025-08-12