CLAIMS
WE CLAIM:
1. A machine-vision enabled tactical scope (100) comprising:
• an image capturing unit (110) configured to sequentially capture and generate video frames of the surrounding environment in real time;
• a processing unit (120) configured to cooperate with said image capturing unit (110) to receive said video frames therefrom, said processing unit (120) comprising:
o a segmentation module (122) configured to apply a vision-based segmentation technique to each of said received video frames to segment the scene in each of said received frames into various regions, and
o an identification module (124) configured to detect and mask at least one target object from said segmented frames, and further configured to generate output frames having said masked target object; and
• a display unit (130) configured to cooperate with said processing unit (120) to receive said output frames, and further configured to display said masked target object.
2. The scope (100) as claimed in claim 1, wherein said processing unit (120) employs deep-learning-based neural network techniques to perform scene segmentation as well as detection and masking of the target object.
3. The scope (100) as claimed in claim 1, wherein said identification module (124) comprises:
o a memory (126) configured to store a pre-trained neural network model, wherein said pre-trained model correlates a set of learned features with the targets to be identified;
o an extractor (128) configured to receive said segmented video frames, and further configured to extract dominant features from said received frames; and
o an identifier (129) configured to cooperate with said extractor to receive said extracted features, and further configured to cooperate with said memory to identify and mask the target objects in said segmented video frame by mapping said received features into said pre-trained neural network model, said identifier (129) being further configured to generate said output frame containing said masked target objects.
4. The scope (100) as claimed in claim 1, wherein said image capturing unit (110) includes an optical sighting device configured to magnify the range of vision of said image capturing unit (110).
5. The scope (100) as claimed in claim 1, which includes at least one IoT communication module to facilitate integration with at least one sensor for improving the accuracy of detection of the location of the target in said frame.
6. The scope (100) as claimed in claim 5, wherein said sensor is selected from the group consisting of a radio frequency sensor, an ultrasonic sensor, a sonic sensor, and a motion sensor.
7. A rifle (140) comprising said machine-vision enabled tactical scope (100) as claimed in claim 1, mounted thereon.
FIELD
The present disclosure relates to the field of tactical scopes.
BACKGROUND
The background information herein below relates to the present disclosure but is not necessarily prior art.
A telescopic sight, commonly called a scope, is an optical sighting device having a reticle mounted in a focally appropriate position to give an accurate point of aim. Scopes are used with all types of systems that require accurate aiming with more magnification than other types of sights, such as iron sights, reflector (reflex) sights, holographic sights, or laser sights, can provide, and are most commonly found on firearms, particularly rifles.
Conventional scopes rely largely on the ability of the operator, which may sometimes cause failure in identification of the target. More specifically, the operator has to focus on the target first using the scope, and then aim at it. However, at times, the operator may fail to identify the target at a large distance or in a crowd. Recent advancements in the field have led to the development of scopes having a camera unit. However, more often than not, the addition of the camera housing and camera unit adds to the weight of the rifle. As a result, the process of taking a shot at the target becomes quite cumbersome, and additionally requires the operator to ensure proper camera alignment with respect to the line of sight. Moreover, a scope having a camera unit, like a conventional scope, requires the operator to identify the target, thereby increasing the chances of failure in identification of the target and the consequences that follow.
There is therefore felt a need for a scope that alleviates the aforementioned drawbacks.
OBJECTS
Some of the objects of the present disclosure, which at least one embodiment herein satisfies, are as follows:
An object of the present disclosure is to provide a machine-vision enabled tactical scope.
Another object of the present disclosure is to provide a machine-vision enabled tactical scope which does not depend on the expertise of an operator to identify a target.
Yet another object of the present disclosure is to provide a machine-vision enabled tactical scope which ensures proper camera alignment with respect to the line of sight to facilitate accurate aiming.
Still another object of the present disclosure is to provide a machine-vision enabled tactical scope which is light in weight and can be easily mounted on equipment such as rifles.
Still another object of the present disclosure is to provide a machine-vision enabled tactical scope that can automatically identify and lock on a target.
Yet another object of the present disclosure is to provide a machine-vision enabled tactical scope that is modular and can act independently or be integrated with the traditional scopes.
Still another object is to provide a machine-vision enabled tactical scope that can be mounted on robots for autonomous locomotion and surveillance.
Other objects and advantages of the present disclosure will be more apparent from the following description, which is not intended to limit the scope of the present disclosure.
SUMMARY
The present disclosure envisages a machine-vision enabled tactical scope. The scope comprises an image capturing unit, a processing unit and a display unit. The image capturing unit is configured to sequentially capture and generate video frames of the surrounding environment in real time. The processing unit is configured to cooperate with the image capturing unit to receive the video frames therefrom. The processing unit comprises a segmentation module and an identification module. The segmentation module is configured to apply a vision-based segmentation technique to each of the received video frames to segment the scene in each of the received frames into various regions. The identification module is configured to detect and mask at least one target object from the segmented frames, and is further configured to generate output frames having the masked target object. The display unit is configured to cooperate with the processing unit to receive the output frames, and is further configured to display the masked target object.
In an embodiment, the processing unit employs deep-learning-based neural network techniques to perform scene segmentation as well as detection and masking of the target object.
In another embodiment, the identification module comprises a memory, an extractor and an identifier. The memory is configured to store a pre-trained neural network model, wherein the pre-trained model correlates a set of learned features with the targets to be identified. The extractor is configured to receive the segmented video frames, and is further configured to extract dominant features from the received frames. The identifier is configured to cooperate with the extractor to receive the extracted features, and is further configured to cooperate with the memory to identify and mask the target objects in the segmented video frame by mapping the received features into the pre-trained neural network model. The identifier is configured to generate the output frame containing the masked target objects.
In yet another embodiment, the image capturing unit includes an optical sighting device configured to magnify the range of vision of the image capturing unit.
In still another embodiment, the scope includes at least one IoT communication module to facilitate integration with at least one sensor for improving the accuracy of detection of the location of the target in the frame. The sensor is selected from the group consisting of a radio frequency sensor, an ultrasonic sensor, a sonic sensor, and a motion sensor.
In one embodiment, the present disclosure envisages a rifle comprising the machine-vision enabled tactical scope of the present disclosure mounted thereon.
BRIEF DESCRIPTION OF THE ACCOMPANYING DRAWINGS
A machine-vision enabled tactical scope of the present disclosure will now be described with the help of the accompanying drawings, in which:
Figure 1A illustrates an isometric view of the scope of the present disclosure;
Figure 1B illustrates an exploded view of the scope of Figure 1A;
Figure 2 illustrates a block diagram of the scope of Figure 1B; and
Figure 3 illustrates an isometric view of the scope of Figure 1A mounted on a rifle.
LIST OF REFERENCE NUMERALS
100 machine-vision enabled tactical scope
110 image capturing unit
120 processing unit
122 segmentation module
124 identification module
126 memory
128 extractor
129 identifier
130 display unit
140 rifle
DETAILED DESCRIPTION
Embodiments of the present disclosure will now be described with reference to the accompanying drawings.
Embodiments are provided so as to thoroughly and fully convey the scope of the present disclosure to the person skilled in the art. Numerous details are set forth, relating to specific components, and methods, to provide a complete understanding of embodiments of the present disclosure. It will be apparent to the person skilled in the art that the details provided in the embodiments should not be construed to limit the scope of the present disclosure. In some embodiments, well-known processes, well-known apparatus structures, and well-known techniques are not described in detail.
The terminology used in the present disclosure is only for the purpose of explaining a particular embodiment, and such terminology shall not be considered to limit the scope of the present disclosure. As used in the present disclosure, the forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly suggests otherwise. The terms "comprises," "comprising," "including," and "having" are open-ended transitional phrases and therefore specify the presence of stated features, elements, modules, units and/or components, but do not preclude the presence or addition of one or more other features, elements, modules, units, components, and/or groups thereof.
When an element is referred to as being "mounted on" another element, it may be directly on, or mounted directly on, the other element.
A machine-vision enabled tactical scope (100) of the present disclosure will now be described with reference to Figure 1A through Figure 3.
The machine-vision enabled tactical scope (100) (hereinafter referred to as ‘the scope (100)’) comprises an image capturing unit (110), a processing unit (120), and a display unit (130). The image capturing unit (110) is configured to sequentially capture and generate video frames of the surrounding environment in real time. The processing unit (120) is configured to cooperate with the image capturing unit (110) to receive the video frames therefrom. The processing unit (120) is further configured to generate output frames having the target object identified therein. More specifically, the processing unit (120) comprises a segmentation module (122) and an identification module (124). The segmentation module (122) is configured to apply a vision-based segmentation technique to each of the received video frames to segment the scene in each of the received frames into various regions. The identification module (124) is configured to detect and mask at least one target object from the segmented frames. The identification module (124) is further configured to generate output frames having the masked target object. The display unit (130) is configured to cooperate with the processing unit (120) to receive the output frames generated by the identification module (124), and is further configured to display the masked target object.
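By way of non-limiting illustration only, the following Python sketch shows one possible realization of the dataflow described above. All class names and the placeholder logic are assumptions of this illustration and do not limit the present disclosure; a practical embodiment would substitute the trained, deep-learning-based techniques described below.

```python
import numpy as np

class SegmentationModule:
    """Segmentation module (122): segments a frame into regions (placeholder)."""
    def segment(self, frame: np.ndarray) -> np.ndarray:
        # Placeholder logic: split the frame into two regions by mean intensity.
        gray = frame.mean(axis=2)
        return (gray > gray.mean()).astype(np.uint8)  # H x W region map

class IdentificationModule:
    """Identification module (124): detects and masks the target (placeholder)."""
    def identify_and_mask(self, frame: np.ndarray, regions: np.ndarray) -> np.ndarray:
        # Placeholder logic: treat the brighter region as the target and tint it.
        output = frame.copy()
        target = regions == 1
        output[target] = (0.5 * output[target] + np.array([127, 0, 0])).astype(np.uint8)
        return output  # output frame containing the masked target object

def process_frame(frame, segmentation, identification):
    """Processing unit (120): receives a frame, returns the masked output frame."""
    regions = segmentation.segment(frame)
    return identification.identify_and_mask(frame, regions)

# One synthetic 480x640 RGB frame standing in for the image capturing unit (110);
# the display unit (130) would render the returned output frame.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
output = process_frame(frame, SegmentationModule(), IdentificationModule())
```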
In an embodiment, the processing unit (120) employs deep-learning-based neural network techniques to perform scene segmentation as well as detection and masking of the target object. As a result, the scope (100) does not need the expertise of an operator for identification of the target. Further, proper camera alignment with respect to the line of sight is always ensured by the processing unit (120) to facilitate accurate aiming at the target.
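Purely as a non-limiting sketch of such a deep-learning-based segmentation step, the example below uses a publicly available semantic segmentation network; the choice of DeepLabV3 and of the torchvision API (version 0.13 or later assumed) are illustrative assumptions, not requirements of the disclosure.

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Pre-trained semantic segmentation network (illustrative choice only).
model = deeplabv3_resnet50(weights="DEFAULT").eval()

# ImageNet normalization expected by the pre-trained weights.
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def segment_scene(frame_rgb):
    """Segment an H x W x 3 uint8 frame into class-labelled regions."""
    x = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0
    x = (x - MEAN) / STD
    with torch.no_grad():
        logits = model(x.unsqueeze(0))["out"]  # 1 x C x H x W class scores
    return logits.argmax(dim=1).squeeze(0)     # H x W map of region labels
```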
In another embodiment, the identification module (124) comprises a memory (126), an extractor (128) and an identifier (129). The memory (126) is configured to store a pre-trained neural network model, wherein the pre-trained model correlates a set of learned features with the targets to be identified. The extractor (128) is configured to receive the segmented video frames, and is further configured to extract dominant features from the received frames. The identifier (129) is configured to cooperate with the extractor (128) to receive the extracted features, and is further configured to cooperate with the memory (126) to identify and mask the target objects in the segmented video frame by mapping the received features into the pre-trained neural network model. The identifier (129) is further configured to generate the output frame containing the masked target objects.
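Again by way of non-limiting illustration, an instance-segmentation network such as Mask R-CNN (taken here from torchvision, an assumption of this sketch) maps naturally onto this structure: its pre-trained weights play the role of the model held in the memory (126), its backbone acts as the extractor (128), and its detection heads act as the identifier (129).

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Memory (126): pre-trained model correlating learned features with targets.
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def identify_and_mask(frame_rgb, score_threshold=0.7):
    """Return binary masks and labels for targets in an H x W x 3 uint8 frame."""
    x = torch.from_numpy(frame_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        # Extractor (128): backbone features; identifier (129): detection heads.
        detections = model([x])[0]
    keep = detections["scores"] > score_threshold
    masks = detections["masks"][keep] > 0.5  # N x 1 x H x W boolean target masks
    return masks, detections["labels"][keep]
```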
In a preferred embodiment, the image capturing unit (110) includes an optical sighting device. The optical sighting device is configured to magnify the range of vision of the image capturing unit (110).
In one embodiment, the scope (100) includes at least one IoT communication module to facilitate integration with at least one sensor for improving the accuracy of detection of the location of the target in the frame. In another embodiment, the scope (100) further includes a sensor configured to sense the distance of the target from the operator. In one embodiment, the IoT communication module can be a Wi-Fi module, a ZigBee module, a BLE module, an NB-IoT module, a Z-Wave module, a radio frequency module, and the like. In another embodiment, the sensor is selected from the group consisting of a radio frequency sensor, an ultrasonic sensor, a sonic sensor, and a motion sensor.
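As a purely hypothetical sketch of such sensor-assisted refinement: the weighting scheme below, and the assumption that both estimates have already been calibrated into common frame coordinates, are illustrative only.

```python
def fuse_location(camera_xy, sensor_xy, sensor_weight=0.3):
    """Weighted fusion of camera- and sensor-derived target locations.

    Both estimates are assumed (by a prior, hypothetical calibration) to be
    expressed in the same frame coordinates; the weight reflects the relative
    confidence placed in the external sensor.
    """
    cx, cy = camera_xy
    sx, sy = sensor_xy
    return ((1 - sensor_weight) * cx + sensor_weight * sx,
            (1 - sensor_weight) * cy + sensor_weight * sy)

# e.g. a motion-sensor reading received over the IoT communication module:
refined = fuse_location(camera_xy=(320, 240), sensor_xy=(332, 244))
```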
Advantageously, the processing unit (120) can be trained to detect a variety of targets such as humans and animals. Additionally, the scope (100) may include an interface that allows a user to bias the trained neural network model towards detection of only one type of target, for example, animals only. The IoT communication module facilitates automatic identification and locking on the target.
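A non-limiting sketch of such biasing is a simple class filter over the detections of the identification module (124); the label IDs follow the COCO convention used by the torchvision models above and are an assumption of this illustration.

```python
import torch

# Hypothetical user-selectable bias (COCO label IDs: 1 = person; common
# animal classes such as cat, dog and horse occupy IDs 16-25).
TARGET_CLASS_IDS = {"human": (1,), "animal": tuple(range(16, 26))}

def bias_detections(labels, masks, target_type="animal"):
    """Keep only the masks whose labels match the user-selected target type."""
    wanted = torch.tensor(TARGET_CLASS_IDS[target_type])
    keep = torch.isin(labels, wanted)
    return masks[keep]
```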
The present disclosure also envisages a rifle (140) which comprises the machine-vision enabled tactical scope (100) of the present disclosure mounted thereon. The scope (100) enables a user to spot the target easily over a large range of distances, and further assists the user in spotting the target in a crowd. The scope (100) is light in weight, which enables the operator to wield the rifle (140) with ease. Further, the scope (100) can be mounted on any existing rifle (140); more specifically, it can be integrated with traditional scopes or act independently. In one embodiment, the scope (100) can be mounted on robots for autonomous locomotion and surveillance.
The foregoing description of the embodiments has been provided for purposes of illustration and is not intended to limit the scope of the present disclosure. Individual components of a particular embodiment are generally not limited to that particular embodiment, but are interchangeable. Such variations are not to be regarded as a departure from the present disclosure, and all such modifications are considered to be within the scope of the present disclosure.
TECHNICAL ADVANCEMENTS
The present disclosure described herein above has several technical advantages including, but not limited to, the realization of a machine-vision enabled tactical scope which:
• does not depend on the expertise of an operator to identify a target;
• ensures proper camera alignment with respect to the line of sight to facilitate accurate aiming;
• is light in weight and can be easily mounted on equipment such as rifles;
• can automatically identify and lock on a target;
• is modular and can act independently or be integrated with traditional scopes; and
• can be mounted on robots for autonomous locomotion and surveillance.
The embodiments herein and the various features and advantageous details thereof are explained with reference to the non-limiting embodiments in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The foregoing description of the specific embodiments so fully reveals the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt such specific embodiments for various applications without departing from the generic concept, and, therefore, such adaptations and modifications should be, and are intended to be, comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the embodiments as described herein.
The use of the expression “at least” or “at least one” suggests the use of one or more elements or ingredients or quantities, as the use may be in the embodiment of the disclosure to achieve one or more of the desired objects or results.
Any discussion of devices, articles or the like that has been included in this specification is solely for the purpose of providing a context for the disclosure. It is not to be taken as an admission that any or all of these matters form a part of the prior art base or were common general knowledge in the field relevant to the disclosure as it existed anywhere before the priority date of this application.
While considerable emphasis has been placed herein on the components and component parts of the preferred embodiments, it will be appreciated that many embodiments can be made and that many changes can be made in the preferred embodiments without departing from the principles of the disclosure. These and other changes in the preferred embodiments, as well as other embodiments of the disclosure, will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.