
System And Method Of Providing Augmented Reality Guided Real Time Photographic Assistance Using Deep Learning And Cognitive Model

Abstract: Disclosed is a system comprising an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene; a web-portal system configured to acquire an image capturing guideline; and a server arrangement communicatively coupled to the imaging unit of the mobile communication device and the web-portal system, the server arrangement configured to implement an artificial intelligence algorithm to analyse the captured information to: identify entities in the scene, a preliminary azimuthal value of the photographic subject with respect to the imaging unit, and a luminous value of the scene; acquire values of a plurality of photographic parameters of the imaging unit; analyse the preliminary azimuthal value, the values of each of the photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein; and generate an augmented reality graphical user interface of the scene in real-time using the image capturing guideline.


Patent Information

Filing Date: 24 December 2019
Publication Number: 03/2020
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Email: iprdocketing@sagaciousresearch.com

Applicants

SINANI SERVICES PRIVATE LIMITED
618 Bestech Business Tower, Sector 48, Sohna Road, Gurgaon 122018, India (IN)

Inventors

1. KIRTI, Abhishek
177, Pocket B&C, Sector-A, Vasant Kunj, New Delhi - 110070, India (IN)
2. BUDHIRAJA, Vicky
S-61, S.F., Uppal Southend, Sohna Road, sector 49, Gurgaon 122018, India (IN)
3. SINHA, Nishka
C1201 La Lagune Apartments, Golf Course Road, Sector 54, Gurgaon 122011, India (IN)
4. SINHA, Siddharth Prakash
C1201 La Lagune Apartments, Golf Course Road, Sector 54, Gurgaon 122011, India (IN)

Specification

Technical Field
The present disclosure relates generally to artificial intelligence aided photography, including photography using deep learning and a cognitive approach; and more specifically, to a system implementing an artificial intelligence algorithm for providing photographic assistance in real-time using an augmented reality graphical user interface. Furthermore, the present disclosure relates to a method, implementing an artificial intelligence algorithm, of providing photographic assistance in real-time using an augmented reality graphical user interface, for example using the aforementioned system.
Background
Digital photography is of great significance in ecommerce, such as online retailing. Vendors in online retailing upload digital photographs of products or services to their own online platform, or to a third-party online platform facilitating the online retailing. Further, consumers view the uploaded digital photographs to get a look and feel of the products or services.
Typically, vendors use mobile communication devices, such as smartphones or PDAs, to manage online retailing. Further, with the development of support for sophisticated graphical user interfaces in mobile communication devices, vendors have started using the mobile communication devices to capture images of the products and upload them to the online platform.
However, such use of mobile communication devices for conventional image capturing of products or services to be uploaded to online platforms entails various problems. Firstly, the use of conventional image capturing techniques on conventional mobile communication devices involves an inherent problem of capturing non-optimal images. Generally, the online platform recommends parameters that an image of a product should abide by. However, conventional mobile communication devices, and the vendors capturing the pictures, do not understand or recognise the recommended parameters. Thus, the inherent problem of capturing non-optimal images persists in the use of conventional mobile communication devices. Secondly, conventional mobile communication devices are not configured to understand a given scene so as to identify a photographic subject among other arbitrary entities in the scene as per the recommended parameters mentioned by the online selling platform. Moreover, conventional mobile communication devices are not configured to intelligently suggest rearrangement of the photographic subject and the other arbitrary entities in the scene as per the recommended parameters of the online platform.
Therefore, in light of the foregoing discussion, there exists a need to overcome the aforementioned drawbacks associated with the conventional image capturing of products.

SUMMARY
The present disclosure seeks to provide a system for providing photographic assistance in real-time using an augmented reality graphical user interface. The present disclosure also seeks to provide a method of providing photographic assistance in real-time using an augmented reality graphical user interface. The present disclosure seeks to provide a solution to the existing problems associated with the conventional image capturing of products. An aim of the present disclosure is to provide a solution that at least partially overcomes the problems encountered in the prior art, and provides an easy-to-implement and user-friendly system and method of providing photographic assistance in real-time using an augmented reality graphical user interface.
In a first aspect, an embodiment of the present disclosure provides a system comprising:
- an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene;
- a web-portal system configured to acquire an image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and
- a server arrangement communicatively coupled to the imaging unit of the mobile communication device and the web-portal system, wherein the server arrangement is configured to implement an artificial intelligence algorithm to analyse the captured information to:
identify entities in the scene, wherein the identified entities in the scene include at least one photographic subject determined based on the image capturing guideline,
identify a preliminary azimuthal value of the photographic subject with respect to the imaging unit, and a luminous value of the scene,
acquire values of a plurality of photographic parameters of the imaging unit,
analyse the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein, and
generate an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device, wherein the augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing at least one image of the at least one photographic subject as per the image capturing guideline.
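The analyse-and-normalize loop recited above can be sketched in code. The sketch below is purely illustrative and is not the claimed implementation; all names (`Guideline`, `compute_deviations`, `instructions`) and the choice of azimuth and luminance as the checked quantities are hypothetical assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Guideline:
    """Hypothetical image capturing guideline with preferred value ranges."""
    azimuth_range: tuple    # preferred (min, max) azimuthal value, degrees
    luminance_range: tuple  # preferred (min, max) scene luminance, 0-255

def compute_deviations(azimuth, luminance, guideline):
    """Return the deviation of each measured value from its preferred range
    (0.0 means the value already lies inside the range)."""
    def deviation(value, lo, hi):
        if value < lo:
            return value - lo   # negative: value is below the range
        if value > hi:
            return value - hi   # positive: value is above the range
        return 0.0
    return {
        "azimuth": deviation(azimuth, *guideline.azimuth_range),
        "luminance": deviation(luminance, *guideline.luminance_range),
    }

def instructions(deviations):
    """Map deviation values to human-readable AR overlay instructions."""
    hints = []
    if deviations["azimuth"] < 0:
        hints.append("pan right to centre the subject")
    elif deviations["azimuth"] > 0:
        hints.append("pan left to centre the subject")
    if deviations["luminance"] < 0:
        hints.append("increase scene lighting")
    elif deviations["luminance"] > 0:
        hints.append("reduce scene lighting")
    return hints
```

In this sketch, following the emitted hints drives the deviation values toward zero, which corresponds to the "normalizing" of deviations described in the aspect above.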
In a second aspect, an embodiment of the present disclosure provides a method implemented via a system comprising:
- an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene;
- a web-portal system configured to acquire an image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and
- a server arrangement communicatively coupled to the imaging unit of the mobile communication device and the web-portal system, wherein the server arrangement is configured to implement an artificial intelligence algorithm to analyse the captured information, the method comprising:
identifying entities in the scene, wherein the identified entities in the scene include at least one photographic subject determined based on the image capturing guideline,
identifying a preliminary azimuthal value of the photographic subject with respect to the imaging unit, and a luminous value of the scene,
acquiring values of a plurality of photographic parameters of the imaging unit,
analysing the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein, and
generating an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device, wherein the augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing at least one image of the at least one photographic subject as per the image capturing guideline.
In a third aspect, an embodiment of the present disclosure provides a computer readable medium containing program instructions for execution on a computer system, which, when executed by a computer, cause the computer to perform a method, the method being implemented via a system comprising:
- an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene;
- a web-portal system configured to acquire an image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and
- a server arrangement communicatively coupled to the imaging unit of the mobile communication device and the web-portal system, wherein the server arrangement is configured to implement an artificial intelligence algorithm to analyse the captured information, the method comprising:
identifying entities in the scene, wherein the identified entities in the scene include at least one photographic subject determined based on the image capturing guideline,
identifying a preliminary azimuthal value of the photographic subject with respect to the imaging unit, and a luminous value of the scene,
acquiring values of a plurality of photographic parameters of the imaging unit,
analysing the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein, and
generating an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device, wherein the augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing at least one image of the at least one photographic subject as per the image capturing guideline.
Embodiments of the present disclosure substantially eliminate, or at least partially address, the aforementioned problems in the prior art, and enable the provision of photographic assistance in real-time using an augmented reality graphical user interface.
Additional aspects, advantages, features and objects of the present disclosure would be made apparent from the drawings and the detailed description of the illustrative embodiments construed in conjunction with the appended claims that follow.
It will be appreciated that features of the present disclosure are susceptible to being combined in various combinations without departing from the scope of the present disclosure as defined by the appended claims.
BRIEF DESCRIPTION OF THE DRAWINGS
The summary above, as well as the following detailed description of illustrative embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the present disclosure, exemplary constructions of the disclosure are shown in the drawings. However, the present disclosure is not limited to specific methods and instrumentalities disclosed herein. Moreover, those skilled in the art will understand that the drawings are not to scale. Wherever possible, like elements have been indicated by identical numbers.
Embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
FIG. 1 is a block diagram of a system for providing photographic assistance in real-time using augmented reality graphical user interface, in accordance with an embodiment of the present disclosure;

FIG. 2 is a schematic illustration of an exemplary environment for implementing the system of FIG. 1, in accordance with an embodiment of the present disclosure;
FIG. 3 is an illustration of steps of a method of providing photographic assistance in real-time using augmented reality graphical user interface, in accordance with an embodiment of the present disclosure.
In the accompanying drawings, an underlined number is employed to represent an item over which the underlined number is positioned or an item to which the underlined number is adjacent. A non-underlined number relates to an item identified by a line linking the non-underlined number to the item. When a number is non-underlined and accompanied by an associated arrow, the non-underlined number is used to identify a general item at which the arrow is pointing.
DETAILED DESCRIPTION OF EMBODIMENTS
The following detailed description illustrates embodiments of the present disclosure and ways in which they can be implemented. Although some modes of carrying out the present disclosure have been disclosed, those skilled in the art would recognize that other embodiments for carrying out or practicing the present disclosure are also possible.
For purposes of promoting an understanding of the principles of the invention, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. The invention includes any alterations and further modifications of the illustrated devices and described methods, and further applications of the principles of the invention, that would normally occur to one skilled in the art to which the invention relates. Furthermore, in the following detailed description, numeric values and ranges are provided for various aspects of the implementations described. These values and ranges are to be treated as examples only, and are not intended to limit the scope of the claims. In addition, a number of materials are identified as suitable for various facets of the implementations. These materials are to be treated as exemplary, and are not intended to limit the scope of the invention.
In a first aspect, an embodiment of the present disclosure provides a system comprising:
- an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene;
- a web-portal system configured to acquire an image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and
- a server arrangement communicatively coupled to the imaging unit of the mobile communication device and the web-portal system, wherein the server arrangement is configured to implement an artificial intelligence algorithm to analyse the captured information to:
identify entities in the scene, wherein the identified entities in the scene include at least one photographic subject determined based on the image capturing guideline,
identify a preliminary azimuthal value of the photographic subject with respect to the imaging unit, and a luminous value of the scene,
acquire values of a plurality of photographic parameters of the imaging unit,
analyse the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein, and
generate an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device, wherein the augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing at least one image of the at least one photographic subject as per the image capturing guideline.
In another aspect, an embodiment of the present disclosure provides a method implemented via a system comprising:
- an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene;
- a web-portal system configured to acquire an image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and
- a server arrangement communicatively coupled to the imaging unit of the mobile communication device and the web-portal system, wherein the server arrangement is configured to implement an artificial intelligence algorithm to analyse the captured information, the method comprising:
identifying entities in the scene, wherein the identified entities in the scene include at least one photographic subject determined based on the image capturing guideline,
identifying a preliminary azimuthal value of the photographic subject with respect to the imaging unit, and a luminous value of the scene,
acquiring values of a plurality of photographic parameters of the imaging unit,
analysing the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein, and
generating an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device, wherein the augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing at least one image of the at least one photographic subject as per the image capturing guideline.
In yet another aspect, an embodiment of the present disclosure provides a computer readable medium containing program instructions for execution on a computer system, which, when executed by a computer, cause the computer to perform a method, the method being implemented via a system comprising:
- an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene;
- a web-portal system configured to acquire an image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and
- a server arrangement communicatively coupled to the imaging unit of the mobile communication device and the web-portal system, wherein the server arrangement is configured to implement an artificial intelligence algorithm to analyse the captured information, the method comprising:
identifying entities in the scene, wherein the identified entities in the scene include at least one photographic subject determined based on the image capturing guideline,
identifying a preliminary azimuthal value of the photographic subject with respect to the imaging unit, and a luminous value of the scene,
acquiring values of a plurality of photographic parameters of the imaging unit,
analysing the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein, and
generating an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device, wherein the augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing at least one image of the at least one photographic subject as per the image capturing guideline.
Referring to FIG. 1, there is shown a block diagram of a system 100 for providing photographic assistance in real-time using an augmented reality graphical user interface, in accordance with an embodiment of the present disclosure. The system 100 allows a user of the system 100, such as a novice photographer capturing an image to be uploaded to a web-based platform (for example, an ecommerce platform), to take images having a higher quality as compared to the quality of an image taken by another user of comparable skill who is not a user of the system 100. The system 100 enables this by providing assistance to the user, such as changes that can be made by the user in terms of: one or more factors associated with an imaging unit used by the user for capturing the image, changes to a scene associated with a product of which the user is capturing the image, and so forth (explained in detail herein below).
The system 100 comprises a mobile communication device 102. The mobile communication device 102 can be implemented as a mobile phone (such as a smartphone), a camera-enabled Personal Digital Assistant (or PDA), a tablet computer, a laptop computer, and so forth. The mobile communication device 102 comprises an imaging unit 104 associated therewith. The imaging unit 104 of the mobile communication device 102 can be a camera that is implemented as part of the mobile communication device 102, such that the camera allows the user of the mobile communication device 102 to capture one or more images or videos. It will be appreciated that such an imaging unit 104 can be implemented as a camera capable of capturing standard-definition (or SD) images having intermediate image resolution (such as a camera capable of capturing images having a resolution of 8 megapixels or lower) or high-definition (or HD) images having very high resolution (such as a camera capable of capturing images having a resolution of 48 megapixels or higher). Consequently, the imaging unit 104 can be implemented in a single camera configuration (such as a camera provided on a front or a rear of conventional smartphones), a triple camera configuration (such as a camera provided on a rear of most modern smartphones) and the like. In an embodiment, the imaging unit 104 of the mobile communication device 102 can be implemented as a camera that can be communicatively coupled to a mobile communication device (such as a smartphone) via wired or wireless means (such as via a Bluetooth communication network, a WiFi network and so forth). In an exemplary embodiment, the imaging unit can be implemented as a stereo camera.
The imaging unit 104 of the mobile communication device 102 is configured to capture information of a scene. The imaging unit 104 is configured to capture the information of the scene as an image, such that the information of the scene corresponds to the presence of one or more objects within the scene, the presence of one or more persons within the scene, lighting conditions within the scene, and so forth.
In an embodiment, the captured information of a scene includes a plurality of point cloud data for identifying entities in the scene. For example, the imaging unit 104 is configured to acquire point cloud data associated with one or more entities present within the scene. In such an example, the imaging unit 104 may be configured to capture depth perception of the scene and correspondingly form the point cloud data associated with the one or more entities in the scene.
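One way a depth capture could be turned into such point cloud data is via pinhole-camera back-projection. The sketch below is a minimal illustration under that assumption; the function name `depth_to_point_cloud` and the use of a field-of-view parameter instead of calibrated camera intrinsics are hypothetical choices, not the disclosure's method.

```python
import math

def depth_to_point_cloud(depth, fov_deg=60.0):
    """Back-project a 2-D depth map (rows of depths in metres) into a 3-D
    point cloud using a simple pinhole-camera model. Illustrative only;
    a real imaging unit would use its calibrated intrinsics."""
    rows, cols = len(depth), len(depth[0])
    # focal length in pixels, derived from the horizontal field of view
    f = (cols / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    cx, cy = cols / 2.0, rows / 2.0   # assume the principal point is centred
    points = []
    for v in range(rows):
        for u in range(cols):
            z = depth[v][u]
            if z <= 0:                 # skip invalid depth readings
                continue
            x = (u - cx) * z / f       # back-project pixel (u, v) at depth z
            y = (v - cy) * z / f
            points.append((x, y, z))
    return points
```

The resulting (x, y, z) tuples are the kind of per-entity point cloud data the embodiment above refers to.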
The system 100 comprises a web-portal system 106 configured to acquire an image capturing guideline. Throughout the present disclosure, the term "web-portal system" relates to a physical embodiment of a web portal, where the term "web portal" refers to a web site or service, e.g., as may be viewed in the form of a web page, that offers a broad array of resources and services to users via an electronic communication element, e.g., via the Internet. Moreover, the web-portal system 106 may include both hardware and software components, where the hardware components may take the form of one or more platforms, e.g., in the form of servers, such that the functional elements, i.e., those elements of the system that carry out specific tasks (such as managing input and output of information, processing information, etc.), may be carried out by the execution of software applications on and across the one or more computer platforms of the system. Furthermore, the one or more platforms present in the subject systems may be of any type of known computer platform, or of a type to be developed in the future, although they will typically be server-class computers.
Optionally, the web-portal system 106 may also be a mainframe computer, a workstation, or another computer type. The platforms may be connected via any known or future type of cabling or other communication system, including wireless systems, either networked or otherwise. They may be co-located or they may be physically separated. Various operating systems may be employed on any of the computer platforms, possibly depending on the type and/or make of computer platform chosen. Appropriate operating systems include Windows, macOS, iOS, Android, Linux, OS/400, Tru64 Unix, SGI IRIX, Siemens Reliant Unix, and the like.
The web-portal system 106 is used to acquire image capturing guidelines. The image capturing guideline refers to information that provides instruction as to how images are to be captured. Optionally, the image capturing guideline can be in the form of a message or indicating lines, so that the user/photographer can capture an image according to the image capturing guideline. It will be appreciated that the preferred image capturing guideline includes a plurality of settings and/or a shoot setup.
In an example, the image capturing guideline may correspond to a clear depiction of the products within the images, such as: the image being representative of only one product; the image having a clear background (such as a white or dark background) devoid of unwanted elements not associated with the product; the image clearly depicting a name or other information (such as a manufacturing date, an expiry date, ingredients used for manufacturing the product and so forth); and the like. Furthermore, the image capturing guideline may comprise requirements associated with the omission of any form of unpleasant imagery, such as a poster of a sinking ship in a room.
Furthermore, the image capturing guideline can include subscriber inputs. In such an instance, the subscriber is a service provider that enables commercialization of products. In an example, the third-party service may be Amazon®, Flipkart®, Airbnb® and the like. The subscriber inputs refer to information that includes instructions for image capturing. Optionally, the subscriber inputs define the scope of capturing an image. Optionally, the image capturing guideline consists entirely of subscriber inputs.
In an embodiment, the web-portal system 106 may be configured to provide a means for communication between the subscriber (such as Amazon®, Flipkart®, Airbnb® and the like) and a facilitator of the product of which an image is to be captured. In an example, the means may be a chat window, a telephonic connection, a VoIP connection and the like. In such an instance, the image capturing guideline may be the instructions that the subscriber and the facilitator of the product agree upon.
In one embodiment, the image capturing guideline comprises at least one of: information related to a subject of which an image is to be captured; a preferred value for each property of a plurality of properties of the image to be captured of the subject; a preferred value range for each parameter of a plurality of photographic parameters; and a preferred value range for the azimuthal value of the imaging unit, and the luminous orientation and value of the scene. The image capturing guideline provides the specification that the image of the product is required to possess. In an example, the image capturing guideline comprises information related to a subject of which an image is to be captured. In such an example, the image is required to clearly depict the subject associated with a specific category, such as a particular object (in case the product relates to an item), a room (in case the product relates to rental space) and so forth. In another example, the image capturing guideline may comprise the preferred value for each property of the plurality of properties of the image to be captured of the subject, such as white balance of the image, exposure in the image, brightness in the image, black point levels in the image, dark areas in the image, physical dimensions of the photographic subject, or a shape of the photographic subject, and so forth. In yet another example, the image capturing guideline may comprise the preferred value range for each parameter of the plurality of photographic parameters, such as a capability of the imaging unit 104 associated with a minimum threshold resolution of the images that can be captured by the imaging unit 104. In yet another example, the image capturing guideline comprises a preferred value range for the azimuthal value of the imaging unit, and the luminous orientation and value of the scene. In such an example, the image capturing guideline may require each image to comprise the photographic subject located at a centre of a field-of-view (or FoV) of the imaging unit 104 (such that the image capturing guideline specifies a preferred value range for the azimuthal value of the imaging unit), or the captured image to have a brightness and/or intensity between a lower threshold value (such as a lower threshold pixel value) and a higher threshold value (such as a higher threshold pixel value).
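Two of the guideline checks just described (brightness between lower and higher threshold pixel values, and the subject centred in the field of view) can be sketched concretely. This is a simplified illustration only; the function names, the bounding-box representation, and the specific threshold defaults are assumptions for the example, not values from the disclosure.

```python
def mean_intensity(pixels):
    """Mean grey-level of an image given as a list of pixel rows (0-255)."""
    flat = [p for row in pixels for p in row]
    return sum(flat) / len(flat)

def guideline_ok(pixels, subject_box, lo=80, hi=180, centre_tol=0.1):
    """Check two illustrative guideline requirements: scene brightness lies
    between the lower and higher threshold pixel values, and the subject's
    bounding box (x0, y0, x1, y1, as fractions of the frame) is centred
    within `centre_tol` of the frame centre."""
    brightness = mean_intensity(pixels)
    x0, y0, x1, y1 = subject_box
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    centred = abs(cx - 0.5) <= centre_tol and abs(cy - 0.5) <= centre_tol
    return (lo <= brightness <= hi) and centred
```

A production system would evaluate many more guideline properties (white balance, resolution, background uniformity, and so forth) in the same pattern: measure, compare against the preferred range, report compliance.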
In one embodiment, the web-portal system 106 is implemented as any physical arrangement of electric and electronic components (such as, hardware having software installed thereon) capable of performing operations, such as a dedicated processor, a virtual processor, or a virtual device. For example, the web-portal system 106 can be implemented as a physical processor or a virtual processor.
Furthermore, the system 100 comprises a server arrangement 108 communicatively coupled to the imaging unit 104 of the mobile communication device 102 and the web-portal system 106. The server arrangement 108 can be implemented as an arrangement of hardware and software that is configured to process data therein. The server arrangement 108 can be implemented in a centralized architecture, a decentralized architecture, a distributed architecture and the like, without departing from the scope of the present disclosure. In one example, the server arrangement 108 is implemented as a cloud-based server. Moreover, the server arrangement 108 is communicatively coupled to the imaging unit 104 of the mobile communication device 102; for example, the server arrangement 108 can be communicatively coupled to the mobile communication device 102 via a wired or wireless communication network (including, but not limited to, a WiFi network, a 3G communication network, a 4G communication network, the internet and so forth). Consequently, the server arrangement 108 is configured to be communicatively coupled to the imaging unit 104 via the mobile communication device 102, such that the server arrangement 108 can receive and/or transmit information from/to the imaging unit 104 via the mobile communication device 102. Similarly, when the web-portal system 106 is implemented as the physical arrangement of electric and electronic components (such as hardware having software installed thereon) capable of performing operations, the server arrangement 108 is communicatively coupled to the web-portal system 106 via a wired or wireless communication network (including, but not limited to, a WiFi network, a 3G communication network, a 4G communication network, the internet and so forth). The server arrangement 108 is configured to implement an artificial intelligence algorithm to analyse the captured information. The artificial intelligence algorithm is implemented as a Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), or Multilayer Perceptron (MLP), such that the artificial intelligence algorithm is capable of performing analysis of the information captured by the imaging unit 104 of the mobile communication device 102.
The server arrangement 108 is configured to implement the artificial intelligence algorithm to analyse the captured information to identify entities in the scene. The identified entities in the scene include at least one photographic subject determined based on the image capturing guideline. The at least one photographic subject can correspond to an object, a product, a room, a person, an animal, an environment and so forth that is a principal entity of interest within the scene. For example, the at least one photographic subject may be a view of a living room, a model holding a product, a product on a table, a view of the outdoors from a hotel room window, and so forth. The server arrangement 108 is configured to implement the artificial intelligence algorithm to analyse the captured information (such as, the image captured by the imaging unit 104) to identify such entities within the scene. It will be appreciated that the image capturing guideline provides the requirements associated with the entities within the image. Consequently, the artificial intelligence algorithm employs such requirements to identify the entities within the image. In an example, when images of products are captured by the imaging unit 104, the server arrangement 108 is configured to employ the image capturing guideline associated with products for identifying the entity (or the product) within the image. It will be appreciated that, in this example, the at least one photographic subject is the product.

In an embodiment, when the imaging unit 104 is configured to capture point cloud data associated with entities within a scene, the artificial intelligence algorithm is configured to process the point cloud data to identify the entities within the scene. Furthermore, the artificial intelligence algorithm is configured to employ any algorithm known in the art for recognition, segmentation and subsequent classification of the entities within the scene.
In one embodiment, the artificial intelligence algorithm is further configured to implement composite imaging to generate the at least one image of the at least one photographic subject. It will be appreciated that a field of view (FoV) of the imaging unit 104 used for capturing the image may be such that an entire photographic subject may not be captured within a single image. Consequently, the photographic subject may be captured within a plurality of images. The artificial intelligence algorithm is configured to implement composite imaging to "stitch" the plurality of images together to obtain the image of the object within the scene. Thus, the artificial intelligence algorithm is configured to automatically identify the presence of the photographic subject within the plurality of images, and implement composite imaging on the plurality of images to generate the image having the photographic subject therein.
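The "stitching" idea behind composite imaging can be illustrated with a deliberately simplified sketch. A real system would match 2-D visual features across overlapping photographs; the following assumes 1-D rows of pixel values and a hypothetical `stitch` helper (not part of the disclosure) that merges two rows at their largest exact overlap.

```python
def stitch(left, right, min_overlap=1):
    """Naively join two overlapping pixel rows at their seam.

    Finds the largest suffix of `left` that equals a prefix of `right`
    and merges the rows there. This is only a toy illustration of the
    overlap-and-merge idea behind composite imaging, not the patent's
    actual stitching method.
    """
    for size in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-size:] == right[:size]:
            return left + right[size:]
    return left + right  # no overlap found: plain concatenation

row_a = [10, 20, 30, 40, 50]
row_b = [40, 50, 60, 70]
print(stitch(row_a, row_b))  # [10, 20, 30, 40, 50, 60, 70]
```

In practice libraries such as OpenCV perform this with feature matching and homography estimation across full images; the principle of detecting shared content and merging at the seam is the same.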
In one embodiment, the artificial intelligence algorithm is trained to analyse images to identify entities in various scenes by employing training data. Such training data can comprise a plurality of sets of images corresponding to different image capturing guidelines. Correspondingly, the artificial intelligence algorithm is configured to

employ machine learning (such as deep learning) to learn to analyse the images to identify entities therein. For example, a plurality of images (such as a thousand images, a hundred-thousand images or a million images) comprising kitchenware products can be employed to train the artificial intelligence algorithm to identify the presence of kitchenware products within images. In such an instance, i.e. during a training mode of the artificial intelligence algorithm, a dataset (i.e. a plurality of images comprising kitchenware products, such as a fry pan) is passed into the neural network, and the neural network is trained to accurately identify kitchenware products amidst other items, such as toys, stationery, electronic gadgets and the like. In another instance, i.e. during an inference mode, an image (such as an image of a toy, a stationery item, or an electronic gadget) that is not part of the dataset (i.e. the plurality of images comprising kitchenware products such as the fry pan) is passed into the neural network. The neural network automatically identifies an object of interest (such as a fry pan) and automatically draws a box around the identified object of interest. The box drawn around the identified object of interest corresponds to the smallest possible bounding box around the object of interest. The neural network comprises a convolution-nonlinearity step and a recurrent step. In certain embodiments, the dataset may comprise a plurality of images with known identified objects of interest. According to various embodiments, parameters in the neural network may be updated using stochastic gradient descent during the training mode. In the inference mode of certain embodiments, the neural network will automatically output exactly the same

number of boxes as the number of identifiable objects of interest. It will be appreciated that the higher the number of quality images employed to train the artificial intelligence algorithm, the higher the accuracy associated with identification of entities in images. In an embodiment, the artificial intelligence algorithm is configured to be trained to identify entities in various scenes on an on-going basis (such as, during operation of the system 100). Consequently, the accuracy of the artificial intelligence algorithm in identifying entities improves correspondingly with the extent of operation of the system 100.
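The "smallest possible bounding box" output described above can be made concrete with a small sketch. The disclosure obtains the box from a trained detector; here, purely for illustration, a hypothetical `smallest_bounding_box` helper derives it from a binary segmentation mask instead.

```python
def smallest_bounding_box(mask):
    """Return the tightest (row_min, col_min, row_max, col_max) box
    enclosing all truthy cells of a 2-D binary mask.

    Illustrative only: in the described system a neural network
    predicts the box directly; this helper just shows what "smallest
    possible bounding box around the object of interest" means.
    """
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    if not rows:
        return None  # no object of interest present
    return (rows[0], cols[0], rows[-1], cols[-1])

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(smallest_bounding_box(mask))  # (1, 1, 2, 2)
```

Detection networks trained with stochastic gradient descent, as the text describes, learn to regress box coordinates like these directly from pixels rather than from an explicit mask.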
In one embodiment, the artificial intelligence algorithm further employs cognizance. The artificial intelligence algorithm may include a cognitive model that utilizes deep learning techniques to calculate predictions relative to critical incidents. Optionally, the cognitive model uses an approach that includes a neuromorphic algorithm that can be trained with supervised and unsupervised methods. In a preferred embodiment, the cognitive model utilizes associated independent stacked Restricted Boltzmann Machines (RBMs) with both supervised and unsupervised methods. This technique is advantageous over conventional techniques because the training is very fast, thereby saving time and computing resources. The technique is also advantageous because it requires fewer training exemplars. In addition, the technique is an effective method for correlating different types of features from data, and it enables fine tuning of unsupervised training with supervised training results. The cognitive model assesses a situation by considering different factors which

may be online or offline and available as structured or unstructured data, and applying relative weights. A subject matter expert feeds a training set of data to the cognitive model. In one embodiment, the data is labeled as "critical" or "not critical", enabling the cognitive model to learn what constitutes a critical incident or situation during offline training, prior to use. The training set of data includes examples of a plurality of incidents, likely outcomes of the incidents, and associated risks. The cognitive model is trained to recognize the difference between, for example, a weather incident and a traffic incident. In addition, the cognitive model is trained to recognize the severity of an incident, such as the difference between a thunderstorm and a tornado. The cognitive model learns from the training set of data to distinguish between likely and unlikely scenarios. Once trained, the cognitive model can assign a weight or probability to the occurrence of a critical incident based on the data aggregated by the incident predictor program.
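The weighting behaviour described above can be sketched as a toy linear scorer. This is emphatically not the stacked-RBM cognitive model the text prefers; the feature names, weights and threshold below are invented solely to show how relative weights over heterogeneous factors can yield a "critical" / "not critical" decision.

```python
def criticality(features, weights, threshold=0.5):
    """Weighted score for whether a situation is "critical".

    A hypothetical stand-in for the disclosure's cognitive model:
    each factor (names and weights are invented) contributes its
    value scaled by a relative weight, and the total is compared
    against a decision threshold.
    """
    score = sum(weights[name] * value for name, value in features.items())
    return score, score >= threshold

weights = {"wind_speed": 0.6, "rainfall": 0.3, "traffic_density": 0.1}
observed = {"wind_speed": 0.9, "rainfall": 0.5, "traffic_density": 0.2}
score, critical = criticality(observed, weights)
print(round(score, 2), critical)  # 0.71 True
```

In the actual model the weights would be learned from the expert-labeled training set rather than hand-assigned, and the decision surface would be non-linear.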
Furthermore, the server arrangement 108 is configured to implement the artificial intelligence algorithm to analyse the captured information to identify a preliminary azimuthal value of the photographic subject with respect to the imaging unit and a luminous value of the scene. The term "preliminary azimuthal value", as used throughout the present disclosure, relates to a physical location and/or an orientation of the photographic subject with respect to the imaging unit (or the imaging unit 104). Such a preliminary azimuthal value can correspond to the x-, y- and z- values of the photographic subject within the scene in accordance with a Cartesian coordinate system, when a position of the imaging unit

104 is considered as the origin (associated with a value of (0, 0, 0)). Furthermore, the preliminary azimuthal value corresponds to a rotation of the photographic subject with respect to a horizontal direction (and/or a vertical direction) within the scene. Moreover, the artificial intelligence algorithm is configured to identify the luminous value of the scene. The term "luminous value", as used throughout the present disclosure, relates to a cumulative value of parameters such as brightness, intensity and so forth associated with a plurality of pixels within the image. It will be appreciated that different pixels within an image may be associated with different values of brightness, intensity and so forth. Consequently, the artificial intelligence algorithm is configured to calculate the cumulative value of the brightness values of all pixels (or the intensity values of all pixels and, similarly, cumulative values of other parameters) within the image to identify the luminous value of the scene.
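The pixel aggregation just described is straightforward to sketch. The disclosure does not fix the aggregation function, so the mean intensity used below is one hypothetical choice; `luminous_value` is an illustrative helper, not a name from the patent.

```python
def luminous_value(image):
    """Scene-level luminous value from a greyscale image.

    The disclosure aggregates per-pixel brightness into one
    cumulative value for the scene; taking the mean of the pixel
    intensities (0-255) is one assumed way to do that aggregation.
    """
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

scene = [
    [100, 120],
    [140, 160],
]
print(luminous_value(scene))  # 130.0
```

A colour image would first be converted to a single luminance channel before the same aggregation is applied.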
The server arrangement 108 is configured to implement the artificial intelligence algorithm to analyse the captured information to acquire values of a plurality of photographic parameters of the imaging unit 104. The artificial intelligence algorithm is configured to analyse the image to acquire the values of the plurality of photographic parameters of the imaging unit 104, such that the plurality of photographic parameters relate to a capability of the imaging unit 104 for capturing images. As mentioned herein above, such a capability of the imaging unit 104 relates to capturing SD or HD images, and capturing low-resolution, intermediate-resolution, high-resolution or ultra-high-resolution images and so

forth. Such photographic parameters may be specified within the image capturing guideline as a minimum threshold specification that the imaging unit 104 used for capturing the image is required to possess. Alternatively, the server arrangement 108, being communicatively coupled with the imaging unit 104 via the mobile communication device 102, can be configured to acquire the photographic parameters of the imaging unit 104 from device information (such that the device information specifies a configuration of various components of the mobile communication device 102, including the imaging unit 104) associated with the mobile communication device 102.
The server arrangement 108 is configured to implement the artificial intelligence algorithm to analyse the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein. The artificial intelligence algorithm is configured to compare the identified preliminary azimuthal value of the photographic subject within the scene with the preliminary azimuthal value specified by the image capturing guideline. For example, the image capturing guideline specifies that the photographic subject is required to be present at the centre of the captured image and that the orientation of the photographic subject is required to be parallel (such as, 0°) with respect to a vertical direction within the scene. Correspondingly, the artificial intelligence algorithm is configured to determine the deviations in the azimuthal value in terms of a distance of the

photographic subject from the abscissa, ordinate and applicate (or in the x-, y- and z- directions), as well as a rotation of the photographic subject from the vertical direction (such as, with respect to a vertical axis within the scene). Furthermore, the artificial intelligence algorithm is configured to determine deviations in the luminous value of the scene as a difference in brightness, intensity and so forth of the captured image with respect to the image capturing guideline. Moreover, when the image capturing guideline specifies a photographic parameter of the imaging unit 104, the artificial intelligence algorithm is configured to compare the photographic parameter of the imaging unit 104 used for capturing the image with the minimum threshold specification within the image capturing guideline to determine deviations associated with the use of the imaging unit 104.
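The deviation computation above amounts to a per-parameter comparison of measured scene values against guideline targets. The sketch below assumes a guideline expressed as a dictionary of target values; the keys and numbers are illustrative, not taken from the disclosure.

```python
def deviations(measured, guideline):
    """Per-parameter deviation of measured scene values from the
    image capturing guideline.

    A minimal sketch: each deviation is the signed difference
    between the measured value and the guideline's target value.
    The parameter names below are hypothetical examples.
    """
    return {key: measured[key] - target for key, target in guideline.items()}

# Guideline: subject centred (x = y = 0), upright (0 degrees),
# with a target mean brightness of 128.
guideline = {"x": 0.0, "y": 0.0, "rotation_deg": 0.0, "brightness": 128.0}
measured = {"x": 0.4, "y": -0.1, "rotation_deg": 12.0, "brightness": 180.0}
print(deviations(measured, guideline))
```

Guidelines stated as preferred value ranges, rather than single targets, would instead report how far each measurement falls outside its range (zero when inside).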
The server arrangement 108 is configured to implement the artificial intelligence algorithm to analyse the captured information to generate an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device 102. The augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing the at least one image of the at least one photographic subject as per the image capturing guideline. The server arrangement 108 is configured to transmit information corresponding to the determined deviations of the captured image with respect to the image capturing guideline, to be presented on the display of the mobile communication device 102, such that

countermeasures can be taken to overcome the deviations. Such information is presented as the augmented reality graphical user interface that is overlaid on the scene in real-time, such that the augmented reality graphical user interface comprises actual directions to be provided to the user to overcome the deviations. In an embodiment, the plurality of instructions for capturing the at least one image includes at least one of: ordering each property of the plurality of properties of the image with respect to the corresponding image capturing guideline; ordering each parameter of the plurality of photographic parameters with respect to the corresponding image capturing guideline; ordering the azimuthal value of the imaging unit and the luminous orientation and value of the scene; and repositioning the at least one auxiliary entity within the scene. For example, when the artificial intelligence algorithm determines deviations in the preliminary azimuthal value of the photographic subject within the scene, the augmented reality graphical user interface can present a location at which the photographic subject is required to be placed within the scene, to overcome the deviations. In such an example, the augmented reality graphical user interface can present a chevron pointing to an ideal location for placing the photographic subject and an associated indicator text, such as "place object here". In another example, when the artificial intelligence algorithm determines deviations in the preliminary azimuthal value associated with the orientation of the photographic subject within the scene, the augmented reality graphical user interface can present a box associated with an ideal orientation of the photographic subject and an associated indicator text,

such as "fit object within box". In yet another example, when the luminous value associated with the intensity of the captured image is determined to be greater than that specified by the image capturing guideline, the artificial intelligence algorithm is configured to determine a light source causing high-intensity light to be emitted within the scene. In such an example, the augmented reality graphical user interface depicts a pointer to the light source causing the emission of the high-intensity light and an indicator text, such as "dim light source" or "close window" (such as, when the light is emitted into the scene from a window). In yet another example, when the artificial intelligence algorithm determines that a photographic parameter of the imaging unit 104 used for capturing the image is less than the minimum threshold specification, the augmented reality graphical user interface is configured to display an indicator text thereon, such as "use high resolution camera". In an embodiment, the entities in the scene further include at least one auxiliary entity. For example, the scene can include unwanted objects (such as one or more objects, persons, animals and so forth) apart from the photographic subject. In such an example, the artificial intelligence algorithm is configured to determine the presence of such at least one auxiliary entity within the scene and provide an instruction to reposition the at least one auxiliary entity within the scene. Consequently, the augmented reality graphical user interface is configured to display one or more pointers and associated indicator texts, such as "remove people/animals from scene" or "remove clutter from scene".
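The mapping from deviation values to overlay texts described in these examples can be sketched as a simple rule table. The thresholds, keys and the `instructions` helper below are hypothetical; only the indicator strings ("place object here", "fit object within box", "dim light source", "remove clutter from scene") come from the examples above.

```python
def instructions(dev, position_tol=0.05, rotation_tol=1.0, brightness_tol=10.0):
    """Translate deviation values into AR indicator texts.

    A minimal sketch of the described behaviour: each deviation that
    exceeds its (assumed) tolerance produces one of the overlay
    instructions given in the examples. Tolerances are invented.
    """
    out = []
    if abs(dev.get("x", 0)) > position_tol or abs(dev.get("y", 0)) > position_tol:
        out.append("place object here")
    if abs(dev.get("rotation_deg", 0)) > rotation_tol:
        out.append("fit object within box")
    if dev.get("brightness", 0) > brightness_tol:
        out.append("dim light source")
    if dev.get("auxiliary_entities", 0) > 0:
        out.append("remove clutter from scene")
    return out

print(instructions({"x": 0.4, "rotation_deg": 12.0, "brightness": 52.0}))
# ['place object here', 'fit object within box', 'dim light source']
```

In the described system each instruction would additionally carry overlay geometry (a chevron, a box, a pointer) anchored to the scene; this sketch covers only the text selection.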

Referring to FIG. 2, there is shown an exemplary environment 200 for implementing the system 100 of FIG. 1, in accordance with an embodiment of the present disclosure. The environment comprises the mobile communication device 202 having an imaging unit 204 associated therewith. The imaging unit 204 is used to capture an image of a scene. Furthermore, the mobile communication device 202 is communicatively coupled to a web-portal system 206 and a server arrangement 208 (via a cloud-based communication network 220). The captured image of the scene is transmitted by the imaging unit 204 via the mobile communication device 202 to the server arrangement 208. Furthermore, the web-portal system 206 is configured to acquire the image capturing guideline and the server arrangement 208 is configured to analyse the captured image with respect to the image capturing guideline, via an artificial intelligence algorithm implemented on the server arrangement 208. Subsequently, upon analysis of the image, the server arrangement 208 is configured to generate an augmented reality graphical user interface 224 of the scene in real-time on a display 226 of the mobile communication device 202, such that the augmented reality graphical user interface 224 provides a plurality of instructions to a user of the mobile communication device 202. Such instructions comprise text indicators 228-232 suggesting that "the door should be closed" 228, "avoid clutter in the room" 230, and "remove all the family pictures" 232. Furthermore, other instructions are provided with respect to light emitted by light source 218 within the scene (such as, for overcoming deviations with respect to the luminous value of the scene), light emitted from door 210 having

curtains 216 (such as, to close the door to limit the emission of natural light therefrom), to change a preliminary azimuthal value associated with vase 212 and/or table 214, and so forth. Optionally, the mobile communication device 202 can be mounted on a set of gimbals 222 that has at least three degrees of freedom and an associated controller. Consequently, the server arrangement 208 is configured to transmit control instructions to the controller to allow the controller to change a location and/or orientation of the mobile communication device 202 with respect to the scene, to overcome deviations between images captured by the imaging unit 204 of the mobile communication device 202 and the image capturing guideline. In an embodiment, the server arrangement 208 is communicatively coupled to one or more IoT devices (such as the light source 218).
Referring to FIG. 3, there is shown steps of a method 300 of providing photographic assistance in real-time using an augmented reality graphical user interface, in accordance with an embodiment of the present disclosure. The method 300 is implemented via a system (such as the system 100 of FIG. 1) comprising an imaging unit of a mobile communication device. The imaging unit is configured to capture information of a scene. Furthermore, the system comprises a web-portal system configured to acquire an image capturing guideline. The image capturing guideline includes subscriber inputs. Moreover, the system comprises a server arrangement communicatively coupled to the imaging unit of the mobile communication device, and the web-portal system. The server arrangement is configured to implement an artificial intelligence

algorithm to analyse the captured information. At a step 302, entities in the scene are identified. The identified entities in the scene include at least one photographic subject determined based on the image capturing guideline. At a step 304, a preliminary azimuthal value of the photographic subject with respect to the imaging unit and a luminous value of the scene are identified. At a step 306, values of a plurality of photographic parameters of the imaging unit are acquired. At a step 308, the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters and the luminous value of the scene are analysed with respect to the image capturing guideline to determine deviation values therein. At a step 310, an augmented reality graphical user interface of the scene is generated in real-time, to be displayed on a display of the mobile communication device. The augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing the at least one image of the at least one photographic subject as per the image capturing guideline.
It is contemplated that one or more steps can be performed separately or together, at the same time or at different times, at the same location or at different locations, and/or in the illustrated order or out of order. Furthermore, it is contemplated that one or more steps can be optionally omitted. For example, the image capturing guideline includes at least one of: information related to a subject for which an image is to be captured; a preferred value for each property of a plurality of properties of the image to be

captured of the subject; a preferred value range for each parameter of a plurality of photographic parameters; and a preferred value range for the azimuthal value of the imaging unit, and the luminous orientation and value of the scene. In another example, the plurality of instructions for capturing the at least one image includes at least one of: ordering each property of the plurality of properties of the image with respect to the corresponding image capturing guideline, ordering each parameter of the plurality of photographic parameters with respect to the corresponding image capturing guideline, ordering the azimuthal value of the imaging unit and the luminous orientation and value of the scene, and repositioning the at least one auxiliary entity within the scene.
Furthermore, disclosed is a computer readable medium containing program instructions for execution on a computer system which, when executed by a computer, cause the computer to perform a method. The method is implemented via a system comprising an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene; a web-portal system configured to acquire an image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and a server arrangement communicatively coupled to the imaging unit of the mobile communication device, and the web-portal system. The server arrangement is configured to implement an artificial intelligence algorithm to analyse the captured information to identify entities in the scene, wherein the identified entities in the scene include at least one photographic subject determined based on the image capturing guideline, identify a preliminary azimuthal

value of the photographic subject with respect to the imaging unit, and a luminous value of the scene, acquire values of a plurality of photographic parameters of the imaging unit, analyse the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein, and generate an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device, wherein the augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing the at least one image of the at least one photographic subject as per the image capturing guideline.

Claims:
1. A system comprising:
- an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene;
- a web-portal system configured to acquire image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and
- a server arrangement communicatively coupled to the
imaging unit of the mobile communication device, and the
web-portal system, wherein the server arrangement is
configured to implement an artificial intelligence algorithm to
analyse the captured information to,
identify entities in the scene, wherein the identified entities in the scene includes at least one photographic subject determined based on the image capturing guideline,
identify a preliminary azimuthal value of the photographic subject with respect to the imaging unit, and a luminous value of the scene,
acquire values of plurality of photographic parameters of the imaging unit,
analyse the preliminary azimuthal value, the values of each of the photographic parameter of the plurality of photographic parameters, and the luminous value of the

scene with respect to the image capturing guideline to determine deviation values therein, and generate an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device, wherein the graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing the at least one image of the at least one photographic subject as per the image capturing guideline.
2. The system as claimed in claim 1, wherein the image capturing
guideline comprises at least one of:
- information related to a subject for which an image is to be captured;
- a preferred value for each property of a plurality of properties of the image to be captured of the subject;
- a preferred value range for each parameter of plurality of photographic parameter; and
- a preferred value range for:
azimuthal value of the imaging unit, and luminous orientation and value of the scene.
3. The system as claimed in claim 1, wherein the entities in the scene further includes at least one auxiliary entity.
4. The system as claimed in any one of the preceding claims, wherein the plurality of instructions for capturing the at least one image includes at least one of:

- ordering each property of the plurality of properties of the
image with respect to corresponding image capturing
guideline,
- ordering each parameter of plurality of photographic
parameter with respect to corresponding image capturing
guideline,
- ordering the azimuthal value of the imaging unit and the luminous orientation and value of the scene, and
- repositioning the at least one auxiliary entity within the scene.
5. The system as claimed in claim 1, wherein the captured
information of a scene includes a plurality of point cloud data for
identifying entities in the scene.
6. The system as claimed in claim 1, wherein the artificial intelligent algorithm is further configured to implement composite imaging to generate the at least one image of the at least one photographic subject.
7. A method implemented via a system comprising:

- an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene;
- a web-portal system configured to acquire image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and
- a server arrangement communicatively coupled to the
imaging unit of the mobile communication device, and the
web-portal system, wherein the server arrangement is

configured to implement an artificial intelligence algorithm to analyse the captured information to,
identifying entities in the scene, wherein the identified entities in the scene includes at least one photographic subject determined based on the image capturing guideline,
identifying a preliminary azimuthal value of the photographic subject with respect to the imaging unit and luminous value of the scene,
acquiring values of plurality of photographic parameters of the imaging unit,
analysing the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein, and
generating an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of the mobile communication device, wherein the augmented reality graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing the at least one image of the at least one photographic subject as per the image capturing guideline.
8. The method as claimed in claim 7, wherein the image capturing guideline includes at least one of:

- information related to a subject for which an image is to be captured;
- a preferred value for each property of a plurality of properties of the image to be captured of the subject;
- a preferred value range for each parameter of plurality of photographic parameter; and
- a preferred value range for:
azimuthal value of the imaging unit, and luminous orientation and value of the scene.
9. The method as claimed in claims 7 and 8, wherein the
plurality of instructions for capturing the at least one image includes at
least one of:
- ordering each property of the plurality of properties of the
image with respect to corresponding image capturing
guideline,
- ordering each parameter of plurality of photographic
parameter with respect to corresponding image capturing
guideline,
- ordering the azimuthal value of the imaging unit and the luminous orientation and value of the scene, and
- repositioning the at least one auxiliary entity within the scene.
10. A computer readable medium containing program instructions
for execution on a computer system which, when executed by a
computer, cause the computer to perform a method, wherein the
method is implemented via a system comprising:

- an imaging unit of a mobile communication device, wherein the imaging unit is configured to capture information of a scene;
- a web-portal system configured to acquire image capturing guideline, wherein the image capturing guideline includes subscriber inputs; and
- a server arrangement communicatively coupled to the
imaging unit of the mobile communication device, and the
web-portal system, wherein the server arrangement is
configured to implement an artificial intelligence algorithm to
analyse the captured information to,
identifying entities in the scene, wherein the identified entities in the scene includes at least one photographic subject determined based on the image capturing guideline,
identifying a preliminary azimuthal value of the photographic subject with respect to the imaging unit and luminous value of the scene,
acquiring values of plurality of photographic parameters of the imaging unit,
analysing the preliminary azimuthal value, the values of each photographic parameter of the plurality of photographic parameters, and the luminous value of the scene with respect to the image capturing guideline to determine deviation values therein, and
generating an augmented reality graphical user interface of the scene in real-time, to be displayed on a display of

the mobile communication device, wherein the graphical user interface includes a plurality of instructions that, when implemented, normalize the deviation values to achieve optimal conditions for capturing the at least one image of the at least one photographic subject as per the image capturing guideline.

Documents

Application Documents

# Name Date
1 201911053723-STATEMENT OF UNDERTAKING (FORM 3) [24-12-2019(online)].pdf 2019-12-24
2 201911053723-FORM FOR STARTUP [24-12-2019(online)].pdf 2019-12-24
3 201911053723-FORM FOR SMALL ENTITY(FORM-28) [24-12-2019(online)].pdf 2019-12-24
4 201911053723-FORM 1 [24-12-2019(online)].pdf 2019-12-24
5 201911053723-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [24-12-2019(online)].pdf 2019-12-24
6 201911053723-EVIDENCE FOR REGISTRATION UNDER SSI [24-12-2019(online)].pdf 2019-12-24
7 201911053723-DRAWINGS [24-12-2019(online)].pdf 2019-12-24
8 201911053723-DECLARATION OF INVENTORSHIP (FORM 5) [24-12-2019(online)].pdf 2019-12-24
9 201911053723-COMPLETE SPECIFICATION [24-12-2019(online)].pdf 2019-12-24
10 201911053723-STARTUP [10-01-2020(online)].pdf 2020-01-10
11 201911053723-FORM28 [10-01-2020(online)].pdf 2020-01-10
12 201911053723-FORM-9 [10-01-2020(online)].pdf 2020-01-10
13 201911053723-FORM 18A [10-01-2020(online)].pdf 2020-01-10
14 201911053723-FER.pdf 2020-01-22
15 abstract.jpg 2020-01-27
16 201911053723-Proof of Right [16-03-2020(online)].pdf 2020-03-16
17 201911053723-FORM-26 [16-03-2020(online)].pdf 2020-03-16

Search Strategy

1 2020-01-2111-33-26_21-01-2020.pdf