Abstract: The present disclosure relates to a method and system for automatically predicting a holding capacity of a region within a product. The method comprises receiving a 3D image of the product, comprising a 3D image of the region, and a first set of 3D images of a set of items. Further, the method comprises analysing the 3D images to detect an actual volume of the region and of each item from the set of items. Further, the method comprises identifying a set of target items and a second set of 3D images comprising the 3D image(s) of the target item(s). Further, the method comprises receiving a first user selection comprising a 3D image from the second set of 3D images. Further, the method comprises automatically placing and automatically arranging the selected 3D image in the 3D image of the region, and automatically predicting the holding capacity of the region based on the automatic placement and the automatic arrangement. FIG. 1
METHOD AND SYSTEM FOR AUTOMATICALLY PREDICTING A HOLDING CAPACITY OF A
REGION WITHIN A PRODUCT
TECHNICAL FIELD
This disclosure generally relates to the field of 3D computer graphics and augmented reality
and more particularly to a system and method for automatically predicting a holding capacity
of a region within a product using 3D image(s) of the product and item(s) that may be placed
in the region within the product, to enhance user experience.
BACKGROUND
The following description of related art is intended to provide background information
pertaining to the field of the disclosure. This section may include certain aspects of the art
that may be related to various features of the present disclosure. However, it should be
appreciated that this section is to be used only to enhance the understanding of the reader with
respect to the present disclosure, and not as admissions of prior art.
The new age of digitalization has brought the marketplace into the homes of every user with a smartphone or an internet-connected computer. The boom of 4G and 5G has enabled users to order every product they need from a convenient place and get it delivered within a few days.
This new empowerment of the user with the facility of comparing products of various brands
and the discounts available therein on a screen has completely changed the market space as
we see it today. Therefore, e‐commerce platforms are making many efforts to make the
experience of online shopping realistic and user friendly to further make online shopping
lucrative. Some of the efforts have been made to make online platforms more user friendly
for less tech savvy users by providing realistic pictures and audio search commands. Further,
there are efforts to make user experience of viewing a product online more realistic and
fashioned in a way to feel like a showroom type experience.
A user shopping online may have apprehensions about the realistic feel of a product, even though it is standard practice to provide the dimensions of the product in the description of the product. This is because users are generally not aware of how dimensions given in centimetres or inches will feel in real life, outside of the screen. One way by
which the e‐commerce platforms have attempted to solve this problem is by providing a
reference next to a picture of the product. This reference is usually a human model of an
average built, either using that product or standing next to it to give a fair idea to the user
looking to buy that product. For instance, if a user is looking to buy a t‐shirt, the human model
can be represented wearing the t‐shirt. Another way of using human models is to make them
stand next to a product, such as a study table, or a cupboard, or sitting on a scooter to give a
more realistic feel to the user as to how this product may look like in reality. The biggest
disadvantage of using this method is that the appearance of humans generally varies and
therefore, an ideal human model may not be identified to represent the very person looking
to buy the product. Further, there are limitations to using this method, as there may be spaces
where it cannot be used, such as for predicting a holding capacity of boot space of a scooter
or hollow space of a cupboard and a refrigerator. It is not possible to use human models in
these spaces to give the user a realistic experience as to what is the holding capacity of these
boot spaces or hollow spaces and how well these can be utilized. Also, while comparing
different products on a digital screen in 2D, the user is unable to feel the differences between
hollow regions inside the product. In a very competitive market space, these small things can be a deciding factor as to which product meets the needs of the user and which product the user wants to purchase; in the absence of a realistic feel of the hollow regions inside products, the user may miss the showroom-type experience.
Further, to provide users with a realistic experience of the hollow region inside the product,
one existing method used is to put items inside the hollow regions of the product and provide
its images to the users in online space. But there are two major problems associated with this
approach. First, it is a manually intensive process, involving acquiring different items, placing them inside every type of product individually, photographing the result and uploading the pictures online. Second, the outcome of this approach is static: the user does not select the items to be kept inside the hollow region of the product according to his needs, and the user is unable to move these items inside the hollow region of the product. Therefore,
this approach is not very successful in solving the problem and there is a need in the art to
provide a solution for automatically predicting a holding capacity of a region within a product
based on enabling a user to place and/or arrange a 3D image of each of one or more items inside a 3D image of the region, wherein the one or more items may correspond to the user's daily needs, to give the user a realistic feel of the holding capacity of the region.
SUMMARY
This section is provided to introduce certain aspects and objects of the present disclosure in
a simplified form that are further described below in the detailed description. This summary
is not intended to identify the key features or the scope of the claimed subject matter.
Some of the objects of the present disclosure, which at least one embodiment disclosed
herein satisfies are listed herein. It is an object of the present disclosure to detect a holding
capacity of a region within a product. It is another object of the present disclosure to detect
a holding capacity of a region within a product virtually in an online setting. It is also an object
of the present disclosure to provide a 3D 360‐degree view of the product and the hollow
region within the product. It is another object of the present disclosure to provide 3D
projection of predetermined item(s) with a 360‐degree view. Further, an object of the present
disclosure is to provide automatic placement of 3D projection of the predetermined items
inside a virtual hollow region of the 3D model of the product, upon selection by the user.
Another object of the present disclosure is to provide users with the facility to move 3D
projection of the predetermined items inside the virtual hollow region in the 3D model of the
product. It is also an object of the present disclosure to provide users with the facility to
choose 3D images of items virtually from a predetermined list of 3D images of items to be
placed inside the virtual hollow region within the product. Yet another object of the present
disclosure is to disable selection of 3D projections of the predetermined items, once it is
detected that the virtual hollow region within the product is either full or an available volume
of the virtual hollow region within the product is less than a volume of one or more
predetermined items.
One aspect of the present disclosure relates to a method for automatically predicting a holding capacity of a region within a product, wherein initially the method comprises receiving, at a processing unit by an input unit, a 3-dimensional (3D) image of the product and a first set of 3D images, wherein the 3D image of the product comprises at least a 3D image of the region and the first set of 3D images comprises a 3D image of each item from a set of items. Further, the method comprises analyzing, by the processing unit, the 3D image of the region and the 3D image of said each item from the set of items, wherein the 3D image of the region is analyzed to detect an actual volume of the region, and the 3D image of said each item from the set of items is analyzed to detect an actual volume of said each item from the set of items. Further, the method comprises identifying, by an identification unit, a set of target items from the set of items based on the actual volume of the region and the actual volume of said each item from the set of items, wherein an actual volume of each target item from the set of target items is lower than the actual volume of the region. The method further comprises identifying, by the identification unit, a second set of 3D images from the first set of 3D images, wherein the second set of 3D images comprises a 3D image of said each target item from the set of target items. Further, the method comprises receiving, by the input unit via a user interface, a first user selection, wherein the first user selection comprises a first user selected 3D image from the second set of 3D images. The method further comprises analyzing, by the processing unit, the first user selected 3D image to detect an actual volume of a target item corresponding to the first user selected 3D image. The method further comprises automatically placing, by the processing unit, the first user selected 3D image in the 3D image of the region based on the actual volume of the target item corresponding to the first user selected 3D image and the actual volume of the region. Further, the method comprises automatically arranging, by the processing unit, the first user selected 3D image in the 3D image of the region based on a first user input. Further, the method comprises automatically predicting, by the processing unit, the holding capacity of the region within the product based on at least one of the automatic placement of the first user selected 3D image in the 3D image of the region and the automatic arrangement of the first user selected 3D image in the 3D image of the region.
Another aspect of the present disclosure relates to a system for automatically predicting a holding capacity of a region within a product, wherein the system comprises an input unit, connected to at least one of a processing unit and an identification unit, wherein the input unit is configured to receive a 3-dimensional (3D) image of the product and a first set of 3D images, wherein the 3D image of the product comprises at least a 3D image of the region and the first set of 3D images comprises a 3D image of each item from a set of items. Further, the system comprises the processing unit, connected to at least one of the input unit and the identification unit, wherein the processing unit is configured to analyze the 3D image of the
region and the 3D image of said each item from the set of items, wherein: the 3D image of
the region is analyzed to detect an actual volume of the region, and the 3D image of said each
item from the set of items is analyzed to detect an actual volume of said each item from the
set of items. Further, the system comprises the identification unit, connected to at least
one of the input unit and the processing unit, wherein the identification unit is configured to
identify a set of target items from the set of items based on the actual volume of the region
and the actual volume of said each item from the set of items, wherein an actual volume of
each target item from the set of target items is lower than the actual volume of the region.
Further the identification unit is configured to identify a second set of 3D images from the
first set of 3D images, wherein the second set of 3D images comprises a 3D image of said each
target item from the set of target items. The input unit is then configured to receive, via a user interface, a first user selection, wherein the first user selection comprises a first user selected 3D image from the second set of 3D images. The processing unit is further configured to analyse
the first user selected 3D image to detect an actual volume of a target item corresponding to
the first user selected 3D image. Further the processing unit is configured to automatically
place the first user selected 3D image in the 3D image of the region based on the actual
volume of the target item corresponding to the first user selected 3D image and the actual
volume of the region. The processing unit is further configured to automatically arrange the
first user selected 3D image in the 3D image of the region based on a first user input. Further
the processing unit is configured to automatically predict the holding capacity of the region
within the product based on at least one of the automatic placement of the first user selected
3D image in the 3D image of the region and the automatic arrangement of the first user
selected 3D image in the 3D image of the region.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated herein, and constitute a part of this
disclosure, illustrate exemplary embodiments of the disclosed methods and systems in which
like reference numerals refer to the same parts throughout the different drawings.
Components in the drawings are not necessarily to scale, emphasis instead being placed upon
clearly illustrating the principles of the present disclosure. Also, the embodiments shown in the figures are not to be construed as limiting the disclosure, but the possible variants of the
method and system according to the disclosure are illustrated herein to highlight the
advantages of the disclosure. It will be appreciated by those skilled in the art that disclosure
of such drawings includes disclosure of electrical components or circuitry commonly used to
implement such components.
Figure 1 illustrates an exemplary architecture of a system [100], for automatically predicting
a holding capacity of a region within a product in accordance with exemplary embodiments
of the present disclosure.
Figure 2 illustrates an exemplary method flow diagram [200] for automatically predicting a
holding capacity of a region within a product in accordance with exemplary embodiments of
the present disclosure.
Figure 3 illustrates an exemplary use case in accordance with exemplary embodiments of the
present disclosure.
The foregoing shall be more apparent from the following more detailed description of the
embodiments of the disclosure.
DETAILED DESCRIPTION OF THE DISCLOSURE
In the following description, for the purposes of explanation, various specific details are set
forth to provide a thorough understanding of embodiments of the present disclosure. It will
be apparent, however, that embodiments of the present disclosure may be practiced without
these specific details. Several features described hereafter can each be used independently
of one another or with any combination of other features. An individual feature may not
address any of the problems discussed above or might address only some of the problems
discussed above.
The ensuing description provides exemplary embodiments only, and is not intended to limit
the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of
the exemplary embodiments will provide those skilled in the art with an enabling description
for implementing an exemplary embodiment. It should be understood that various changes may be made in the function and arrangement of elements without departing from the spirit
and scope of the disclosure as set forth.
Specific details are given in the following description to provide a thorough understanding of
the embodiments. However, it will be understood by one of ordinary skill in the art that the
embodiments may be practiced without these specific details. For example, circuits, systems,
processes, and other components may be shown as components in block diagram form in
order not to obscure the embodiments in unnecessary detail.
Also, it is noted that individual embodiments may be described as a process which is depicted
as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram.
Although a flowchart may describe the operations as a sequential process, many of the
operations can be performed in parallel or concurrently. In addition, the order of the
operations may be re‐arranged. A process is terminated when its operations are completed
but could have additional steps not included in a figure.
The word “exemplary” and/or “demonstrative” is used herein to mean serving as an example,
instance, or illustration. For the avoidance of doubt, the subject matter disclosed herein is not
limited by such examples. In addition, any aspect or design described herein as “exemplary”
and/or “demonstrative” is not necessarily to be construed as preferred or advantageous over
other aspects or designs, nor is it meant to preclude equivalent exemplary structures and
techniques known to those of ordinary skill in the art. Furthermore, to the extent that the
terms “includes,” “has,” “contains,” and other similar words are used in either the detailed
description or the claims, such terms are intended to be inclusive—in a manner similar to the
term “comprising” as an open transition word—without precluding any additional or other
elements.
As disclosed in the background section, the advancement in the telecom sector has opened huge opportunities for e-commerce platforms. Online shopping has brought the market space to the homes of the users and empowered them to make informed decisions while investing their hard-earned money, by comparing different products from different brands with fluctuating prices. It has further empowered users to take advantage of different sales on a product while enjoying the comfort of their homes. Earlier, users had to visit each store of a brand, which might be geographically located in a different market, causing exhaustion, or had to miss out on discounts due to lack of information; all of this has been changed by online shopping. Even with all these advantages, online shopping does not come without limitations. One of the limitations of online shopping that the present disclosure addresses is providing users with a showroom-like experience, in which the user walks around the showroom and observes all the products in the real world.
The showroom-like experience is nothing but providing an experience to users shopping online, at the comfort of their home, that is like offline shopping in a market. For instance, if a user is looking to buy a scooter online and is comparing different features, discounts, and prices of scooters of different brands, one important factor that influences decision making is the boot space inside the scooter. Every user has different needs and may be looking for a scooter with a specific boot space: a delivery company might be looking for extra boot space, whereas someone buying a scooter to commute to the office might be looking for a boot space that holds a bag with a laptop. But the existing method is to provide boot space information in the description of the scooter by way of mathematical metrics such as litres. Such information is not very useful for the user, as users are not always able to visualize how many litres of boot space will practically serve their needs; the only inference users can draw is which scooter has a bigger boot space and which has a smaller one. Specifically in the EV scooter race, the batteries that run the vehicle usually take up most of the boot space and there is very little space left in the boot. Therefore, it becomes critical for the user to understand the boot space of the vehicle before making a purchase, and even for the brands it has become a selling point.
A similar situation arises when a user is trying to purchase a refrigerator: although the description describes all the features, including the volumes of the product, the only understanding the user will have is which refrigerator will hold more items, which is not very useful. There may be a possibility that a medium-size refrigerator will be good enough for a family of 3-4 members, but the user may not be confident enough to buy it. The same problem is faced by users while buying a washing machine, study table, cupboard or any other product containing a hollow region within the product that can store one or more items. Even if the measurement of such a hollow region is provided, it will not give the user the same experience as actually seeing it in a showroom.
There are various added advantages of providing such an experience. If the experience is the same as the offline experience, then the user will have more confidence in the product and the chances are very low that the user will opt for a return or replacement once the product is delivered. This not only reduces the hassle for the user and makes the experience enriching, but also saves shipment costs for the e-commerce platform and makes delivery more efficient, while also reducing the impact on the environment by reducing shipments. Therefore, it is pertinent to provide such an enriching experience to the users to instill confidence in the users with respect to the product and further increase the revenue of e-commerce platforms.
One of the existing solutions for enriching the online shopping experience of a user is to provide hotspot(s) on a 3D model/3D image of a product, which are just pointers on top of the 3D model; when the user taps such a point on the 3D model, certain information is presented to the user to show the functionality of that part of the product. Even this solution has the same limitation of being static and having predetermined information, where the user does not select anything according to his needs. Further, this solution does not solve the problem of giving the user a fair idea of what things can fit inside a region (a hollow space) of the product and how optimally the user can utilize that space.
Therefore, in view of these and other existing limitations, there is an imperative need to
provide a solution to overcome the limitations of prior existing solutions and to provide a
more efficient method and system for automatically predicting a holding capacity of a region
within a product using 3D images, to provide a better experience and showroom-like feel to the user so that the user can make a more informed comparison of different products. The user is therefore in a better position and more likely to purchase the product.
In order to overcome these and other limitations of the prior known solutions, the present
disclosure discloses a method and a system for automatically predicting a holding capacity of
a region within a product. The method as disclosed in the present disclosure comprises
receiving at a processing unit by an input unit, a 3‐dimensional (3D) image of the product and
a first set of 3D images, wherein the 3D image of the product comprises at least a 3D image
of the region and the first set of 3D images comprises a 3D image of each item from a set of
items. The method further comprises analyzing, by the processing unit, the 3D image of
the region and the 3D image of said each item from the set of items, wherein the 3D image of the region is analysed to detect an actual volume of the region, and the 3D image of said
each item from the set of items is analysed to detect an actual volume of said each item from
the set of items. Further, the method comprises identifying, by an identification unit, a set
of target items from the set of items based on the actual volume of the region and the actual
volume of said each item from the set of items, wherein an actual volume of each target item
from the set of target items is lower than the actual volume of the region and identifying, by
the identification unit, a second set of 3D images from the first set of 3D images, wherein the
second set of 3D images comprises a 3D image of said each target item from the set of target
items. Further, the method comprises receiving, by the input unit via a user interface, a first
user selection, wherein the first user selection comprises a first user selected 3D image from
the second set of 3D images. The method further comprises analysing, by the processing unit, the
first user selected 3D image to detect an actual volume of a target item corresponding to the
first user selected 3D image and automatically placing, by the processing unit, the first user
selected 3D image in the 3D image of the region based on the actual volume of the target
item corresponding to the first user selected 3D image and the actual volume of the region.
Further, the method comprises automatically arranging, by the processing unit, the first
user selected 3D image in the 3D image of the region based on a first user input and
automatically predicting, by the processing unit [104], the holding capacity of the region
within the product based on at least one of the automatic placement of the first user selected
3D image in the 3D image of the region and the automatic arrangement of the first user
selected 3D image in the 3D image of the region.
As used herein in this document a "processing unit”, "processor," or "operational processor"
includes one or more processors. A processor is any logic circuitry utilized to process
instructions. A processor could be an application‐specific integrated circuit (ASIC), a special‐
purpose integrated circuit (SPIC), a conventional processor, a digital signal processor, multiple
microprocessors, one or more microprocessors connected to a DSP core, a controller, a
microcontroller, any other kind of integrated circuit, etc. The processor may carry out
input/output processing, signal coding, data processing, and/or any other functionality necessary for the system to operate in accordance with the current disclosure. The processor or processing unit is, to be more precise, a hardware processor.
As used herein, any electrical, electronic, and/or computing device or equipment that can
implement the features of the present disclosure may be referred to herein as "a user
equipment," "a user device," "a smart‐user‐device," "a smart device," "an electronic device,"
"a mobile device," “a cellular device,” "a handheld device," “a device”, ”a phone”, “a
smartphone”, “a cellular phone” or "a wireless communication device." The user
equipment/device may be any computing device that is able to implement the features of the
current disclosure, such as but not limited to a mobile phone, smart phone, laptop, desktop,
personal digital assistant, tablet computer, wearable device, or general‐purpose computer.
As used herein, an “input unit” refers to a machine or device including any mechanism to
receive data as an input from one or more users of one or more user devices, wherein the
input may be received at the input unit in a form such as an audio, a video, a text, a gesture
and/or any other such form that may be obvious to a person skilled in the art. Keyboards,
mice, scanners, cameras, joysticks, and microphones are a few examples of input devices.
As used herein, a “display unit” refers to a machine or device including any mechanism to
display various information to users of a user device. Further, the display unit may refer to a
screen of the User device or User Equipment to show or present information to the user who
is associated with the User Equipment. Further, the display unit may include any other similar
unit obvious to a person skilled in the art, to implement the features of the present disclosure.
As used herein, a "user interface" is capable of displaying information to the user graphically, recording an input from the user via touch, keyboard, etc., and passing this information to the processing unit. The user interface may be that of a desktop computer, a laptop, a handheld device such as a smartphone, a TV monitor, a projector, etc.
As used herein, “storage unit,” “cloud storage unit,” or “memory unit” refers to a machine or
computer‐readable medium including any mechanism for storing information in a form
readable by a computer or similar machine. For example, a computer‐readable medium
includes read‐only memory (“ROM”), random access memory (“RAM”), magnetic disk storage
media, optical storage media, flash memory devices or other types of machine‐accessible
storage media. The storage unit stores at least the data that may be required by one or more
units of the system/user device to perform their respective functions. Further the cloud
storage unit refers to a mode of computer data storage in which digital data is stored on
servers in off‐site locations.
The present disclosure is further explained in detail below with reference now to the diagrams
so that those skilled in the art can easily carry out the present disclosure.
Referring to Figure 1, an exemplary architecture of a system [100], for automatically
predicting a holding capacity of a region within a product, is shown in accordance with
exemplary embodiments of the present disclosure.
The system [100] comprises, at least one input unit [102], at least one processing unit [104],
at least one identification unit [106] and at least one storage unit [108]. All of these
components and units are assumed to be connected to each other unless otherwise indicated
below. Also, while only a few units are shown in Fig. 1, the system [100] may comprise
multiple such units or any such number of units as is obvious to a person skilled in the art or
as is required to implement the features of the present disclosure. Further, in an implementation, the system [100] may be present, partially or wholly, in a server device connected to a user device to implement the features of the present disclosure. Also, in an implementation, the system may be partially present in the user device to implement the features of the present disclosure.
The system [100] is configured for automatically predicting a holding capacity of a region
within a product with the help of the interconnection between its components/ units. The
system [100] as disclosed in the present disclosure encompasses the input unit [102] that is
configured to receive at a processing unit [104], a 3‐dimensional (3D) image of the product
and a first set of 3D images, wherein the 3D image of the product comprises at least a 3D
image of the region and the first set of 3D images comprises a 3D image of each item from a
set of items. In an implementation, the input unit [102] is configured to receive two inputs in the form of 3D images: first, for the product, and second, for one or more items from the set of items that the user may wish to place inside the region within the product. The first input is a 3D image projection of the product on a 2D screen of the user device, whereas the second input is a first set of 3D image projections of the set of items. The set of items may be related to the product, in terms of completing the functionality of the region of the product. The region within the product comprises a hollow space inside the product which may have different functionality depending upon the product. For instance, a region may be a hollow
space of a product wherein the product comprises one of a vehicle such as a car or a scooter,
a refrigerator, a furniture such as a bed or a cupboard etc., a luggage such as suitcases, and a
washing machine etc. Further, in an instance the set of items may comprise at least one of
one or more grocery items such as a 5Kg rice bag, one or more utility items such as a helmet
or a phone charger, one or more sports items such as a basketball, and one or more clothing
items such as a gym kit, etc. The first set of 3D images of the set of items may be received based on the 3D image of the product and may be different for different products; the first set of 3D images of the set of items is therefore related to a particular type of product.
Further the processing unit [104] is configured to analyze the 3D image of the region and the
3D image of said each item from the set of items, wherein the 3D image of the region is
analyzed to detect an actual volume of the region and the 3D image of said each item from
the set of items is analyzed to detect an actual volume of said each item from the set of items.
Here the processing unit [104] is analyzing the 3D image of the region to detect the actual
volume of the region of the product and represent the same in a 3D model. The processing
unit [104] analyses the 3D image of said each item from the set of items, detects the actual
volume of said each item and represents the same in a 3D model. For example, consider a region that is a boot space of a scooter and a set of items comprising a bottle and a helmet. In this example
the processing unit [104] analyses a 3D image of the boot space to detect an actual volume
of the boot space, and the processing unit [104] analyses a 3D image of the helmet and 3D
image of the bottle to detect an actual volume of the helmet and the bottle.
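The disclosure does not prescribe how the actual volume is detected from a 3D image. Purely as a non-limiting illustration, assuming each 3D image is backed by a closed (watertight) triangle mesh scaled to real-world metres, the volume could be estimated by summing signed tetrahedron volumes; the Mesh structure and the litre conversion below are assumptions made only for this sketch.

```python
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Mesh:
    """A closed triangle mesh scaled to real-world metres (illustrative assumption)."""
    vertices: List[Vec3]
    faces: List[Tuple[int, int, int]]  # each face indexes three vertices

def mesh_volume_litres(mesh: Mesh) -> float:
    """Estimate the enclosed volume of a watertight mesh by summing the signed
    volumes of the tetrahedra formed by the origin and each triangular face."""
    total = 0.0
    for i, j, k in mesh.faces:
        (ax, ay, az) = mesh.vertices[i]
        (bx, by, bz) = mesh.vertices[j]
        (cx, cy, cz) = mesh.vertices[k]
        # Scalar triple product a . (b x c) gives six times the signed volume.
        total += (ax * (by * cz - bz * cy)
                  - ay * (bx * cz - bz * cx)
                  + az * (bx * cy - by * cx)) / 6.0
    return abs(total) * 1000.0  # cubic metres to litres
```

The same helper could be applied to the mesh of the boot space and to the meshes of the helmet and the bottle in the example above, yielding the actual volumes used in the subsequent comparison.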
Further the identification unit [106] is configured to identify a set of target items from the set
of items based on the actual volume of the region and the actual volume of said each item
from the set of items, wherein an actual volume of each target item from the set of target
items is lower than the actual volume of the region. To identify the set of target items the
identification unit [106] compares a volume of each item from the set of items indicated in
the first set of 3D images with a volume of the region and there are three possible outcomes.
First, the volume of one or more items from the set of items is larger than the volume of the region; in this case the identification unit [106] proceeds by discarding said one or more items from the consideration of the user. For instance, a 5-litre boot space of a scooter may be too small for a 10 kg flour packet, whereas a 1 kg flour packet can easily be placed in it; therefore, in this case the 10 kg flour packet is discarded and not identified as a target item, but the 1 kg flour packet is identified as a target item. Second, the volume of the one or more items is equal to the volume of the region; in this case the identification unit [106] proceeds by identifying said one or more items as one or more target items, such as a pack of ice trays that may be packed inside the freezing area of a refrigerator. Third, the volume of the one or more items is less than the volume of the region; in this case the identification unit [106] proceeds by identifying the one or more items as one or more target items, such as a blanket to be placed in a cupboard. The one or more target items can fit inside the region of the product either singly or in a combination; for example, a helmet and a phone charger may fit inside the boot of a scooter in a combination.
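The three-way comparison performed by the identification unit reduces to a simple volume filter: items larger than the region are discarded, while items of equal or smaller volume become target items whose 3D images form the second set. The sketch below is illustrative only; the Item record, its field names and the assumption that all volumes are already expressed in litres are not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Item:
    name: str
    volume_litres: float   # actual volume detected from the item's 3D image
    image_id: str          # handle to the item's 3D image (illustrative)

def identify_target_items(items: List[Item], region_volume_litres: float) -> List[Item]:
    """Discard items larger than the region; keep items of equal or smaller volume."""
    return [item for item in items if item.volume_litres <= region_volume_litres]

def second_set_of_images(target_items: List[Item]) -> List[str]:
    """The second set of 3D images is simply the 3D image of each target item."""
    return [item.image_id for item in target_items]
```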
The identification unit [106] is further configured to identify a second set of 3D images from
the first set of 3D images, wherein the second set of 3D images comprises a 3D image of said
each target item from the set of target items. The identification unit [106] creates the second
set of 3D images and moves all 3D images of target items identified from the first set of 3D
images to the second set of 3D images. All the 3D images of the target items from the second
set of 3D images are eligible to be placed inside the region as their volume is either less than
or equal to the volume of the region. All volumes indicated in the 3D models, whether of the product, the region within the product, or the set of items, correspond to their actual volumes in the real world.
The input unit [102] is further configured to receive via a user interface, a first user selection,
wherein the first user selection comprises a first user selected 3D image from the second set of 3D images. The user may select any 3D image of a target item from the second set of 3D images which the user wishes to see placed inside the region. This allows a personalized online shopping experience, as the user can select an item which he may use in his daily life and wants to see whether that particular item fits inside the region. For instance, if a person is buying a scooter for a daily commute to the office, the user may like to see whether the boot space is enough to keep a laptop bag or a helmet inside the region. Further, if the user wants to use it to deliver grocery items, the user may want to see how a 5 kg rice bag or 10 kg of sugar fits inside the region. This feature gives users an added advantage of online shopping as compared to visiting the store offline, as these items might not be readily available at the store, and therefore the user may not be able to place these items in real time in the boot space to see whether certain items fit inside the region.
The processing unit [104] is further configured to analyse the first user selected 3D image to
detect an actual volume of a target item corresponding to the first user selected 3D image.
The processing unit [104] analyses the first selection of the user and detects and monitors its volume before placing the same inside the region. This is done so that the processing unit [104] is at all times aware of the volume placed inside the region and the volume still available inside the region; based on this information, the processing unit [104] keeps updating the set of 3D images of items to inform the user what other 3D images of items may be chosen for placement inside the region.
The processing unit [104] is further configured to automatically place the first user selected
3D image in the 3D image of the region based on the actual volume of the target item
corresponding to the first user selected 3D image and the actual volume of the region. The
processing unit [104] is then configured to automatically arrange the first user selected 3D
image in the 3D image of the region based on a first user input. Therefore, in an
implementation once a 3D image of an item is placed inside a 3D image of a region of a
product based on a user selection of the item, the processing unit [104] allows the user to
freely move the 3D image of the item inside the 3D image of the region. This corresponds to a real-world activity of the user: while using the product in their day-to-day life, the user may
try to fit the item inside the product in different ways to achieve optimal use of a space inside
the product. This step takes the real‐world activity and applies it to a 3D model and gives the
users great freedom to move around the item inside the region in 3D space. Therefore, after
detecting the volume based on a 3D image of an item, the processing unit [104] automatically
places the 3D image of the item inside the region, and thus the user can see in a 3D model
how a chosen item may look if kept in the real world inside the region of the product. The
user may have a realistic look at the available space in the region once the item is kept inside
the region. Further, the user can compare and decide based on whether an item of the user's daily use is comfortably placed inside the region of the product or not. This reduces
the hassle of the user to go to an actual showroom to analyze the region of the product.
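The disclosure leaves the placement algorithm open. One simplified way the automatic placement could be realised, sketched below purely as an illustration, is to approximate the region and each item by axis-aligned bounding boxes and search for the first free position inside the region that does not overlap items already placed; the Box structure, the grid step and the first-fit strategy are assumptions for this sketch, not the claimed method.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Box:
    """Axis-aligned bounding box: minimum corner and size, in metres (illustrative)."""
    origin: Tuple[float, float, float]
    size: Tuple[float, float, float]

    def fits_inside(self, outer: "Box") -> bool:
        return all(self.origin[a] >= outer.origin[a]
                   and self.origin[a] + self.size[a] <= outer.origin[a] + outer.size[a]
                   for a in range(3))

    def overlaps(self, other: "Box") -> bool:
        return all(self.origin[a] < other.origin[a] + other.size[a]
                   and other.origin[a] < self.origin[a] + self.size[a]
                   for a in range(3))

def place_item(region: Box, placed: List[Box],
               item_size: Tuple[float, float, float], step: float = 0.05) -> Optional[Box]:
    """First-fit search over a coarse grid of candidate positions inside the region."""
    x = region.origin[0]
    while x + item_size[0] <= region.origin[0] + region.size[0]:
        y = region.origin[1]
        while y + item_size[1] <= region.origin[1] + region.size[1]:
            z = region.origin[2]
            while z + item_size[2] <= region.origin[2] + region.size[2]:
                candidate = Box((x, y, z), item_size)
                if candidate.fits_inside(region) and not any(candidate.overlaps(p) for p in placed):
                    return candidate  # first free, non-overlapping position
                z += step
            y += step
        x += step
    return None  # the item cannot be placed in the remaining space
```

A subsequent user input (drag, rotate) would then simply replace the candidate box with the user's chosen pose, subject to the same fit and overlap checks, which models the automatic arrangement described above.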
Further to automatically place the first user selected 3D image in the 3D image of the region,
the processing unit [104] is further configured to automatically update the second set of 3D images into a third set of 3D images, wherein a target item corresponding to each 3D image
in the third set of 3D images is associated with an actual volume that is lower than a target
volume. The target volume is a combination of the actual volume of the target item
corresponding to the first user selected 3D image and the actual volume of the region. For
instance, suppose a boot of a scooter has a volume of 12 litres and can fit a helmet (volume 3 litres), a phone charger (volume 0.1 litre), a laptop bag (volume 5 litres) and a sugar packet (volume 5 litres), and the user selects a 3D image of said helmet. The 3D image of said helmet gets placed inside a 3D image of said boot, and the 3D image of said helmet may also be arranged inside the 3D image of said boot based on a user input. Further, the processing unit [104] then compares the volume of each item again with an updated available volume of the boot. In case the volume of the sugar packet is greater than the available volume of the boot, the processing unit [104] creates a third set of 3D images containing the items which have a volume less than the available volume of the boot, such as the phone charger and the laptop bag.
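Reading the "target volume" above as the volume of the region that remains available once the first selected item is placed, the update from the second set to the third set could be sketched as follows. The tuple records, the helper name and the litre values echoing the boot-space example are illustrative assumptions only.

```python
from typing import List, Tuple

ItemRecord = Tuple[str, float]  # (item name, volume in litres) -- illustrative

def update_to_third_set(second_set: List[ItemRecord],
                        selected_volume_litres: float,
                        region_volume_litres: float) -> List[ItemRecord]:
    """Keep only the items whose volume still fits in the space left after the
    first selected item is placed (the 'target volume')."""
    remaining = region_volume_litres - selected_volume_litres
    return [(name, volume) for name, volume in second_set if volume <= remaining]

# Echoing the example above (volumes treated as litres): a 12-litre boot with a
# 3-litre helmet already placed leaves 9 litres for the remaining items.
third_set = update_to_third_set(
    [("phone charger", 0.1), ("laptop bag", 5.0), ("sugar packet", 5.0)],
    selected_volume_litres=3.0, region_volume_litres=12.0)
```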
Further, the processing unit [104] is configured to automatically predict the holding capacity of the region within the product based on at least one of the automatic placement of the first user selected 3D image in the 3D image of the region and the automatic arrangement of the first user selected 3D image in the 3D image of the region. For example, suppose that, in a 3D image of a boot space of a car, a 3D image of a bag is placed based on a user selection of that 3D image of the bag, and the 3D image of the bag is then moved to a new position in the 3D image of the boot space based on a user input. The processing unit [104] in such a case automatically predicts the holding capacity of the boot space based on the placement and/or movement of the 3D image of the bag inside the 3D image of the boot space.
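The disclosure does not fix how the predicted holding capacity is expressed. One plausible reading, sketched below under that assumption, is to summarise the volume already occupied by the placed and arranged items together with the volume still available; the dictionary keys and the utilisation ratio are illustrative, not terminology from the disclosure.

```python
from typing import Dict, List

def predict_holding_capacity(region_volume_litres: float,
                             placed_volumes_litres: List[float]) -> Dict[str, float]:
    """Summarise how much of the region the placed/arranged items occupy."""
    used = sum(placed_volumes_litres)
    return {
        "region_volume": region_volume_litres,
        "used_volume": used,
        "available_volume": max(region_volume_litres - used, 0.0),
        "utilisation": used / region_volume_litres if region_volume_litres else 0.0,
    }

# Example: a 3-litre helmet placed in a 12-litre boot leaves 9 litres available.
summary = predict_holding_capacity(12.0, [3.0])
```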
Further, the input unit [102] is further configured to receive via the user interface, a second
user selection, wherein the second user selection comprises a second user selected 3D image
from the third set of 3D images. The processing unit [104] then analyses the second user selected
3D image to detect an actual volume of a target item corresponding to the second user
selected 3D image. The processing unit [104] is then configured to automatically place the
second user selected 3D image in the 3D image of the region based on the actual volume of
the target item corresponding to the second user selected 3D image and the target volume.
The target volume is the combination of the actual volume of the target item corresponding
to the first user selected 3D image and the actual volume of the region. The second user
selected 3D image is automatically placed in the 3D image of the region along with the first
user selected 3D image. Further the processing unit [104] is configured to automatically
arrange the second user selected 3D image in the 3D image of the region based on a second
user input. Thereafter the processing unit [104] is configured to automatically predict the
holding capacity of the region within the product based on at least one of the automatic
placement of the second user selected 3D image in the 3D image of the region, the automatic
arrangement of the second user selected 3D image in the 3D image of the region, the
automatic placement of the first user selected 3D image in the 3D image of the region, and
the automatic arrangement of the first user selected 3D image in the 3D image of the region.
The user may remove certain 3D images of items from the region or replace them with 3D images of other items.
Hence, the system gives the user a realistic feel of the region within the product and the user
can be confident of how the space of the region can be utilized. The user may compare regions
of different products and analyse what type of items may fit in these regions. Therefore, the
user is more empowered to take informed decisions. Also, in an implementation the system
may recommend relevant objects to place within the region based on additional attributes
that can be taken from the user, for example, obtaining a family size in order to recommend a number of products for a family of 2, 3 or 4, etc.
Referring to Figure 2, an exemplary method flow diagram [200], for automatically predicting
a holding capacity of a region within a product, is shown in accordance with exemplary
embodiments of the present disclosure. In an implementation, the method [200] is performed
by the system [100]. Also in Figure 2, the method starts at step [202].
At step [204], the method comprises receiving, at a processing unit [104] by an input unit
[102], a 3‐dimensional (3D) image of the product and a first set of 3D images, wherein the 3D
image of the product comprises at least a 3D image of the region and the first set of 3D images
comprises a 3D image of each item from a set of items. In an implementation, the input unit [102] receives two inputs in the form of 3D images: first, for the product, and second, for one or more items from the set of items that the user may wish to place inside the region within the product. The first input is a 3D image projection of the product on a 2D screen of the user device, whereas the second input is a first set of 3D image projections of the set of items. The set of items may be related to the product, in terms of completing the functionality of the region of the product. The region within the product comprises a hollow space inside the product which may have different functionality depending upon the product. For instance, a region may be a hollow space of a product wherein the product comprises one of a vehicle
such as a car or a scooter, a refrigerator, a furniture such as a bed or a cupboard etc., a luggage
such as suitcases and a washing machine etc. Further, in an instance the set of items may
comprise at least one of one or more grocery items such as a 5Kg rice bag, one or more utility
items such as a helmet or a phone charger, one or more sports items such as a basketball,
and one or more clothing items such as a gym kit, etc. The first set of 3D images of the set of items may be received based on the 3D image of the product and may be different for different products; the first set of 3D images of the set of items is therefore related to a particular type of product.
At step [206], the method comprises analysing, by the processing unit [104], the 3D image
of the region and the 3D image of said each item from the set of items, wherein the 3D image
of the region is analysed to detect an actual volume of the region and the 3D image of said
each item from the set of items is analysed to detect an actual volume of said each item from
the set of items. Here at this step, the processing unit [104] is analysing the 3D image of the
region to detect the actual volume of the region of the product and represent the same in a
3D model. The processing unit [104] analyses the 3D image of said each item from the set of
items, detects the actual volume of said each item and represents the same in a 3D model. For example, consider a region that is a boot space of a scooter and a set of items comprising a bottle and a helmet. In this example the processing unit [104] analyses a 3D image of the boot space to
detect an actual volume of the boot space, and the processing unit [104] analyses a 3D image
of the helmet and 3D image of the bottle to detect an actual volume of the helmet and the
bottle.
At step [208], the method comprises identifying, by an identification unit [106], a set of
target items from the set of items based on the actual volume of the region and the actual
volume of said each item from the set of items, wherein an actual volume of each target item
from the set of target items is lower than the actual volume of the region.
To identify the set of target items the identification unit [106] at step [208] compares a
volume of each item from the set of items indicated in the first set of 3D images with a volume
of the region, and there are three possible outcomes. First, the volume of one or more items from the set of items is larger than the volume of the region; in this case the identification unit [106] proceeds by discarding said one or more items from the consideration of the user. For instance, a 5-litre boot space of a scooter may be too small for a 10 kg flour packet, whereas a 1 kg flour packet can easily be placed in it; therefore, in this case the 10 kg flour packet is discarded and not identified as a target item, but the 1 kg flour packet is identified as a target item. Second, the volume of the one or more items is equal to the volume of the region; in this case the identification unit [106] proceeds by identifying said one or more items as one or more target items, such as a pack of ice trays that may be packed inside the freezing area of a refrigerator. Third, the volume of the one or more items is less than the volume of the region; in this case the identification unit [106] proceeds by identifying the one or more items as one or more target items, such as a blanket to be placed in a cupboard. The one or more target items can fit inside the region of the product either singly or in a combination; for example, a helmet and a phone charger may fit inside the boot of a scooter in a combination.
At step [210], the method comprises identifying, by the identification unit [106], a second
set of 3D images from the first set of 3D images, wherein the second set of 3D images
comprises a 3D image of said each target item from the set of target items. The identification
unit [106] at step [210] creates the second set of 3D images and moves all 3D images of target
items identified from the first set of 3D images to the second set of 3D images. All the 3D
images of target items from the second set of 3D images are eligible to be placed inside the
region, as their volume is either less than or equal to the volume of the region. All volumes indicated in the 3D models, whether of the product, the region within the product, or the set of items, correspond to their actual volumes in the real world.
At step [212], the method comprises receiving, by the input unit [102] via a user interface,
a first user selection, wherein the first user selection comprises a first user selected 3D image
from the second set of 3D images. At step [212], the user may select any 3D image of a target item from the second set of 3D images which the user wishes to see placed inside the region. This allows a personalized online shopping experience, as the user can select an item which he may use in his daily life and wants to see whether that particular item fits inside the region. For instance, if a person is buying a scooter for a daily commute to the office, the user may like to see whether the boot space is enough to keep a laptop bag or a helmet inside the region. Further, if the user wants to use it to deliver grocery items, the user may want to see how a 5 kg rice bag or 10 kg of sugar fits inside the region. This feature gives users an added advantage of online shopping as compared to visiting the store offline, as these items might not be readily available at the store, and therefore the user may not be able to place these items in real time in the boot space to see whether certain items fit inside the region.
At step [214], the method comprises analysing, by the processing unit [104], the first user
selected 3D image to detect an actual volume of a target item corresponding to the first user
selected 3D image. The processing unit [104] analyses the first selection of the user and detects and monitors its volume before placing the same inside the region. This is done so that the processing unit [104] is at all times aware of the volume placed inside the region and the volume still available inside the region; based on this information, the processing unit [104] keeps updating the set of 3D images of items to inform the user what other 3D images of items may be chosen for placement inside the region.
At step [216], the method comprises automatically placing, by the processing unit [104],
the first user selected 3D image in the 3D image of the region based on the actual volume of
the target item corresponding to the first user selected 3D image and the actual volume of
the region. Therefore, after detecting the volume based on a 3D image of an item, the
processing unit [104] automatically places the 3D image of the item inside the region, and thus
the user can see in a 3D model how a chosen item may look if kept in the real world inside
the region of the product. The user may have a realistic look at the available space in the
region once the item is kept inside the region. Further, the user can compare and decide
based on whether an item of the user's daily use is comfortably placed inside the region
of the product or not. This reduces the hassle of the user to go to an actual showroom to
analyze the region of the product. Further to automatically place the first user selected 3D
image in the 3D image of the region, the processing unit [104] further automatically updates
the second set of 3D images into a third set of 3D images, wherein a target item corresponding
to each 3D image in the third set of 3D images is associated with an actual volume that is
lower than a target volume. The target volume is a combination of the actual volume of the
target item corresponding to the first user selected 3D image and the actual volume of the
region. For instance, suppose a boot of a scooter has a volume of 12 litres and can fit a helmet (volume 3 litres), a phone charger (volume 0.1 litre), a laptop bag (volume 5 litres) and a sugar packet (volume 5 litres), and the user selects a 3D image of said helmet. The 3D image of said helmet gets placed inside a 3D image of said boot, and the 3D image of said helmet may also be arranged inside the 3D image of said boot based on a user input. Further, the processing unit [104] compares the volume of each item again with an updated available volume of the boot. In case the volume of the sugar packet is greater than the available volume of the boot, the processing unit [104] creates a third set of 3D images containing the items which have a volume less than the available volume of the boot, such as the phone charger and the laptop bag.
Next, at step [218], the method comprises automatically arranging, by the processing unit
[104], the first user selected 3D image in the 3D image of the region based on a first user
input. Therefore, once a 3D image of an item is placed inside a 3D image of a region of a
product, the method allows the user to freely move the 3D image of the item inside the 3D image of the region. This corresponds to a real-world activity of the user: while using the product in their day-to-day life, the user may try to fit the item inside the product in different ways to
achieve optimal use of a space inside the product. This step takes the real‐world activity and
applies it to a 3D model and gives the users great freedom to move around the item inside
the region in 3D space.
Further, at step [220], the method comprises automatically predicting, by the processing
unit [104], the holding capacity of the region within the product based on at least one of the
automatic placement of the first user selected 3D image in the 3D image of the region and
the automatic arrangement of the first user selected 3D image in the 3D image of the region.
For example, suppose that, in a 3D image of a boot space of a car, a 3D image of a bag is placed based on a user selection of that 3D image of the bag, and the 3D image of the bag is then moved to a new position in the 3D image of the boot space based on a user input. The processing unit [104] in such a case automatically predicts the holding capacity of the boot space based on the
placement and/or movement of the 3D image of the bag inside the 3D image of the boot
space.
Further, the method receives, by the input unit [102] via the user interface, a second user
selection, wherein the second user selection comprises a second user selected 3D image from
the third set of 3D images. The processing unit [104] then analyses the second user selected 3D
image to detect an actual volume of a target item corresponding to the second user selected
3D image. Further, the method comprises automatically placing, by the processing unit
[104], the second user selected 3D image in the 3D image of the region based on the actual
volume of the target item corresponding to the second user selected 3D image and the target
volume. The target volume is the combination of the actual volume of the target item
corresponding to the first user selected 3D image and the actual volume of the region. The
second user selected 3D image is automatically placed in the 3D image of the region along
with the first user selected 3D image. Further, the method comprises automatically
arranging, by the processing unit [104], the second user selected 3D image in the 3D image of
the region based on a second user input and automatically predicting, by the processing unit
[104], the holding capacity of the region within the product based on at least one of the
automatic placement of the second user selected 3D image in the 3D image of the region, the
automatic arrangement of the second user selected 3D image in the 3D image of the region,
the automatic placement of the first user selected 3D image in the 3D image of the region,
and the automatic arrangement of the first user selected 3D image in the 3D image of the
region. The method may be repeated several times until the item list is exhausted or the updated available volume of the region is less than the volume of every remaining item in the item list. The user may also remove certain 3D images of items from the region or replace them with the 3D images of other items.
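The iterative nature of this flow, repeating the selection, placement and re-filtering until no remaining item fits, can be summarised in the following sketch; the items as (name, litres) pairs, the volumes and the choose callback standing in for each successive user selection are assumptions made for illustration.

```python
# Illustrative sketch only of the iterative flow: after each placement the
# candidate list is re-filtered against the updated available volume, and the
# loop stops once the item list is exhausted or nothing else fits. Items are
# (name, litres) pairs and the `choose` callback, standing in for each
# successive user selection, is an assumption made for illustration.
def fill_region(region_volume, items, choose):
    """Repeatedly place user-chosen items until no remaining item fits."""
    placed, available = [], region_volume
    candidates = [i for i in items if i[1] < available]
    while candidates:
        selection = choose(candidates)        # the first, second, ... user selection
        placed.append(selection)
        available -= selection[1]             # updated available volume of the region
        remaining = [i for i in items if i not in placed]
        candidates = [i for i in remaining if i[1] < available]
    return placed

# Scooter-boot example: always pick the largest item that still fits.
items = [("helmet", 3.0), ("phone charger", 0.1), ("laptop bag", 5.0), ("sugar packet", 10.0)]
print(fill_region(12.0, items, choose=lambda c: max(c, key=lambda i: i[1])))
# -> [('sugar packet', 10.0), ('phone charger', 0.1)]
```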
Hence, the method gives the user a realistic feel of the region within the product, and the user can be confident of how the space of the region can be utilized. The user may compare regions of different products and analyse what type of items may fit in these regions. Therefore, the user is better empowered to make informed decisions. Also, in an implementation, the method may recommend relevant objects to place within the region based on additional attributes obtained from the user, for example, obtaining a family size to recommend a number of products for a family of 2, 3 or 4, as sketched below.
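A minimal sketch of such a recommendation, assuming it simply scales suggested per-person item counts by the family size supplied by the user, is given below; the function name and the item names are illustrative assumptions.

```python
# Illustrative sketch only, assuming the recommendation simply scales suggested
# per-person item counts by a family size supplied by the user. The function
# name and the item names are assumptions.
def recommend_quantities(family_size: int, per_person: dict[str, int]) -> dict[str, int]:
    """Scale per-person item counts by the household size."""
    return {item: count * family_size for item, count in per_person.items()}

# e.g. how many water bottles and lunch boxes a family of 4 might want to fit.
print(recommend_quantities(4, {"water bottle": 1, "lunch box": 1}))
# -> {'water bottle': 4, 'lunch box': 4}
```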
Figure 3 illustrates an exemplary use case in accordance with exemplary embodiments of the
present disclosure.
At [302] in Figure 3, a scooter is displayed on an exemplary user interface of a user device with a description of the scooter for a user. The user may click on a hotspot placed on a seat of the scooter in order to know more details of a boot space of the scooter. The view then zooms into the seat of the scooter, the boot opens, and a pop-up with further details of the boot space is provided along with an option for the user to "See what fits".
At [304], on further clicking the "See what fits" button, the user gets options from a set of items of standard size, such as a helmet or a backpack, which the user may drag and drop to see how much space these items occupy in the boot and to visualize the boot space better. Here, both the boot (or the entire scooter) and the set of items are 3D models which the user may rotate or zoom to view from all angles.
At [306], once the user drags and drops an item such as a helmet, as shown in this case, the helmet may be placed in the boot, and the user may further rotate and/or zoom the scooter, the boot space and/or the helmet to check the space occupied from all angles, as desired, to get an idea of the boot space.
The present disclosure provides many advantages over the existing technology. One technical advantage is that, in a 3D space, the user can place items of their own choice inside the region to see which items fit and which do not. Another technical advantage is that, in a 3D space, the user is free to place either a single item or a combination of items inside the region within the product. The user is also at liberty to move the items inside the region freely and to decide on the optimum use of the space within the region.
While considerable emphasis has been placed herein on the disclosed embodiments, it will
be appreciated that many embodiments can be made and that many changes can be made
to the embodiments without departing from the principles of the present disclosure. These
and other changes in the embodiments of the present disclosure will be apparent to those
skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.
I/We claim:
1. A method for automatically predicting a holding capacity of a region within a product,
the method comprising:
‐ receiving, at a processing unit [104] by an input unit [102], a 3‐dimensional
(3D) image of the product and a first set of 3D images, wherein the 3D
image of the product comprises at least a 3D image of the region and the
first set of 3D images comprises a 3D image of each item from a set of
items;
‐ analysing, by the processing unit [104], the 3D image of the region and the
3D image of said each item from the set of items, wherein:
the 3D image of the region is analysed to detect an actual volume of
the region, and
the 3D image of said each item from the set of items is analysed to
detect an actual volume of said each item from the set of items;
‐ identifying, by an identification unit [106], a set of target items from the
set of items based on the actual volume of the region and the actual
volume of said each item from the set of items, wherein an actual volume
of each target item from the set of target items is lower than the actual
volume of the region;
‐ identifying, by the identification unit [106], a second set of 3D images from
the first set of 3D images, wherein the second set of 3D images comprises
a 3D image of said each target item from the set of target items;
‐ receiving, by the input unit [102] via a user interface, a first user selection,
wherein the first user selection comprises a first user selected 3D image
from the second set of 3D images;
‐ analysing, by the processing unit [104], the first user selected 3D image to
detect an actual volume of a target item corresponding to the first user
selected 3D image;
‐ automatically placing, by the processing unit [104], the first user selected
3D image in the 3D image of the region based on the actual volume of the target item corresponding to the first user selected 3D image and the
actual volume of the region;
‐ automatically arranging, by the processing unit [104], the first user
selected 3D image in the 3D image of the region based on a first user input;
and
‐ automatically predicting, by the processing unit [104], the holding capacity
of the region within the product based on at least one of the automatic
placement of the first user selected 3D image in the 3D image of the region
and the automatic arrangement of the first user selected 3D image in the
3D image of the region.
2. The method as claimed in claim 1, wherein the region comprises a hollow space.
3. The method as claimed in claim 1, wherein the set of items comprises at least one of
one or more grocery items, one or more utility items, one or more sports items and
one or more clothing items.
4. The method as claimed in claim 1, wherein the product comprises one of a vehicle, a
refrigerator, a furniture, a luggage and a washing machine.
5. The method as claimed in claim 1, wherein the automatically placing, by the
processing unit [104], the first user selected 3D image in the 3D image of the region
further comprises automatically updating the second set of 3D images into a third set
of 3D images, wherein a target item corresponding to each 3D image in the third set
of 3D images is associated with an actual volume that is lower than a target volume.
6. The method as claimed in claim 5, wherein the target volume is a combination of the
actual volume of the target item corresponding to the first user selected 3D image and
the actual volume of the region.
7. The method as claimed in claim 5, the method further comprising:
‐ receiving, by the input unit [102] via the user interface, a second user
selection, wherein the second user selection comprises a second user
selected 3D image from the third set of 3D images,
‐ analysing, by the processing unit [104], the second user selected 3D image
to detect an actual volume of a target item corresponding to the second
user selected 3D image;
‐ automatically placing, by the processing unit [104], the second user
selected 3D image in the 3D image of the region based on the actual
volume of the target item corresponding to the second user selected 3D
image and the target volume;
‐ automatically arranging, by the processing unit [104], the second user
selected 3D image in the 3D image of the region based on a second user
input; and
‐ automatically predicting, by the processing unit [104], the holding capacity
of the region within the product based on at least one of the automatic
placement of the second user selected 3D image in the 3D image of the
region, the automatic arrangement of the second user selected 3D image
in the 3D image of the region, the automatic placement of the first user
selected 3D image in the 3D image of the region, and the automatic
arrangement of the first user selected 3D image in the 3D image of the
region.
8. A system for automatically predicting a holding capacity of a region within a product,
the system comprising:
‐ an input unit [102], configured to receive, at a processing unit [104], a 3‐
dimensional (3D) image of the product and a first set of 3D images, wherein
the 3D image of the product comprises at least a 3D image of the region
and the first set of 3D images comprises a 3D image of each item from a
set of items;
‐ the processing unit [104], configured to:
analyse the 3D image of the region and the 3D image of said each item
from the set of items, wherein:
the 3D image of the region is analysed to detect an actual
volume of the region, and
the 3D image of said each item from the set of items is analysed
to detect an actual volume of said each item from the set of
items; and
‐ an identification unit [106], configured to:
identify a set of target items from the set of items based on the actual
volume of the region and the actual volume of said each item from the
set of items, wherein an actual volume of each target item from the set
of target items is lower than the actual volume of the region, and
identify a second set of 3D images from the first set of 3D images,
wherein the second set of 3D images comprises a 3D image of said each
target item from the set of target items, wherein:
the input unit [102] is further configured to receive via a user
interface, a first user selection, wherein the first user selection
comprises a first user selected 3D image from the second set of
3D images, and
the processing unit [104] is further configured to:
analyse the first user selected 3D image to detect an
actual volume of a target item corresponding to the first
user selected 3D image,
automatically place the first user selected 3D image in
the 3D image of the region based on the actual volume
of the target item corresponding to the first user
selected 3D image and the actual volume of the region,
automatically arrange the first user selected 3D image in
the 3D image of the region based on a first user input,
and
automatically predict the holding capacity of the region
within the product based on at least one of the
automatic placement of the first user selected 3D image
in the 3D image of the region and the automatic arrangement of the first user selected 3D image in the
3D image of the region.
9. The system as claimed in claim 8, wherein the region comprises a hollow space.
10. The system as claimed in claim 8, wherein the set of items comprises at least one of
one or more grocery items, one or more utility items, one or more sports items and
one or more clothing items.
11. The system as claimed in claim 8, wherein the product comprises one of a vehicle, a
refrigerator, a furniture, a luggage and a washing machine.
12. The system as claimed in claim 8, wherein to automatically place the first user selected
3D image in the 3D image of the region, the processing unit [104] is further configured
to automatically update the second set of 3D images into a third set of 3D images,
wherein a target item corresponding to each 3D image in the third set of 3D images is
associated with an actual volume that is lower than a target volume.
13. The system as claimed in claim 12, wherein the target volume is a combination of the
actual volume of the target item corresponding to the first user selected 3D image and
the actual volume of the region.
14. The system as claimed in claim 12, wherein the input unit [102] is further configured
to receive via the user interface, a second user selection, wherein the second user
selection comprises a second user selected 3D image from the third set of 3D images, and
wherein the processing unit [104] is further configured to:
analyse the second user selected 3D image to detect an actual volume
of a target item corresponding to the second user selected 3D image,
automatically place the second user selected 3D image in the 3D image
of the region based on the actual volume of the target item
corresponding to the second user selected 3D image and the target
volume,
automatically arrange the second user selected 3D image in the 3D
image of the region based on a second user input, and
automatically predict the holding capacity of the region within the
product based on at least one of the automatic placement of the
second user selected 3D image in the 3D image of the region, the
automatic arrangement of the second user selected 3D image in the 3D
image of the region, the automatic placement of the first user selected
3D image in the 3D image of the region, and the automatic
arrangement of the first user selected 3D image in the 3D image of the
region.
Dated this the 5th day of October, 2023