Abstract: A dimension measurement system (100) includes a conveyor unit (102) to convey one or more objects from one end to the other end of a conveyor (104), an image capturing unit (106) to capture a video of a predetermined section of the conveyor, and a video processing unit (108) including a video processing module (110) to fragment the video at a predetermined frame rate and process each frame to identify a reference window of a predetermined location and dimension, and a dimension measurement module (112) to identify one or more reference windows having a segment of the object, to obtain a plurality of segments for the object. For each of the plurality of segments of the object, a corresponding segment width is determined based on a segment top edge width and a segment bottom edge width. Subsequently, an average width is obtained for each object.
TECHNICAL FIELD
[0001] The present subject matter relates, in general, to dimension
measuring systems, and in particular, to dimension measuring systems for irregular objects.
BACKGROUND
[0002] Measuring the dimensions of objects, whether raw material or in finished
form, is an essential part of industrial processes. For instance, dimensions of objects, such as raw material, may be measured for quantitative and qualitative purposes, particularly in agriculture and food-processing applications, as the quality of raw material is essential for ensuring a good quality product. Further, dimensions of finished goods may be measured to ensure compliance with industry standards and customer requirements. Objects, such as raw material and/or finished goods, have traditionally been measured by hand with tape measures, rulers, calipers, or other measurement tools. However, to conserve time and manual labor, the dimensions of objects are measured using dimension measuring systems.
BRIEF DESCRIPTION OF DRAWINGS
[0003] The detailed description is described with reference to the
accompanying figures. It should be noted that the description and figures are merely examples of the present subject matter and are not meant to represent the subject matter itself.
[0004] Figure 1 illustrates a dimension measurement system,
according to an example implementation of the present subject matter.
[0005] Figure 2 illustrates a computing environment having a
dimension measurement system, according to another example implementation of the present subject matter.
[0006] Figures 3A-3D illustrate different stages of operation of the
dimension measurement system, according to an example implementation of the present subject matter.
[0007] Figure 4 illustrates a method for measuring dimension of one
or more objects, according to an example implementation of the present subject matter.
[0008] Throughout the drawings, identical reference numbers
designate similar, but not necessarily identical, elements. The figures are not necessarily to scale, and the size of some parts may be exaggerated to more clearly illustrate the example shown. Moreover, the drawings provide examples and/or implementations consistent with the description; however, the description is not limited to the examples and/or implementations provided in the drawings.
DETAILED DESCRIPTION
[0009] In industries, dimension measurement of objects, such as
raw material and/or finished goods, may be vital to assess the quality of the object and ensure the quality of the final product. For instance, in pulp making industries, large diameter wood logs may be preferred to facilitate production of more pulp from the same wood quantity. Further, in rice industries, determination of the texture and shape of the grain is an essential process to define the quality of rice, wheat, coffee beans, and the like. Furthermore, in tobacco industries, the shape, size, and color of the tobacco leaves determine the quality of the tobacco mix. Similarly, other industries may require measurement of finished goods to ensure that the finished goods meet industry standards. In one conventional technique, the dimensions of the objects are measured, using conveyor systems, when the objects are being transported to/from storage facilities.
[0010] The conveyor systems usually include a set of cameras,
infrared transmitters, and transportation conveyor speed sensors. The
camera collects sequential images of objects being transported over the conveyor to detect the coordinates of each point of the object. The coordinates are then interpolated at regular intervals to describe the cross-section (top view) of the object.
[0011] However, measurements made using the conventional
technique are not very accurate and fail to determine dimensions of irregular objects, i.e., where the objects have non-uniform width, such as wood logs, cashew nuts, rice, tobacco leaves, and wheat grains. The use of multiple cameras, laser transmitters, and multiple sensors also makes the system complex and costly. Further, such techniques usually measure dimensions of a single object. For instance, if multiple objects are placed on the conveyor, an operator of the conveyor has to manually sort these multiple objects so that only a single object is processed at a given time. Thus, dimensions of multiple objects cannot be measured at the same time. Further, the conventional technique may be used only with a particular conveyor system, making the conventional technique less flexible with other conveyor systems. Furthermore, in certain industries, objects may be placed in a transverse direction with respect to the conveyor, thereby limiting the maximum length of an object that can be measured to the width of the conveyor.
[0012] The present subject matter discloses example
implementations of a dimension measurement system. In an example implementation of the present subject matter, the dimension measurement system is to capture a video of a section of a conveyor having at least one object. Once the video is captured, the video of the section of the conveyor may be segmented to obtain a plurality of frames. Subsequently, each of the plurality of frames is processed to identify a reference window of a predetermined dimension within each frame. Each reference window may contain a segment of the captured object. Further, one or more reference windows having a segment of the captured object are identified. For each
of the plurality of segments of the object, a corresponding segment width is determined based on a segment top edge width and a segment bottom edge width. The segment top edge width is the width of a top edge of the segment within the reference window, and the segment bottom edge width is the width of a bottom edge of the segment within the reference window. Based on the segment width obtained for each segment of the object, an average width of the object is determined.
[0013] In one example implementation of the present subject
matter, to measure the dimensions of one or more objects, an operator may place the one or more objects on a conveyor unit of the dimension measurement system. An image capturing unit may then capture a video of a predetermined section of the conveyor such that the one or more objects pass through the section of the conveyor when the video is being captured. Further, the video of the section of the conveyor is segmented, at a predetermined frame rate, to obtain a plurality of frames.
[0014] Subsequently, a dimension measurement module processes
each reference window to identify one or more reference windows having
a segment of the one or more objects, to obtain a plurality of segments for
the one or more objects. In one example, the dimension measurement
module initially processes each reference window to detect two or more
edge lines within the reference window, using an edge detection algorithm.
The detected edge lines are probable edges of the segments of the one or
more objects. Based on the edge lines, one or more segments in the
reference window are identified. Subsequently, the dimension
measurement module processes each reference window to identify one or more reference windows having segments of the same object.
[0015] In one example, the dimension measurement module
compares the bottom edges of each of the one or more segments within a reference window with the top edges of one or more segments within a consecutive reference window to identify the one or more reference
windows having segments corresponding to the same object. In an example implementation, two consecutive reference windows refer to reference windows obtained from two sequentially consecutive frames, in accordance with the sequence in which the frames were captured in the video. Further, for each segment of the same object, the segment width is determined. Subsequently, the object width is determined as an average of the segment widths of the multiple segments. Further, for each segment, a segment length is determined as a distance between the segment top edge and the segment bottom edge within the reference window incorporating the segment. Subsequently, an object length is determined, for each object, as a sum of the segment lengths obtained for each segment of the object.
[0016] The present subject matter thus facilitates measuring the
dimensions of one or more objects placed on a conveyor. As the objects are placed on the conveyor along the direction of movement of the conveyor, there is no limitation on the maximum length of an object that can be measured. Further, the camera used in the present subject matter is a camera with a built-in illumination system, such as IR, laser, etc. As a result, video of the objects can be captured even when there is no light. This also eliminates the use of external IR illumination systems and saves system space. Further, for each object of the one or more objects, a segment length is determined for each segment corresponding to the object. Further, only a reference window of predetermined dimension and location is processed from each of the plurality of frames. As a result, considerable processing power is saved and an improved yet efficient dimension measurement system is obtained. Further, decreasing the dimension of the reference window results in increased measurement resolution, thus allowing measurement of dimensions of irregular and very small objects.
[0017] Further, while different segments of the object are
determined, two consecutive segments are compared to each other such that a bottom pixel of a first segment matches a top pixel of a second segment. In this way, the dimension measurement system correctly distinguishes an object from other objects, allowing measurement of multiple different objects at the same time.
[0018] The present subject matter is further described with
reference to Figures 1 to 4. It should be noted that the description and figures merely illustrate principles of the present subject matter. Various arrangements may be devised that, although not explicitly described or shown herein, encompass the principles of the present subject matter. Moreover, all statements herein reciting principles, aspects, and examples of the present subject matter, as well as specific examples thereof, are intended to encompass equivalents thereof.
[0019] Figure 1 illustrates a block diagram of a dimension
measurement system 100, according to an example implementation of the present subject matter. The dimension measurement system 100, hereinafter referred to as system 100, may be used to measure dimensions of objects, such as wood logs, cashew nuts, rice, tobacco leaves, and wheat grains.
[0020] In one implementation, the system 100 includes a conveyor
unit 102 having a conveyor 104. The conveyor 104 is to convey one or more objects from a first end to a second end of the conveyor 104 in a direction of movement of the conveyor 104. In one example, the direction of movement of the conveyor 104 is along a longitudinal axis of the conveyor 104.
[0021] The system 100 further includes an image capturing unit 106
to capture a video of a predetermined section of the conveyor 104 in real-time. In one example, the section of the conveyor 104, being filmed by the image capturing unit 106, is selected such that each of the one or more
objects pass through the section of the conveyor 104 when the video is being captured. In an example, the image capturing unit 106 may be positioned in the vicinity of the conveyor 104 at a predetermined angle with respect to the conveyor 104.
[0022] The system 100 further includes a video processing unit 108
to process the video for determining the dimensions of the one or more objects. In one example, the video processing unit 108 includes a video processing module 110 to fragment the captured video at a predetermined frame rate to obtain a plurality of frames. The video processing module 110 may further process each of the plurality of frames to identify a reference window of a predetermined dimension and location within each frame. In an example, a segment of at least one object from the one or more objects is captured within each reference window.
[0023] The video processing unit 108 further includes a dimension
measurement module 112 to determine dimensions of each of the one or more objects. In an example, for each object from the one or more objects, the dimension measurement module 112 may identify one or more reference windows having a segment of the object, to obtain a plurality of segments for the object. Further, for each of the plurality of segments of the object, the dimension measurement module 112 may determine, a corresponding segment width based on a segment top edge width and a segment bottom edge width. The segment top edge width is the width of a top edge of the segment within the reference window. The segment bottom edge width is the width of a bottom edge of the segment within the reference window. Subsequently, the dimension measurement module 112 may obtain an average width of the object based on the segment width obtained for each segment of the object.
[0024] Figure 2 illustrates an object handling environment 200
implementing the system 100, according to an example implementation of the present subject matter. In one example, the object handling
environment 200 may be a storage and manufacturing unit of an industry, such as a paper making, tobacco manufacturing, or rice and wheat grain processing industry. In another example, the object handling environment 200 may be a regulatory compliance unit.
[0025] The object handling environment 200 may include the
system 100 for measuring dimensions of objects, such as raw material and/or finished goods within the object handling environment 200. For instance, the system 100 may be used for measuring dimensions of objects, such as wood logs, tobacco leaves, bricks, tiles, cashew nuts, rice and wheat grains.
[0026] As previously described, the system 100 may include the
conveyor unit 102, the image capturing unit 106, and the video processing unit 108. Examples of the video processing unit 108 include, but are not limited to, desktop computers, laptops, tablets, portable computers, workstations, mainframe computers, servers, and network servers. The present approaches may also be implemented in other types of video processing units 108 without deviating from the scope of the present subject matter. Examples of the image capturing unit 106 include, but are not limited to, a compact digital camera, a DSLR camera, a mirrorless camera, and an analog camera. In an example, the image capturing unit 106 may be an infrared vision camera. In an example, the camera has a 1/3” progressive scan CMOS image sensor with a 2.8 millimeter (mm) lens, a 108 degree horizontal field of view, an IR cut filter facility, an IR range of about 30 meters, and is capable of communicating via the TCP/IP protocol.
[0027] In an example implementation, the image capturing unit 106
and the video processing unit 108 may be connected with each other over a communication network 202.
[0028] The communication network 202 may be a wireless network,
a wired network, or a combination thereof. The communication network
202 can also be an individual network or a collection of many such
individual networks, interconnected with each other and functioning as a single large network, e.g., the Internet or an intranet. The communication network 202 can be one of the different types of networks, such as an intranet, a local area network (LAN), a wide area network (WAN), and the Internet. In an example, the communication network 202 may include any communication network that uses any of the commonly used protocols, for example, Hypertext Transfer Protocol (HTTP) and Transmission Control Protocol/Internet Protocol (TCP/IP). In an example, the communication network may use armored fiber optic cable. In an example, the fiber optic cable is a 6F SM G.652 Unitube Armoured O.F cable. In one example implementation, the communication network 202 may include a media convertor 204 to convert data from the Ethernet cables, used to connect the image capturing unit 106 and the video processing unit 108 to the communication network 202, to a data format compatible for transmission over optical fiber, and vice versa, for lossless transmission over long distances. Further, the use of optical fiber facilitates a high data transmission rate and low noise.
[0029] The processor(s) may include microprocessors,
microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any other devices that manipulate signals and data based on computer-readable instructions. Further, functions of the various elements shown in the figures, including any functional blocks labeled as “processor(s)”, may be provided through the use of dedicated hardware as well as hardware capable of executing computer-readable instructions.
[0030] In one example, the image capturing unit 106 includes
input/output (I/O) interface(s) 206, memory 208, and processor(s) 210. The I/O interface(s) 206 may facilitate communication between the image capturing unit 106, the video processing unit 108, and various other computing devices connected in a network environment. The memory 208
may store videos of a section of the conveyor 104 captured by the image capturing unit 106. The memory 208 may include any non-transitory computer-readable medium including, for example, volatile memory such as RAM, or non-volatile memory such as EPROM, flash memory, and the like. The image capturing unit 106 further includes an infrared illumination source 212 and an imaging device 214.
[0031] The infrared illumination source 212 emits light in infrared
spectrum to allow the image capturing unit 106 to capture video even if no ambient light is present. Examples of the infrared illumination source 212 may include, but are not limited to, infrared LEDs, infrared lasers, and filtered incandescent lamps. The image capturing unit 106 further includes the imaging device 214 to capture a video of a section of the conveyor 104 having one or more objects whose dimensions are to be measured.
[0032] In one example, the video processing unit 108 may be
controlled by an administrator responsible for handling objects within the object handling environment 200, to facilitate the process of dimension measurement of objects placed on the conveyor 104. The video processing unit 108 includes processor(s) 216, input/output (I/O) interface(s) 218, and memory 220.
[0033] The I/O interface(s) 218 may facilitate communication
between the image capturing unit 106, the video processing unit 108, and various other computing devices connected in a network environment. The interface(s) 218 may also provide a communication pathway for one or more components of the system 100. Examples of such components include, but are not limited to, input device, such as keyboards, computer mice, and a touch enabled graphical user interface.
[0034] The memory 220 may store one or more computer-readable
instructions, which may be fetched and executed to provide interfaces to users for providing measurement instructions. The video processing unit 108 further includes module(s) 222 and data 224.
[0035] The module(s) 222 may include routines, programs, objects,
components, and data structures, which perform particular tasks or implement particular abstract data types. The modules may also be implemented as signal processor(s), state machine(s), logic circuitries, and/or any other device or component that manipulates signals based on operational instructions. Further, the modules can be implemented by hardware, by computer-readable instructions executed by a processing unit, or by a combination thereof.
[0036] In an example, the module(s) 222 may include the
dimension measurement module 112, the video processing module 110, a communication module 226, and other module(s) 228. The other module(s) 228 may include programs or coded instructions that supplement applications and functions, for example, programs in an operating system of the system 100. The data 224 includes data that is either stored or generated as a result of functionalities implemented by any of the module(s) 222. Further, the data 224 may include dimension measurement data 230, video processing data 232, image capturing data 234, and other data 236.
[0037] As previously discussed, the system 100 may be
implemented to measure dimensions of one or more objects handled in the object handling environment 200. In one example, the system 100 may be located in the object handling environment 200, near a storage area of the objects, and be operated remotely by the administrator. In another example, the system 100 may be located in the object handling environment 200, at a remote location from the storage area of the objects, and be operated by the administrator.
[0038] For the sake of brevity and clarity, further description of
Figure 2 is provided in conjunction with Figures 3A-3D. Figures 3A-3D illustrate different stages of operation of the dimension measurement system 100, according to an example implementation of the present subject matter.
[0039] In operation, to measure dimensions of one or more objects,
each object may be placed on the conveyor 104 of the conveyor unit 102. In one example, the one or more objects may be placed on the conveyor 104, such as the conveyor belt of the conveyor unit 102 by an operator of the object handling environment 200. The conveyor unit 102 may convey the object from a first end to a second end of the conveyor 104. In an example, the conveyor 104 runs at a predetermined speed. In another example, the object placed at the first end of the conveyor moves along a direction of movement of the conveyor. For example, as illustrated in Figure 3A, a first object 302 is placed over the conveyor 104 of the conveyor unit 102.
[0040] As the objects are being conveyed on the conveyor, the movement
of each object is captured by the image capturing unit 106. In one example, the image capturing unit 106 captures a video of a predetermined section, such as a section 304 of the conveyor 104. In an example, the image capturing unit 106 is positioned in the vicinity of the conveyor, at a predetermined angle with respect to the conveyor 104. In an example, the predetermined angle may be 90 degrees with respect to the conveyor 104. In an example, the image capturing unit 106 may be positioned to be substantially perpendicular to the longitudinal axis of the conveyor 104, such that the predetermined angle may be between 80 and 100 degrees with respect to the longitudinal axis of the conveyor 104.
[0041] In an example, the imaging device 214 of the image
capturing unit 106 captures the video of the predetermined section, say, the section 304 of the conveyor 104, such that each of the one or more objects passes through the section of the conveyor 104 when the video is being captured. In an example, the imaging device 214 may capture video at a predetermined frame rate. Further, the image capturing unit 106 may activate the infrared illumination source 212 to illuminate the conveyor 104 while the video is being captured by the imaging device 214, to allow the
image capturing unit 106 to capture the video in all conditions of ambient lighting, such as in darkness or hazy environment.
[0042] Once the video is captured, the image capturing unit 106
may store the video in memory 208 or communicate the video to the video processing unit 108 over the communication network 202. In an example, the image capturing unit 106 may initially share the video with the media convertor 204, through the I/O interfaces 206, for converting the video to a format that may be easily communicated over the communication network 202 using various techniques or protocol, such as using fiber optics cables. In one example, the media convertor 204 may reconvert the video to a video format compatible to be processed by the video processing unit 108, once the video is received at a video processing unit end of the communication network 202.
[0043] In one example, the video processing unit 108 may receive
the video and save the video in the video processing data 232. The video processing module 110 may then fragment the video at a predetermined frame rate to obtain a plurality of frames. In an example, the fragmentation of the video is performed at a predetermined frame rate such that the whole object is obtained in the plurality of frames. In an example, the predetermined frame rate may be determined in frames per second.
[0044] The predetermined frame rate may be determined based at
least on a conveyor speed and the predetermined dimension and location of a reference window. An exemplary function for determining the number of frames per second is provided in equation 1 below:
Number of frames per second = S/W ... (1)
where S = s × K; K is the number of pixels corresponding to 1 meter of the conveyor; s is the speed of the conveyor; and W is the width of the reference window (in pixels).
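For illustration, equation 1 may be sketched as follows, assuming the conveyor speed is given in meters per second and the calibration constant K in pixels per meter; the function and parameter names are illustrative, not part of the described system:

```python
def frames_per_second(conveyor_speed_mps: float,
                      pixels_per_meter: float,
                      window_width_px: float) -> float:
    """Frame rate per equation 1: S / W, where S = s * K.

    conveyor_speed_mps -- conveyor speed s, in meters per second
    pixels_per_meter   -- K, pixels spanning 1 meter of conveyor
    window_width_px    -- W, reference window width in pixels
    """
    s_pixels_per_second = conveyor_speed_mps * pixels_per_meter  # S = s * K
    return s_pixels_per_second / window_width_px


# A conveyor moving at 0.5 m/s, imaged at 600 pixels per meter,
# with a 15-pixel-wide reference window:
rate = frames_per_second(0.5, 600, 15)  # → 20.0 frames per second
```

Intuitively, S is the conveyor speed expressed in pixels per second, so sampling S/W frames per second advances the scene by exactly one window width between frames, covering the whole object without gaps.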
[0045] Each of the plurality of frames obtained upon fragmentation
are further processed by the video processing module 110 to identify a
reference window corresponding to the object. In an example, the video processing module 110 may identify a reference window, such as a reference window 306, of a predetermined dimension and location within each frame. In an example, the predetermined dimension of the reference window for each of the plurality of frames may be a few pixels wide, for instance in a range of 10 to 20 pixels. In an example, the dimension of the reference window may be stored directly in the video processing data 232. In another example, the administrator may input the dimension of the reference window during calibration of the system 100. In an example, for each of the plurality of frames, the reference window spans laterally across the cross-section of the conveyor 104, as illustrated in Figures 3A and 3B.
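A minimal sketch of extracting such a reference window from each frame, assuming frames are grayscale NumPy arrays and the window is a strip a few pixels wide along the direction of conveyor motion; the parameter names and the row-offset convention are assumptions for illustration:

```python
import numpy as np

def extract_reference_window(frame: np.ndarray,
                             top_row: int,
                             window_width_px: int) -> np.ndarray:
    """Return the strip of the frame serving as the reference window:
    all columns (spanning the conveyor cross-section), only
    window_width_px rows along the direction of motion."""
    return frame[top_row:top_row + window_width_px, :]


frame = np.zeros((480, 640), dtype=np.uint8)   # one grayscale frame
window = extract_reference_window(frame, top_row=232, window_width_px=16)
# window.shape → (16, 640): a 16-pixel strip spanning the conveyor
```

Processing only this strip, rather than the full 480 × 640 frame, is what yields the processing savings described above.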
[0046] For each reference window, a corresponding segment of at
least one object of the one or more objects is captured. For example, as illustrated in Figure 3B, the reference window 306 contains a segment 308 of the first object 302 placed on the conveyor. Thus, only a reference window is processed for each of the plurality of frames rather than the complete frame, thereby making processing faster. Once the reference windows for each of the frames are identified, the reference windows are further processed by the dimension measurement module 112 for determining the dimensions of the objects.
[0047] The dimension measurement module 112 may initially
process the reference windows to identify different segments of the one or more objects within the reference window. In one example, the dimension measurement module 112 processes each reference window to detect two or more edge lines within the reference window using, for example, an edge detection algorithm. The edge lines may indicate probable edges of the one or more objects captured in the reference window and may be used to segregate the different objects captured in the reference window. For example, the dimension measurement module 112 may detect the
edge lines 314, 316, 318, 320, 322, and 324 from the surface of the conveyor 104, as illustrated in Figure 3B. In an example, the edge detection algorithm may be a line segment detector. In an example, the edge lines may be referred to as probable edges of segments of the object. In an example, the edge detection algorithm may include partitioning the image of the object into line-support regions by grouping connected pixels that share the same gradient angle up to a certain tolerance, and finding the line segment that best approximates each line-support region. Further, based on information in the line-support region, the edge detection algorithm decides whether or not to validate each line segment. One skilled in the art will appreciate that although the above mentioned steps are explained with regard to an edge detection algorithm, any number of steps may be carried out. Further, one of ordinary skill in the art will recognize that one or more steps may be combined and/or divided into sub-steps.
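As one simplified illustration of such edge detection, a horizontal-gradient scan over the reference window can flag columns where pixel intensity changes sharply; a production system might instead use a full line segment detector as described above, which this sketch does not implement:

```python
import numpy as np

def detect_edge_columns(window: np.ndarray, threshold: float) -> list:
    """Return column indices in the reference window where the mean
    horizontal intensity gradient exceeds the threshold -- probable
    left/right edges of object segments."""
    strip = window.astype(float)
    grad = np.abs(np.diff(strip, axis=1))   # horizontal gradient
    column_strength = grad.mean(axis=0)     # average over the strip height
    return [int(c) for c in np.nonzero(column_strength > threshold)[0]]


# A dark conveyor (intensity 10) carrying one bright segment (intensity 200)
# across columns 20-39 of a 16 x 64 window:
window = np.full((16, 64), 10, dtype=np.uint8)
window[:, 20:40] = 200
edges = detect_edge_columns(window, threshold=50.0)  # → [19, 39]
```

The two flagged columns bracket the segment, mirroring how edge lines 314 and 316 bound the segment 308 in Figure 3B.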
[0048] Upon the detection of the edge lines, the dimension
measurement module 112 may identify one or more segments for each of the one or more objects. In an example, the dimension measurement module 112 may determine the one or more segments for each of the one or more objects based on a pixel difference between a region corresponding to the segment and a region corresponding to the conveyor in the reference window. To obtain the pixel difference, the dimension measurement module 112 may compare pixel properties, such as pixel color, of the surface region identified as probably corresponding to an edge line and a neighboring region around that surface region. In an example, the corresponding segment of the object may be identified only when the pixel difference between the region corresponding to a segment and the region corresponding to the conveyor in a reference window is more than a threshold value. For example, as illustrated in Figure 3B, the segments 308, 310, and 312 are distinguished from the corresponding regions of the conveyor 104 based on the pixel difference between the segment and the
neighboring pixels. Edge lines 314, 316, 318, 320, 322, and 324 define the segments 308, 310, and 312, while the dashed lines define the region corresponding to the conveyor.
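The thresholding described above can be sketched as follows, assuming grayscale pixel values and a pair of candidate edge columns bounding each segment; all names and the scalar conveyor-intensity model are assumptions for illustration:

```python
import numpy as np

def confirm_segments(window: np.ndarray,
                     candidate_spans: list,
                     conveyor_intensity: float,
                     threshold: float) -> list:
    """Keep only the candidate (left, right) column spans whose mean
    intensity differs from the conveyor surface by more than threshold."""
    confirmed = []
    for left, right in candidate_spans:
        region_mean = float(window[:, left:right].mean())
        if abs(region_mean - conveyor_intensity) > threshold:
            confirmed.append((left, right))
    return confirmed


window = np.full((16, 64), 10, dtype=np.uint8)   # conveyor surface
window[:, 20:40] = 200                            # a real object segment
spans = confirm_segments(window,
                         candidate_spans=[(20, 40), (50, 60)],
                         conveyor_intensity=10.0,
                         threshold=30.0)
# spans → [(20, 40)]: the (50, 60) span is just conveyor and is rejected
```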
[0049] For each of the one or more segments within the reference
window, the dimension measurement module 112 further ascertains a top edge and a bottom edge. In an example, the top edge and the bottom edge are ascertained based on a top right edge, a top left edge, a bottom right edge, and a bottom left edge within the reference window. For example, as illustrated in Figure 3C, a top edge 326 and a bottom edge 328 are ascertained for the segment 308 of the reference window 306.
[0050] The dimension measurement module 112 further compares
the bottom edge with a consecutive top edge of each of the one or more segments within the reference window to identify the one or more reference windows having segments corresponding to the same object. In an example, two consecutive reference windows refer to reference windows obtained from two sequentially consecutive frames, in accordance with the sequence in which the frames were captured in the video. For example, as illustrated in Figure 3D, the bottom edge 328 of a segment 308-1 within the reference window 306-1 is compared with a top edge 330 of a segment 308-2 within the reference window 306-2, to identify that the segments 308-1 and 308-2 captured in the reference windows 306-1 and 306-2 belong to the same object, say, the first object 302.
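A simplified sketch of this matching step, assuming each edge is described by its (left, right) column span and that edges of the same object overlap horizontally between consecutive windows; the data layout is an illustrative assumption:

```python
def same_object(bottom_edge: tuple, next_top_edge: tuple) -> bool:
    """True when the bottom edge span of a segment in one reference
    window horizontally overlaps the top edge span of a segment in the
    next window -- i.e. both segments likely belong to the same object."""
    b_left, b_right = bottom_edge
    t_left, t_right = next_top_edge
    return max(b_left, t_left) < min(b_right, t_right)


# The segment in window n ends at columns 20-40; candidates in window
# n+1 start at columns 22-38 (same object) and 50-60 (different object):
assert same_object((20, 40), (22, 38)) is True
assert same_object((20, 40), (50, 60)) is False
```

Because consecutive frames advance the scene by one window width, matching bottom edges to the next window's top edges stitches a chain of segments back into a single object, even when several objects cross the section at once.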
[0051] Thereafter, the dimension measurement module 112 may identify all the reference windows having segments corresponding to an object. For example, the dimension measurement module 112 may identify all reference windows 306 having the segments 308 (308-1, 308-2 …) for the first object 302. Similarly, the dimension measurement module 112 may identify reference windows 306 having the segments 310 (310-1, 310-2 …) and the segments 312 (312-1, 312-2 …) for the second object and the third object, respectively. Once all the reference windows having segments corresponding to an object are determined, the dimension measurement module 112 determines, for each reference window, an average segment width based on the segment bottom edge width and the segment top edge width. In an example, the average segment width is referred to as the average diameter of the corresponding object.
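The width computation described above can be sketched as follows; the function names and the (left_x, right_x) edge representation are illustrative assumptions:

```python
def segment_width(top_edge, bottom_edge):
    """Per-window segment width: average of the top edge width and the
    bottom edge width, each edge given as a (left_x, right_x) pair."""
    top_w = top_edge[1] - top_edge[0]
    bottom_w = bottom_edge[1] - bottom_edge[0]
    return (top_w + bottom_w) / 2.0

def average_object_width(edges_per_window):
    """Average width (diameter) of an object over all the reference windows
    identified as containing its segments."""
    widths = [segment_width(top, bottom) for top, bottom in edges_per_window]
    return sum(widths) / len(widths)
```
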
[0052] For each segment corresponding to the object, the dimension measurement module 112 further determines a segment length as a distance between the segment top edge and the segment bottom edge within the reference window incorporating the segment. In an example, an object length is determined as a sum of the segment lengths obtained for each segment of the object. In an example, the object length and diameter may be obtained in pixels. In another example, the object length and diameter may be obtained in an SI unit of length, such as meters. In yet another example, the object length and diameter may be obtained in any known unit of length.
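The length summation described above reduces to the following sketch, assuming each segment is represented by the row positions of its top and bottom edges (the function name is illustrative):

```python
def object_length(segment_spans):
    """Object length: sum of the per-window segment lengths, each length
    being the row distance between the segment's top and bottom edges.

    segment_spans: list of (top_row, bottom_row) pairs, one per reference
    window containing a segment of the object.
    """
    return sum(bottom_row - top_row for top_row, bottom_row in segment_spans)
```
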
[0053] In an example, the object length and width may be converted from pixels to an SI unit of length. To facilitate the conversion, the system 100 may be calibrated prior to usage. An A4 size sheet, whose dimensions are known in centimeters, is placed on the conveyor. The four corners of the A4 sheet are mapped in an image, thus giving two lengths and two breadths in terms of pixel length. Based on this, an administrator may calculate a conversion factor that is used to convert a diameter in pixels to the SI unit of length. In an example, the conversion factor may be saved in the dimension measurement data 230 after calibration. In another example, the administrator may manually input the conversion factor into the dimension measurement data 230.
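The calibration step described above can be sketched as follows. The known A4 dimensions (21.0 cm × 29.7 cm) are a standard fact; the function name, corner ordering, and averaging of the two axes are illustrative assumptions:

```python
A4_WIDTH_CM, A4_HEIGHT_CM = 21.0, 29.7  # standard A4 sheet dimensions

def cm_per_pixel(corners):
    """Conversion factor from the four mapped corners of an A4 sheet, given
    as (x, y) pixel coordinates in the order: top-left, top-right,
    bottom-right, bottom-left. The two pixel breadths and the two pixel
    lengths are averaged against the known A4 dimensions."""
    tl, tr, br, bl = corners

    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    width_px = (dist(tl, tr) + dist(bl, br)) / 2.0   # the two breadths
    height_px = (dist(tl, bl) + dist(tr, br)) / 2.0  # the two lengths
    # average the cm-per-pixel ratios obtained along both axes
    return (A4_WIDTH_CM / width_px + A4_HEIGHT_CM / height_px) / 2.0
```
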
[0054] Figure 4 illustrates an example method 400 for measuring dimensions of one or more objects. The order in which the method is described is not intended to be construed as a limitation, and any number of the described method blocks may be combined in any order to implement the method, or an alternative method. Furthermore, the method 400 may be implemented by a processing resource or computing device(s) through any suitable hardware, non-transitory machine-readable instructions, or a combination thereof.
[0055] It may also be understood that the method 400 may be performed by programmed computing devices, such as the video processing unit 108 and the image capturing unit 106, as depicted in Figures 1-3. Furthermore, the method 400 may be executed based on instructions stored in a non-transitory computer-readable medium, as will be readily understood. The non-transitory computer-readable medium may include, for example, digital memories; magnetic storage media, such as one or more magnetic disks and magnetic tapes; hard drives; or optically readable digital data storage media. The method 400 is described below with reference to the image capturing unit 106 and the video processing unit 108 as described above; other suitable systems for the execution of these methods may also be utilized. Additionally, implementation of this method is not limited to such examples.
[0056] Figure 4 illustrates the method 400 for measuring dimensions of one or more objects, according to an example implementation of the present subject matter. At block 402, a video of a predetermined section of a conveyor in a conveyor unit is captured. In an example, the conveyor unit comprises a conveyor to convey one or more objects from a first end to a second end of the conveyor along a longitudinal axis of the conveyor, at a predetermined speed, in a direction of movement of the conveyor. In an example, each of the one or more objects is to pass through the section of the conveyor when the video is being captured.
[0057] At block 404, the video is fragmented at a predetermined
frame rate to obtain a plurality of frames.
[0058] At block 406, each of the plurality of frames is processed to identify a reference window of a predetermined dimension and location within each frame. In an example, within each reference window a segment of at least one object from the one or more objects is captured.
[0059] At block 408, for each object from the one or more objects, one or more reference windows having a segment of the object are identified, to obtain a plurality of segments for the object.
[0060] At block 410, for each of the plurality of segments for the object, a corresponding segment width is determined based on a segment top edge width and a segment bottom edge width. In an example, the segment top edge width is the width of a top edge of the segment within the reference window, and the segment bottom edge width is the width of a bottom edge of the segment within the reference window.
[0061] At block 412, for each of the one or more objects, an
average width of the object is determined based on the segment width obtained for each segment of the object.
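The blocks 404 through 412 described above can be sketched end to end as follows. The helper callables (extract_window, find_segments, link_segments) and all names are illustrative assumptions standing in for the processing described in the preceding paragraphs:

```python
def measure_objects(frames, extract_window, find_segments, link_segments):
    """End-to-end sketch of blocks 404-412, assuming three helper callables:
    extract_window crops the reference window from a frame, find_segments
    returns (top_edge, bottom_edge) pairs for one window, and link_segments
    groups per-window segments into one list per object."""
    windows = [extract_window(frame) for frame in frames]       # block 406
    per_window = [find_segments(window) for window in windows]
    objects = link_segments(per_window)                         # block 408
    averages = []
    for segments in objects:                                    # blocks 410-412
        widths = [((t[1] - t[0]) + (b[1] - b[0])) / 2.0 for t, b in segments]
        averages.append(sum(widths) / len(widths))
    return averages
```
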
[0062] Although examples for the present subject matter have been
described in language specific to structural features and/or methods, it should be understood that the appended claims are not limited to the specific features or methods described. Rather, the specific features and methods are disclosed and explained as examples of the present subject matter.
I/We Claim:
1. A system comprising:
a conveyor unit comprising a conveyor to convey one or more objects from a first end to a second end of the conveyor along a longitudinal axis of the conveyor, in a direction of movement of the conveyor;
an image capturing unit positioned in a vicinity of the conveyor to capture a video of a predetermined section of the conveyor, wherein each of the one or more objects are to pass through the section of the conveyor when the video is being captured; and
a video processing unit comprising:
a video processing module to:
fragment the video at a predetermined frame rate to obtain a plurality of frames; and
process each of the plurality of frames to identify a reference window of a predetermined dimension and location within each frame, wherein within each reference window a segment of at least one object from the one or more objects is captured; and
a dimension measurement module to determine dimensions of each object, wherein for each object from the one or more objects, the dimension measurement module is to:
identify one or more reference windows having a segment of the object, to obtain a plurality of segments for the object;
for each of the plurality of segments for the object, determine a corresponding segment width based on a segment top edge width and a segment bottom edge width, wherein the segment top edge width is the width of a top
edge of the segment within the reference window, and wherein the segment bottom edge width is the width of a bottom edge of the segment within the reference window; and
obtain an average width of the object based on the segment width obtained for each segment of the object.
2. The system as claimed in claim 1, wherein the image capturing unit is positioned at a predetermined angle with respect to the conveyor, and wherein the image capturing unit is to capture the video at a predetermined imaging frame rate.
3. The system as claimed in claim 1, wherein the image capturing unit comprises an illumination device, such as an infrared (IR) device, to allow the image capturing unit to capture the video in all conditions of ambient lighting.
4. The system as claimed in claim 1, wherein the video processing module is to determine the predetermined frame rate based at least on a conveyor speed and the predetermined dimension and location of the reference window, wherein the conveyor speed and the predetermined dimension and location of the reference window are computed in pixels.
5. The system as claimed in claim 1, wherein the reference window laterally spans across the lateral cross-section of the conveyor, and wherein each reference window is to capture one or more segments, wherein each of the one or more segments corresponds to a different object.
6. The system as claimed in claim 5, wherein for each reference window, the dimension measurement module is to:
detect two or more edge lines within the reference window, using an edge detection algorithm, wherein the edge lines are probable edges of segments of the one or more objects;
identify one or more segments in the reference window based on the edge lines, wherein a segment is identified between two consecutive edge lines based on pixel difference between a region corresponding to the segment and a region corresponding to the conveyor in the reference window;
for each of the one or more segments within the reference window, determine a top right edge, a top left edge, bottom right edge, and a bottom left edge within the reference window;
for each of the one or more segments within the reference window, ascertain a top edge and a bottom edge based on the top right edge, the top left edge, the bottom right edge, and the bottom left edge; and
compare the bottom edges of each of the one or more segments within a reference window with the top edges of one or more segments within a consecutive reference window to identify the one or more reference windows having segments corresponding to the same object, wherein two consecutive reference windows refer to reference windows obtained from two sequentially consecutive frames, in accordance with the sequence in which the frames were captured in the video.
7. The system as claimed in claim 6, wherein for each of the one or
more segments within the reference window, the dimension measurement module is to:
determine the segment top edge width, as a distance between the top right edge and the top left edge;
determine the segment bottom edge width, as a distance between the bottom right edge and the bottom left edge; and
determine the segment width as an average of the segment bottom edge width and the segment top edge width.
8. The system as claimed in claim 7, wherein for each object, the
dimension measurement module is to:
determine, for each segment corresponding to the object, a segment length as a distance between the segment top edge and the segment bottom edge within the reference window incorporating the segment; and
determine an object length as a sum of the segment lengths obtained for each segment of the object.
9. The system as claimed in claim 1, wherein the conveyor unit is to convey the one or more objects at a predetermined speed in accordance to an imaging frame rate to allow an optimum recording of the video, and wherein the one or more objects are to be placed at the first end of the conveyor to move along in the direction of movement of the conveyor.
10. A method for measuring dimension of one or more objects, the method comprising:
capturing a video of a predetermined section of a conveyor of a conveyor unit, the conveyor unit comprising the conveyor to convey one or more objects from a first end to a second end of the conveyor along a longitudinal axis of the conveyor, at a predetermined speed, in a direction of movement of the conveyor, wherein each of the one or more objects is to pass through the section of the conveyor when the video is being captured;
fragmenting the video at a predetermined frame rate to obtain a plurality of frames;
processing each of the plurality of frames to identify a reference window of a predetermined dimension and location within each frame, wherein within each reference window a segment of at least one object from the one or more objects is captured;
for each object from the one or more objects, identifying one or more reference windows having a segment of the object, to obtain a plurality of segments for the object;
for each of the plurality of segments for the object, determining a corresponding segment width based on a segment top edge width and a segment bottom edge width, wherein the segment top edge width is the width of a top edge of the segment within the reference window, and wherein the segment bottom edge width is the width of a bottom edge of the segment within the reference window; and
for each of the one or more objects, obtaining an average width of the object based on the segment width obtained for each segment of the object.
11. The method as claimed in claim 10, further comprising determining the predetermined frame rate based at least on a conveyor speed and the predetermined dimension and location of the reference window, wherein the conveyor speed and the predetermined dimension and location of the reference window are computed in pixels.
12. The method as claimed in claim 10, wherein the reference window laterally spans across the lateral cross-section of the conveyor, and wherein each reference window is to capture one or more segments, wherein each of the one or more segments corresponds to a different object.
13. The method as claimed in claim 10, wherein the processing each of the plurality of frames comprises:
enhancing each of the plurality of frames using one or more image processing filters;
detecting two or more edge lines within the reference window, using an edge detection algorithm, wherein the edge lines are probable edges of segments of the one or more objects; and
identifying one or more segments in the reference window based on the edge lines, wherein a segment is identified between two consecutive edge lines based on a pixel difference between a region corresponding to the segment and a region corresponding to the conveyor in the reference window.
14. The method as claimed in claim 10, wherein the identifying one or
more reference windows having a segment of the object comprises:
for each of the one or more segments within the reference window, determining a top right edge, a top left edge, bottom right edge, and a bottom left edge within the reference window;
for each of the one or more segments within the reference window, ascertaining a top edge and a bottom edge based on the top right edge, the top left edge, the bottom right edge, and the bottom left edge; and
comparing the bottom edges of each of the one or more segments within a reference window with the top edges of one or more segments within a consecutive reference window to identify the one or more reference windows having segments corresponding to the same object, wherein two consecutive reference windows refer to reference windows obtained from two sequentially consecutive frames, in accordance with the sequence in which the frames were captured in the video.
15. The method as claimed in claim 14, wherein the determining a corresponding segment width further comprises:
determining the segment top edge width, as a distance between the top right edge and the top left edge;
determining the segment bottom edge width, as a distance between the bottom right edge and the bottom left edge; and
determining the segment width as an average of the segment bottom edge width and the segment top edge width.
16. The method as claimed in claim 14, wherein for each object, the method further comprises:
determining, for each segment corresponding to the object, a segment length as a distance between the segment top edge and the segment bottom edge within the reference window incorporating the segment; and
determining an object length as a sum of the segment lengths obtained for each segment of the object.
17. A system comprising:
a conveyor unit comprising a conveyor to convey an object from a first end to a second end of the conveyor along a longitudinal axis of the conveyor, in a direction of movement of the conveyor;
an image capturing unit positioned in a vicinity of the conveyor, at a predetermined angle with respect to the conveyor, to capture a video of a predetermined section of the conveyor, wherein the object is to pass through the section of the conveyor when the video is being captured; and
a video processing unit to:
fragment the video at a predetermined frame rate to obtain a plurality of frames;
process each of the plurality of frames to identify a reference window of a predetermined dimension and location within each frame, wherein within each reference window a segment of the object is captured;
for each reference window, determine a segment width based on a segment top edge width and a segment bottom edge width, wherein the segment top edge width is the width of a top edge of the segment within the reference window, and wherein the segment bottom edge width is the width of a bottom edge of the segment within the reference window; and
obtain an average width of the object based on the segment width obtained for each reference window.
| # | Name | Date |
|---|---|---|
| 1 | 201841046028-STATEMENT OF UNDERTAKING (FORM 3) [05-12-2018(online)].pdf | 2018-12-05 |
| 2 | 201841046028-FORM 1 [05-12-2018(online)].pdf | 2018-12-05 |
| 3 | 201841046028-DRAWINGS [05-12-2018(online)].pdf | 2018-12-05 |
| 4 | 201841046028-DECLARATION OF INVENTORSHIP (FORM 5) [05-12-2018(online)].pdf | 2018-12-05 |
| 5 | 201841046028-COMPLETE SPECIFICATION [05-12-2018(online)].pdf | 2018-12-05 |
| 6 | 201841046028-Proof of Right (MANDATORY) [11-02-2019(online)].pdf | 2019-02-11 |
| 7 | 201841046028-FORM-26 [11-02-2019(online)].pdf | 2019-02-11 |
| 8 | 201841046028-FORM 18 [17-08-2020(online)].pdf | 2020-08-17 |
| 9 | 201841046028-Response to office action [24-09-2021(online)].pdf | 2021-09-24 |
| 10 | 201841046028-FER.pdf | 2021-10-17 |
| 11 | 201841046028-OTHERS [18-02-2022(online)].pdf | 2022-02-18 |
| 12 | 201841046028-FER_SER_REPLY [18-02-2022(online)].pdf | 2022-02-18 |
| 13 | 201841046028-CLAIMS [18-02-2022(online)].pdf | 2022-02-18 |
| 14 | 201841046028-FORM-26 [04-03-2022(online)].pdf | 2022-03-04 |
| 15 | 201841046028-FORM-26 [05-12-2022(online)].pdf | 2022-12-05 |
| 16 | 201841046028-Correspondence_Form 26_09-12-2022.pdf | 2022-12-09 |
| 17 | 201841046028-US(14)-HearingNotice-(HearingDate-03-12-2025).pdf | 2025-11-14 |
| 1 | 201841046028E_17-08-2021.pdf | 2021-08-17 |