Specification
On-site work support system
Technical field
[0001]
The present invention relates to a technique such as an information processing system, and more particularly to a technique for supporting on-site work including agriculture.
Background technology
[0002]
At agricultural sites, for example, fields (such as greenhouses) where agricultural products (sometimes referred to as objects, work objects, etc.) are cultivated, workers perform work such as harvesting agricultural products such as tomatoes and shipping the harvested agricultural products. Social issues include a shortage of workers in fields such as agriculture, a lack of skills and experience among workers, and the burden and efficiency of the work. Therefore, mechanisms for supporting on-site work such as agriculture by using IT technology including AI (artificial intelligence) are being studied.
[0003]
In recent years, in addition to smartphones and smart watches, technologies such as head-mounted displays (HMDs) including smart glasses have been developed. Therefore, a mechanism for supporting work in the field by using a device (smart device) such as an HMD is also being studied.
[0004]
As an example of the prior art related to the above, Japanese Patent No. 6267841 (Patent Document 1) can be mentioned. Patent Document 1 describes that, as a wearable terminal display system or the like, the harvest time of agricultural products is displayed on the display board of the wearable terminal.
Prior art literature
Patent documents
[0005]
Patent Document 1: Japanese Patent No. 6267841
Outline of the invention
Problems to be solved by the invention
[0006]
In fields such as agriculture, if the workers who perform harvesting and shipping work are not skilled but inexperienced or low-skilled, it may be difficult for them to determine which crops should be harvested or shipped, and the work itself may be difficult. The work load on such workers is heavy, and the work efficiency is poor. There are also issues such as a shortage of workers, a shortage of skilled workers, and the cost of educating inexperienced workers. The present invention provides a mechanism capable of suitably supporting on-site work such as agriculture by using IT technology including AI and smart devices.
Means to solve problems
[0007]
A typical embodiment of the present invention has the following configuration. The on-site work support system of one embodiment is an on-site work support system for supporting on-site work including agriculture, and includes a worker terminal worn or carried by a worker. The worker terminal uses a camera to acquire first data including a first image of a work object, including an agricultural product, in the worker's field of view. A computer system that is the worker terminal or is connected to the worker terminal receives the first data as input, recognizes the state of the work object based on second data reflecting learning of second images of the work object, and acquires third data for supporting the work based on the recognition result. Based on the third data, the worker terminal produces output for supporting the work of the worker, including an output that informs the worker that the work object is present in the first image associated with the field of view.
The invention's effect
[0008]
According to a typical embodiment of the present invention, IT technology including AI and smart devices can be used to suitably support on-site work such as agriculture: for example, the cost of farming can be reduced, efficiency can be improved, and even inexperienced people can easily perform harvesting and shipping operations.
A brief description of the drawing
[0009]
FIG. 1 is a diagram showing a configuration of a field work support system according to the first embodiment of the present invention.
FIG. 2 is a diagram showing a functional block configuration of a worker terminal in the first embodiment.
FIG. 3 is a diagram showing an example of a system configuration including cooperation with a higher-level system in the first embodiment.
FIG. 4 is a diagram showing an outline of processing of a work support function in the first embodiment.
FIG. 5 is a diagram showing an outline of processing of a prediction support function in the first embodiment.
FIG. 6 is a diagram showing an outline of processing of a pest discrimination support function in the first embodiment.
FIG. 7 is a diagram showing a processing outline and the like of a work support function in a modified example of the first embodiment.
FIG. 8 is a diagram showing an example of the definition of maturity in the first embodiment.
FIG. 9 is a diagram showing an example of a tomato color sample in the first embodiment.
FIG. 10 is a diagram showing an example of a cucumber grade sample in the first embodiment.
FIG. 11 is a diagram showing a configuration related to an AI function in the first embodiment.
FIG. 12 is a diagram showing an example of a recognition result in the first embodiment.
FIG. 13 is a diagram showing a first example of a harvesting work support display in the first embodiment.
FIG. 14 is a diagram showing a second example of a harvesting work support display in the first embodiment.
FIG. 15 is a diagram showing a third example of a harvesting work support display in the first embodiment.
FIG. 16 is a diagram showing a fourth example of a harvesting work support display in the first embodiment.
FIG. 17 is a diagram showing a fifth example of a harvesting work support display in the first embodiment.
FIG. 18 is a diagram showing a sixth example of a harvesting work support display in the first embodiment.
FIG. 19 is a diagram showing an example of an image at the time of shipping work support in the first embodiment.
FIG. 20 is a diagram showing a first example of a shipping work support display according to the first embodiment.
FIG. 21 is a diagram showing a second example of a shipping work support display in the first embodiment.
FIG. 22 is a diagram showing a configuration example of a bird's-eye view of a field in the first embodiment.
FIG. 23 is a diagram showing a configuration example of a bird's-eye view of a field in a modified example of the first embodiment.
FIG. 24 is a diagram showing a configuration of a performance detection unit in the field work support system according to the second embodiment of the present invention.
FIG. 25 is a diagram showing a first image example in the second embodiment.
FIG. 26 is a diagram showing a second image example in the second embodiment.
FIG. 27 is a diagram showing a third image example in the second embodiment.
FIG. 28 is a diagram showing a fourth image example in the second embodiment.
FIG. 29 is a diagram showing a fifth image example in the second embodiment.
FIG. 30 is a diagram showing a configuration of a harvested determination unit in a modified example of the second embodiment.
FIG. 31 is a schematic diagram of a relative positional relationship in a modified example of the second embodiment.
Embodiment for carrying out the invention
[0010]
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
[0011]
(Embodiment 1)
The field work support system according to the first embodiment of the present invention will be described with reference to FIGS. 1 to 23.
[0012]
[On-site work support system]
The on-site work support system of the embodiment mainly includes a worker terminal 1 and a server 2 (a computer system including the server 2). The persons who use this system include a worker W1 and an instructor W2. The worker W1 uses a worker terminal 1, which is a smart device (in other words, a portable information device). The worker terminal 1 is a mobile terminal 1A or a wearable terminal 1B used by the worker W1 in the field. The worker W1 carries the mobile terminal 1A or wears the wearable terminal 1B. The worker W1 uses one or both of the mobile terminal 1A and the wearable terminal 1B. The mobile terminal 1A is, for example, a smartphone or a tablet terminal. The wearable terminal 1B is, for example, a smart watch or an HMD including smart glasses. The worker terminal 1 includes a work support function 41. The work support function 41 is a function that outputs work support information to the worker W1 in cooperation with the functions of the server 2.
[0013]
Worker W1 is a worker who performs harvesting work and shipping work of agricultural products as agricultural work in a field such as a farm field (vinyl greenhouse or the like). The worker W1 may be a foreign worker or a student. The worker W1 may also be an expert. The instructor W2 is a person who gives instructions related to agricultural work to the worker W1, for example, an employer at a farm or a person from JA (an agricultural cooperative). The work object 3 is a crop and the object of work support; specific examples are tomatoes and cucumbers.
[0014]
JA is engaged in farming guidance, management, support, purchasing, and other businesses. JA wants to grasp the harvesting and shipping conditions (actual results, forecasts, etc.) of each agricultural product of each farmer with as high accuracy as possible. However, doing so has conventionally required a great deal of labor and cost.
[0015]
This system uses the smart device (worker terminal 1) of the worker W1 to support the work by the worker W1. The worker W1 sees the work object 3 via the worker terminal 1. The worker terminal 1 captures an image including the work object 3. This system acquires an image of the worker terminal 1 and uses it for work support. This system determines the state of the work object 3 in real time through the image of the worker terminal 1 and provides work support such as work guidance. This system outputs work support information to the worker W1 through the worker terminal 1. The output is not limited to the image display, but includes audio output, vibration and light output.
[0016]
The system can be applied at least during harvesting and shipping operations, including the sorting of crops (especially fruits and vegetables). The sorting is, for example, the judgment and classification of the maturity, grade, actual size, shape, state of pest damage, etc. of a crop. The work support information to be output includes at least harvest support information (in other words, harvest target instruction information or harvest target discrimination support information, such as information on whether or not a crop visible to the worker W1 can or should be harvested).
[0017]
This system can be easily used by the worker W1 simply by carrying or wearing the worker terminal 1. Even if the worker W1 is not an expert but an inexperienced person (beginner, low-experienced person, etc.) with low work skills and experience, the work of harvesting and shipping can be easily performed according to the work support. With the above work support, this system can efficiently provide the skills, experience, know-how, etc. of skilled workers to inexperienced people and promote education. Inexperienced people can improve their work skills.
[0018]
This system captures the skills and experience of skilled workers involved in the work as learning by the AI function 20, and provides them to the worker W1 as work support. This system has an AI function 20 and the like in a computer system such as the server 2 that cooperates with the worker terminal 1. The AI function 20 is a function including image analysis, machine learning, and the like. As an example, the AI function 20 uses machine learning such as deep learning. The AI function 20 receives image data of the work object 3 as input and performs machine learning. The image may be an image of an actual crop taken by the camera of the worker terminal 1, or an image of a color sample or the like described later. The AI function 20 updates the learning model related to recognition by learning the input images.
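As a reference, the following is a minimal sketch, not the disclosed configuration itself, of how the learning step of the AI function 20 could be implemented with a general-purpose deep learning library (here PyTorch/torchvision). The folder layout, backbone model, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_MATURITY_LEVELS = 6  # maturity 1 (ripest) to 6 (least ripe), as in FIG. 8

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: data/train/maturity_1 ... data/train/maturity_6, one folder per label.
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Reuse a pretrained backbone and replace the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_MATURITY_LEVELS)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # number of epochs is arbitrary for this sketch
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "maturity_model.pt")  # the updated learning model
```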
[0019]
The AI function 20 recognizes the state such as the maturity of the crops shown in the image from the input image, and outputs the recognition result. The worker terminal 1 outputs work support information, for example, harvest work support information to the worker W1 by using the recognition result of the AI function. The worker terminal 1 displays, for example, an image showing an object to be harvested on the display surface 5. The worker W1 can easily perform work such as harvesting according to this work support information.
[0020]
A mobile terminal 1A such as a smartphone includes a display surface 5 such as a touch panel, a camera 6, and the like. The camera 6 includes an in-camera 61 and an out-camera 62. The worker W1 uses the camera 6 to take a picture of the work object 3. Work support information and the like are displayed on the display surface 5 of the mobile terminal 1A. Further, the mobile terminal 1A outputs the voice corresponding to the work support information from the speaker, and controls the vibration and the light corresponding to the work support information.
[0021]
A wearable terminal 1B such as an HMD includes a display surface 5, a camera 6, and the like. The camera 6 includes a camera for detecting the line of sight, a camera for a distance measuring sensor, and the like. In the case of an HMD or the like, the wearable terminal 1B is also accompanied by an operation device 9 that communicates with the main body. The worker W1 can also operate the operation device 9 by holding it in his or her hand. The display surface 5 may be a transmissive type or a non-transmissive type (VR type). The display surface 5 of the HMD corresponds to the range of the user's field of view, and an image by AR or the like (which may be described as a virtual image or the like) corresponding to the work support information is superimposed and displayed on the real image of the work object 3 in the real space.
[0022]
A computer system including a server 2, a DB, a PC, and the like of a business operator includes a management function 40 and an AI function 20. The management function 40 is a function for registering and managing information about sites such as a plurality of workers W1 and an instructor W2 who are users, and a plurality of fields. For example, when a farmer has a plurality of workers W1 and a plurality of fields, the management function 40 collectively manages the information. This system can be implemented, for example, in a server 2 such as a data center on a communication network or a cloud computing system. The worker terminal 1 communicates with and cooperates with the server 2 and the like via the communication network. The server 2 and the like manage, collect, and share the data of each user, and provide support to each site and each worker W1. This system shares processing between a worker terminal 1 and a computer system including a server 2 and the like. Various forms of sharing are possible, and the first embodiment shows an example. In the first embodiment, the computer system is in charge of the processing of the AI function 20 having a relatively large calculation processing load.
[0023]
This system not only supports harvesting and shipping work, but also provides other support described later, based on images through the worker terminal 1. As other support, this system provides support for detecting and coping with the state of pests in crops. This system provides pest discrimination support information, pesticide spraying judgment information, etc. as support information. The system also assists in predicting crop yields (or shipments) as another aid. This system provides forecast information such as expected harvest amount as support information.
[0024]
[Worker terminal 1]
FIG. 2 shows a functional block configuration of the worker terminal 1. In this example, the case where the worker terminal 1 is an HMD including smart glasses is shown. The worker terminal 1 includes a processor 101, a memory 102, a display device 50, a camera 6, a sensor 7, a communication device 80, a microphone 81, a speaker 82, an operation button 83, and a battery 84, and these are connected to each other via a bus or the like. The worker terminal 1 also includes an operation device 9, which communicates with the main body through the communication device 80.
[0025]
The processor 101 controls the whole and each part of the worker terminal 1. The worker terminal 1 has processing units configured by using hardware and software including the processor 101: a photographing unit 11, an object recognition unit 12, a target selection unit 13, a display control unit 14, a voice notification unit 15, a vibration notification unit 16, an optical notification unit 17, and the like.
[0026]
The memory 102 stores data and information handled by the processor 101. The memory 102 stores a control program 110, an application program 120, setting information 130, captured image data 140, virtual image data (in other words, work support data) 150, and the like. The control program 110 is a program that realizes the work support function 41 and the like. The application program 120 is various programs originally provided in the HMD. The setting information 130 is system setting information and user setting information. The captured image data 140 is data of an image captured by the camera 6. The virtual image data 150 is data for displaying an image of work support information on the display surface 5.
[0027]
The display device 50 is, for example, a projection type display device, and projects and displays an image on a lens surface constituting the display surface 5. The display device 50 is not limited to a projection type display device; in the case of the mobile terminal 1A, the display device 50 is a touch panel or the like. The camera 6 includes one or more cameras that capture the front direction of the field of view of the worker W1. The camera 6 includes a camera constituting a line-of-sight detection sensor and a camera constituting a distance measuring sensor. Examples of the sensor 7 include a known GPS receiver, a geomagnetic sensor, an acceleration sensor, a gyro sensor, and the like. The worker terminal 1 detects the position, direction, acceleration, etc. of the worker terminal 1 or the worker W1 by using the detection information of the sensor 7, and uses them for control.
[0028]
The communication device 80 corresponds to various communication interfaces, and is a part that performs wireless communication with the server 2, short-range communication with the operation device 9, and the like. The microphone 81 may include a plurality of microphones, and is a voice input device for inputting and recording voice. The speaker 82 may include a plurality of speakers, and is an audio output device that outputs audio. The operation button 83 accepts input operations by the worker W1. The battery 84 supplies electric power to each part based on its charge.
[0029]
[System Configuration Example and Each Function]
FIG. 3 shows a system configuration example including cooperation between the field work support system according to the first embodiment of FIG. 1 and higher-level systems, and an outline of the configuration of each function of the higher-level systems. The system configuration example of FIG. 3 has an instruction system 201, a prediction system 202, and a growth management system 203 in addition to the worker terminal 1 and the server 2. The configurations of the instruction system 201, the prediction system 202, and the growth management system 203 are not particularly limited. The configuration example of FIG. 3 shows a case where the instruction system 201, the prediction system 202, and the growth management system 203 are implemented in the computer system including the server 2 of the business operator. In other words, the computer system has the instruction system 201 and the like built in. Not limited to this, a higher-level system such as the instruction system 201 may be externally connected to the computer system including the server 2 by communication.
[0030]
The instruction system 201 includes a harvest instruction system and a shipping instruction system. The instruction system 201 receives an input of a work instruction from the instructor W2. The work instruction is, for example, an instruction accompanied by designation of maturity or grade as a harvesting work instruction. The instruction system 201 configures and gives a work instruction to the work support function 41 based on the work instruction from the instructor W2. The work instruction includes display target information for work support information. For example, when the maturity is specified, the display target information is information representing the harvest target corresponding to the specified maturity.
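As an illustration, the work instruction passed from the instruction system 201 to the work support function 41 could be represented by a simple data structure such as the following sketch; the field names are assumptions, not the disclosed format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkInstruction:
    work_type: str                             # "harvest" or "shipping"
    crop_type: str                             # e.g. "tomato"
    maturity_threshold: Optional[int] = None   # e.g. 3 -> "maturity 3 or more" (levels 1-3)
    grade: Optional[str] = None                # grade designation for shipping work, e.g. "A"

# Example corresponding to "the harvest target is a tomato with a maturity of 3 or more"
instruction = WorkInstruction(work_type="harvest", crop_type="tomato", maturity_threshold=3)
```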
[0031]
In this system, a person other than the worker W1 at the site can act as the instructor W2 to give a work instruction to the system (for example, the server 2 or the worker terminal 1), and work support information (work support information according to the work instruction, or work instruction information) can be generated and output. This work support information (or work instruction information) is, for example, information that directly specifies a harvest target. The work instruction from the higher-level instructor W2 is, for example, information that indirectly expresses the harvest target, for example, information that specifies the degree of maturity. If a certain maturity is specified, all crops corresponding to that maturity state are to be harvested. Such a work instruction can be similarly applied when a grade, an actual size, or the like is specified at the time of shipping work. Through the instruction system 201, this system can give a harvest target instruction or the like according to the work instruction from the instructor W2 as work support, so that the worker W1 does not hesitate, or hesitates less, about harvesting and the like at the site.
[0032]
More specifically, the work support function 41 includes a harvest work support function 41A and a shipping work support function 41B. The harvesting work support function 41A is a function for outputting support at the time of harvesting work. The shipping work support function 41B is a function that outputs support at the time of shipping work. The work support function 41 outputs work support information according to the work instruction information from the instruction system 201. For example, the work support function 41 displays a work support image on the display surface 5. This image is an image showing a work target and an image for sorting support. The work support function 41 also grasps the position of the worker terminal 1 and the like.
[0033]
The prediction system 202 is a system for predicting the expected harvest amount, the expected shipment amount, and the like. The forecasting system 202 includes a harvest forecasting system and a shipping forecasting system. The worker terminal 1 includes a prediction support function 42. The prediction support function 42 cooperates with the prediction system 202. The prediction support function 42 uses information such as the recognition result processed by the work support function 41, and transmits information such as the quantity of the object at the present time (for example, the quantity for each maturity) to the prediction system 202. The prediction system 202 predicts the expected harvest amount and the like using the information, and outputs the prediction information which is the prediction result. The prediction system 202 outputs the prediction information to a system such as the instructor W2 or JA. Further, the prediction system 202 may respond to the worker terminal 1 with the prediction information. The prediction support function 42 may output the prediction information as a harvest prediction on the display surface 5.
[0034]
The prediction support function 42 may also grasp the harvest results or the like by performing predetermined processing using information such as the quantity (recognized amount) of objects at the present time in the recognition result. This system can measure the actual results of harvesting and shipping as information, because the quantity of target objects becomes known as the crops are harvested and shipped with work support. This system may record the results information and output it to a higher-level system, the instructor W2, or the like.
[0035]
The growth management system 203 is a system for managing the growth of crops in the field. The growth management system 203 includes a pest determination system and a pesticide spray determination system. The worker terminal 1 includes a pest discrimination support function 43. The pest discrimination support function 43 cooperates with the growth management system 203. The pest discrimination support function 43 uses information such as the recognition result processed by the work support function 41, and transmits information on the state of pests of the object to the growth management system 203. The growth management system 203 uses the information to determine countermeasures such as pesticide spraying, fertilization, and removal, and outputs coping information. The coping information includes, for example, pesticide spraying instruction information. The growth management system 203 outputs the coping information to the instructor W2 or a system such as JA. The growth management system 203 may also respond to the worker terminal 1 with the coping information. The worker terminal 1 outputs work support information indicating the state of pests to the display surface 5. Further, the worker terminal 1 outputs work support information such as a pesticide spraying instruction to the display surface 5 based on the coping information from the growth management system 203. The machine learning model of the AI function 20 can collectively recognize (estimate, etc.) the position, maturity, grade, state of pests, and the like of an object. In other forms, the AI function 20 may use a separate machine learning model for each of maturity, pest state, and the like.
[0036]
[Work support function]
FIG. 4 shows an outline of processing of the work support function 41 in the configuration of cooperation between the worker terminal 1 and the server 2. Along with the work of the worker W1, the photographing unit 11 of the worker terminal 1 photographs the work object 3 by using the camera 6 and obtains an image (corresponding image data). Images include still images and moving images. The photographing unit 11 stores the image as captured image data 140. The shooting is photography with visible light. Further, at the time of shooting, the worker terminal 1 acquires not only the image but also information such as the date and time, the position, and the direction by using the sensor 7 and the like. The position is, for example, position information (for example, latitude and longitude) that can be acquired by positioning with the GPS receiver, but is not limited to this, and may be a position acquired by other positioning means. The direction corresponds to the front direction of the worker W1, the front direction of the worker terminal 1, the shooting direction of the camera 6, and the like, and can be measured by, for example, the geomagnetic sensor or the line-of-sight detection sensor.
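As an illustration, the data acquired by the photographing unit 11 together with each image could be organized as in the following sketch; the field names and values are assumptions for explanation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CapturedImage:
    image_id: str         # image ID attached to the captured image data 140
    taken_at: datetime    # date and time of shooting
    latitude: float       # position from GPS positioning (or other positioning means)
    longitude: float
    direction_deg: float  # front/shooting direction, e.g. from the geomagnetic sensor
    file_path: str        # where the image itself is stored

capture = CapturedImage(
    image_id="IMG-0001",
    taken_at=datetime(2019, 1, 5, 10, 30),
    latitude=35.0, longitude=135.0,   # placeholder coordinates
    direction_deg=90.0,
    file_path="captured/IMG-0001.jpg",
)
```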
[0037]
The object recognition unit 12 receives an image from the photographing unit 11, performs recognition processing on the state of the work object 3 using the AI function 20, and obtains a recognition result. The image data (input image) input to the object recognition unit 12 includes information such as an image ID, date and time, position, and direction. The object recognition unit 12 transmits the input image to the AI function 20 together with the request. The AI function 20 includes an image analysis and machine learning module configured in a computer system. Machine learning includes learning models and uses, for example, deep learning. The AI function 20 performs recognition processing from the input image and outputs the recognition result. The recognition result includes information on the state such as the position and maturity of each object in the image. The AI function 20 transmits a response including the recognition result to the object recognition unit 12 of the worker terminal 1. The object recognition unit 12 stores the recognition result and passes it to the object selection unit 13.
[0038]
The target selection unit 13 selects a part of information as a display target from the recognition result of the object recognition unit 12. In other words, the target selection unit 13 extracts, limits, narrows down, filters, or the like a part of the information from the recognition result. The target selection unit 13 selects whether to use all the recognition result information or a part of the information. The target selection unit 13 can make a selection using, for example, a maturity level or a grade. The target selection unit 13 can make a selection according to, for example, a user setting or a user instruction. Some of the information selected is, for example, harvest instruction information for only the harvest target, and the harvest instruction information includes, for example, an image showing the harvest target and not including an image representing the non-harvest target.
[0039]
When there is a work instruction from the instruction system 201, the target selection unit 13 makes a selection according to the work instruction. The target selection unit 13 selects from the recognition results so that, for example, only the target corresponding to the maturity of the harvest target can be notified based on the work instruction from the instruction system 201. The instruction system 201, for example, gives the harvest target information corresponding to the work instruction from the instructor W2 to the target selection unit 13. This work instruction (harvest target information) is, for example, harvest instruction information including designation of the maturity of the harvest target, for example, information such as "the harvest target is a tomato with a maturity of 3 or more". Another example is "harvest tomatoes with a maturity of 1". The maturity level or the like specified by the instructor W2 or the instruction system 201 is determined by an arbitrary mechanism. As an example, the maturity level is determined based on the shipping plan, order information, etc., in consideration of the transportation distance, etc.
[0040]
In the case of the mobile terminal 1A, the display control unit 14 draws an image (virtual image) showing the position and maturity of the selection result from the target selection unit 13 in the image obtained by the shooting unit 11. In the case of the wearable terminal 1B, the display control unit 14 superimposes and displays an image (virtual image) showing the position and maturity of the selection result on the display surface 5.
[0041]
Further, as processing units linked with the display control unit 14, the worker terminal 1 has the voice notification unit 15, the vibration notification unit 16, and the optical notification unit 17. The voice notification unit 15 outputs voice for work support from the speaker 82. The vibration notification unit 16 outputs vibration for work support. The optical notification unit 17 controls light emission for work support.
[0042]
[Prediction Support Function]
FIG. 5 shows an outline of processing of the prediction support function 42 in the configuration of cooperation between the worker terminal 1 and the server 2. In particular, the case of providing support for predicting the expected harvest amount will be described. The worker terminal 1 has a target field selection unit 21, a recognition result totaling unit 22, and a recognition result transmission unit 23, in addition to the above-mentioned photographing unit 11 and the like. The target field selection unit 21 selects a target field (corresponding area) for collecting data for predicting the expected harvest amount before the work. The target field selection unit 21 selects the target field by referring to the field information from the DB 250 of the server 2. The server 2 may have a target field selection unit 21. The DB 250 is a DB for collecting data for a plurality of farmers and a plurality of fields, and stores field information including a field list.
[0043]
The photographing unit 11 and the object recognition unit 12 have the same configuration as described above. The object recognition unit 12 cooperates with the AI function 20 based on an input image or the like, and receives a recognition result which is an output from the AI function 20. This recognition result includes information such as the position, quantity, and maturity of the objects in the image. The recognition result totaling unit 22 receives the recognition result from the object recognition unit 12, and totals the quantity of objects in the recognition result as a quantity for each maturity level. The recognition result transmission unit 23 receives the totaled result information from the recognition result totaling unit 22 and the target field information from the target field selection unit 21, creates transmission information combining them, and transmits the transmission information to the prediction system 202 of the server 2.
[0044]
This transmission information is information for prediction, for example, "January 5, 2019, field A, maturity 1: 10, maturity 2: 100", and contains the current date and time, the target field, and the quantity (recognized amount) of objects by recognized and measured maturity. The prediction system 202 receives this transmission information, accumulates it in the DB, and predicts the expected harvest amount at a future date and time in the target field. As the predicted time unit, for example, one day, one week, one month, etc. can be applied. The prediction result information includes the future date and time, the target field, and the expected harvest amount by maturity, such as "January 12, 2019, field A, maturity 1: 15, maturity 2: 150". The prediction system 202 predicts the expected harvest amount at the future date and time by using, for example, the quantities in the time-series history of transmission information, the current weather, the weather forecast, and the like. The mechanism and logic of this prediction processing are not particularly limited.
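As an illustration, the aggregation by the recognition result totaling unit 22 and the creation of the transmission information could look like the following sketch, which reproduces the example values above; the function and field names are assumptions.

```python
from collections import Counter
from datetime import date

def build_transmission_info(field_id, recognition_results, on_date):
    """recognition_results: list of dicts such as {"id": "001", "maturity": 1}."""
    counts = Counter(r["maturity"] for r in recognition_results)
    return {
        "date": on_date.isoformat(),        # current date of recognition
        "field": field_id,                  # target field selected by unit 21
        "quantity_by_maturity": dict(counts),
    }

# Reproduces "January 5, 2019, field A, maturity 1: 10, maturity 2: 100"
results = ([{"id": f"{i:03d}", "maturity": 1} for i in range(10)]
           + [{"id": f"{i:03d}", "maturity": 2} for i in range(10, 110)])
info = build_transmission_info("A", results, date(2019, 1, 5))
print(info)  # {'date': '2019-01-05', 'field': 'A', 'quantity_by_maturity': {1: 10, 2: 100}}
```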
[0045]
The prediction system 202 provides information on the prediction result to the host system, the instructor W2, the worker W1, or the like. For example, a person such as JA or a higher-level system can easily obtain forecast information such as an estimated harvest amount. As a result, JA and the like can reduce the conventional field survey man-hours and improve the prediction accuracy. JA, etc. can grasp the harvesting and shipping conditions (actual results, forecasts, etc.) of each agricultural product of each farmer with as high accuracy as possible.
[0046]
[Pest Discrimination Support Function]
FIG. 6 shows an outline of processing of the pest discrimination support function 43 in the configuration of cooperation between the worker terminal 1 and the server 2. The worker terminal 1 has a target field selection unit 21 and a recognition result transmission unit 23 in addition to the above-mentioned photographing unit 11 and the like. In this configuration, the object recognition unit 12 and the AI function 20 have a recognition function regarding the state of pests in addition to the above-mentioned functions. The object recognition unit 12 receives the image from the photographing unit 11 together with information such as the date and time, the position, and the direction. The object recognition unit 12 inputs such information into the AI function 20. The AI function 20 recognizes the presence or absence and type of pests on the object in addition to the position of the object, and returns the recognition result to the object recognition unit 12. The recognition result includes information such as the date and time, the target field, the position (the position of the worker W1 and the worker terminal 1), the presence or absence and type of pests, and the position of the pest associated with the position of the object in the image. It should be noted that pests do not always occur only on the fruits and vegetables, but can also occur on the stems, leaves, and the like. The recognition result also includes information in such cases. If a crop is overripe and rots, it is to be removed. Such a state can also be recognized by the AI function 20.
[0047]
The recognition result transmission unit 23 receives the recognition result from the object recognition unit 12, creates predetermined transmission information including the recognition result information regarding the target field, the position (the position of the worker W1 and the worker terminal 1), the state of pests, and the like, and transmits it to the growth management system 203 of the server 2. The growth management system 203 of the server 2 creates coping support information for coping with the state of the pests based on the transmission information, and transmits it to the instructor W2 or a higher-level system. The growth management system 203 of the server 2 may also respond to the worker terminal 1 with the coping support information. The countermeasure is not limited to spraying pesticides, and may be fertilization, removal, or the like. The coping support information includes pesticide spraying instruction information. The pesticide spraying instruction information includes information designating the pesticide spraying location, such as the position or area where the pesticide should be sprayed in the target field, and information specifying the type and amount of pesticide to be sprayed.
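As an illustration, the coping support information returned by the growth management system 203, including the pesticide spraying instruction information, could be organized as in the following sketch; all field names and values are assumptions for explanation.

```python
coping_info = {
    "field": "A",
    "pest_findings": [   # pest positions associated with objects in the image
        {"object_id": "007", "pest_type": "aphid", "position": {"x": 512, "y": 300}},
    ],
    "pesticide_spraying_instruction": {
        "target_location": "row 3, east side",  # where in the target field to spray
        "pesticide_type": "pesticide-X",        # hypothetical product name
        "amount_liters": 2.0,
    },
    "other_measures": ["fertilization", "removal of affected leaves"],
}
```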
[0048]
With this system, a person such as the instructor W2 or JA, or the worker W1, can grasp the points in the field where measures such as spraying pesticides are necessary, and omission of measures can be prevented. The instructor W2 or the like can limit the pesticide spraying locations and the spraying amount, work efficiently and at low cost, and improve the quality of agricultural products. The coping information may include an instruction for thinning out crops. For example, suppose that a stem bears multiple fruits, but some of them have pests. The growth management system 203 considers such a state, determines the crops to be thinned out (that is, removed), and gives them as coping information.
[0049]
[Work Support Function-Modified Example]
FIG. 7 shows the configuration of a modified example relating to the work support function 41 of FIG. 4. In this modification, the target selection unit 13 is provided in the server 2. The AI function 20 of the server 2 receives an image or the like from the object recognition unit 12, performs recognition processing, and gives the recognition result to the target selection unit 13. The target selection unit 13 selects all or part of the recognition result from the AI function 20 according to the work instruction information from the instruction system 201, and transmits the selection result to the object recognition unit 12. A similar effect can be obtained with such a configuration. User settings for the selection in the target selection unit 13 are also possible.
[0050]
[Work support information output method]
In the field work support system of the first embodiment, the smart device that is the worker terminal 1 can transmit work support information to the worker W1 by displaying it on the display surface 5 based on the recognition result of the AI function 20 described above. In this case, the system devises the display of work support so that the worker W1 can easily understand it and can easily judge whether to harvest or the like. The display control unit 14 of the worker terminal 1 controls the display mode when displaying an image representing a harvest object or the like on the display surface 5. For example, the worker terminal 1 can highlight only the harvest objects in the range of the display surface 5 corresponding to the field of view of the worker W1 by making a selection using the target selection unit 13. The worker terminal 1 or the server 2 uses the target selection unit 13 to select (filter, etc.) only a part of the work support information that is an output candidate. Further, when a harvest object exists in the vicinity of the worker W1 and the worker terminal 1 (for example, within a predetermined distance range), the worker terminal 1 can perform notification and work guidance by image display, audio output, vibration, light output, or the like.
[0051]
Worker W1 works on agricultural products, cultivated soil, utensils, etc. in a field such as a greenhouse. Examples of the work operation include the operation of holding and harvesting crops with both hands. Therefore, the worker W1 basically spends a lot of time using both hands, and does not want to let go of them for other work (for example, the work of operating IT equipment). This system displays a work support image on, for example, the display surface 5 of the mobile terminal 1A. However, in this case, the worker W1 needs to hold the mobile terminal 1A in his or her hand and look at the display surface 5. Therefore, the output method in this system is not limited to image display on a physical display surface 5, and other means are also provided. This system can superimpose and display work support information on the field of view of the worker W1 by using a mechanism such as AR of the wearable terminal 1B. In that case, the worker W1 can keep his or her hands free and can work easily. Further, this system is not limited to such display means, and can transmit work support information by using voice, vibration, light, or the like even when the worker W1's hands are not free.
[0052]
[Maturity]
For the crops to be the work object 3, the maturity, grade, size (actual size), etc. may be specified for each type. Grades are, in other words, quality classifications. FIG. 8 shows an example of the definition of maturity when the crop is a tomato. In this example, there are maturity levels 1 to 6. The maturity is highest at 1 and lowest at 6. Further, as an example of a maturity threshold according to a work instruction, a case where the maturity is 3 or more is shown. In this case, maturity levels 1 to 3 are equal to or higher than the threshold value and are, for example, harvest targets, and maturity levels 4 to 6 are less than the threshold value and are non-harvest targets. Such provisions for maturity (and the grades described below) can be set differently depending on the type of crop and the region (for example, prefecture). As the maturity to be harvested, any of maturity levels 1 to 6 may be targeted. The maturity to be harvested is determined, for example, in consideration of the transportation distance, use, needs, and the like. Items with a maturity of 5 or 6 may also be harvested.
[0053]
[Color sample]
FIG. 9 shows an image example of a color sample (in other words, a standard of maturity) when the crop is tomato. In this image example of the color sample, six example images of actual tomatoes having maturity 1 to 6 are arranged side by side on paper so that they can be compared. The AI function 20 performs machine learning for recognition by inputting images of such color samples (in other words, teacher data) in advance. This allows the learning model of the AI function 20 to recognize the maturity of tomatoes. In the target industry, color samples and pattern samples (standards for shape and the like) of objects such as agricultural products are prepared in advance, and they can be used for machine learning.
[0054]
[Grade]
FIG. 10 shows an example of the provision regarding the grade when the crop is a cucumber. There are similar regulations regarding the size (actual size). In this example, there are A, B, and C as the grades specified in the sample. Grade A corresponds to an individual whose shape is close to a straight line. Grade B corresponds to an individual having a certain degree of bending in shape (grade B1) or having a shoulder drop at the tip (grade B2). Grade C corresponds to the case where the shape is more distorted than grade B.
[0055]
The AI function 20 inputs an image of a sample of an individual for each grade in advance, and performs machine learning regarding the grade. As a result, the learning model of the AI function 20 can recognize the cucumber grade. Similarly, the AI function 20 can recognize the actual size of the crop based on machine learning. Further, the AI function 20 may calculate the actual size of the object using the size of the object in the image and the distance detected by the distance measuring sensor.
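As an illustration, the calculation of the actual size from the size in the image and the measured distance could follow a simple pinhole-camera approximation as in the following sketch; the focal length value is an assumed placeholder.

```python
def estimate_actual_size_mm(size_px, distance_mm, focal_length_px=1000.0):
    """Approximate real size from the object's size in the image and the distance
    reported by the distance measuring sensor, assuming a pinhole camera model."""
    return size_px * distance_mm / focal_length_px

# Example: an object 416 px wide seen at 0.5 m (500 mm) is about 208 mm wide
# if the focal length is 1000 px (an assumed value).
print(estimate_actual_size_mm(416, 500))  # 208.0
```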
[0056]
[AI function]
The AI function 20 is supplemented with reference to FIG. 11 and the like. FIG. 11 shows an explanatory diagram of the AI function 20. The worker terminal 1 acquires an image including the agricultural product of the work object 3 as a subject by the photographing unit 11. Image 111 shows an example of an input image; for example, three tomatoes and the like are shown. The image 111 is accompanied by information such as ID, date and time, position, and direction. The worker terminal 1 inputs the input image 111 to the object recognition unit 12. The object recognition unit 12 transmits data such as the input image 111 to the AI function 20. The object recognition unit 12 may include the AI function 20. The object recognition unit 12 or the AI function 20 receives data including an image as input, performs recognition processing on the object, and outputs data including the recognition result.
[0057]
The recognition result 112 in the output data includes information on the ID, type, maturity, and position or region of each object in the image 111. The type is an estimated value of the classification of the agricultural product, such as tomato or cucumber. The recognition result 112 in FIG. 11 shows, for example, information regarding the individual 111a. The ID of the object corresponding to the individual 111a is 001, the type is A (tomato), and the maturity is 3. The position of the object is L1. The position L1 of the object is represented by position coordinates or the like. The region corresponding to the position of the object may be represented by a rectangular or circular region. When the region is a circular region, it may be represented by center point coordinates (cx, cy) and a radius (r), and when it is an elliptical region, it may also be represented by an ellipticity or the like. When the region is a rectangular region, it may be represented by, for example, the coordinates {(x1, y1), (x2, y2)} of the upper left and lower right points, or by the center point coordinates, the width, the height, and the like.
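As an illustration, the recognition result 112 for one object could be organized as in the following sketch, using the rectangular region of the object OB1 described with FIG. 12; the JSON-like layout and the circle radius are assumptions for explanation.

```python
recognition_result = {
    "id": "001",
    "type": "A (tomato)",
    "maturity": 3,
    # rectangular region: upper-left point (x1, y1) and lower-right point (x2, y2)
    "region_rect": {"x1": 274, "y1": 70, "x2": 690, "y2": 448},
    # alternative circular region: center point (cx, cy) and radius r
    # (radius 189 = half the rectangle height, chosen only for illustration)
    "region_circle": {"cx": 482, "cy": 259, "r": 189},
}
```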
[0058]
In the case of pest discrimination support, the output recognition result includes information such as the position and type of the pest. The output data is not limited to the maturity level, and may include information such as the grade and actual size of the crop. The maturity and grade are obtained as a recognition result based on the above-mentioned sample.
[0059]
A detailed example of the recognition process of the object recognition unit 12 and the AI function 20 is as follows. The AI function 20 inputs an image (still image, moving image, streaming image of a camera, etc.). When the format of the input image data is a moving image, the AI function 20 sequentially cuts out one image frame from the moving image and inputs it as an image frame (that is, a still image) to be recognized. The AI function 20 recognizes the type, position, maturity, etc. of an object in the image of the input image frame, and outputs the recognition processing result.
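As an illustration, the frame-by-frame flow described above could be sketched as follows using OpenCV to cut image frames out of a moving image; recognize() is a placeholder standing in for the AI function 20, not its actual interface.

```python
import cv2

def recognize(frame):
    """Placeholder standing in for the AI function 20; returns recognized objects."""
    return []  # e.g. [{"id": "001", "type": "tomato", "maturity": 3, ...}]

cap = cv2.VideoCapture("field_video.mp4")  # moving image; a camera index such as 0 also works
while True:
    ok, frame = cap.read()                 # cut out one image frame (still image)
    if not ok:
        break
    results = recognize(frame)             # recognize type, position, maturity, etc.
    # ...the results would then go to the target selection unit 13 and display control unit 14
cap.release()
```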
[0060]
FIG. 12 shows a specific configuration example of the output of the recognition result of the AI function 20. The input image frame corresponding to the image 111 in FIG. 11 has, for example, 1280 pixels in the horizontal direction (x) and 720 pixels in the vertical direction (y). The position coordinates (x, y) = (0, 0) are set with the upper left pixel of the image frame as the origin. In the example of this image 111, for example, three individual tomatoes (fruits and vegetables), indicated as objects OB1, OB2, and OB3, are shown as objects. FIG. 12 shows, for example, the recognition result of the position regarding the object OB1. The position of the object OB1 is represented by a corresponding rectangular region. This rectangular region is represented by two points: the position coordinates (x1, y1) of the pixel p1 at the upper left vertex and the position coordinates (x2, y2) of the pixel p2 at the lower right vertex. For example, point p1 (x1, y1) = (274, 70) and point p2 (x2, y2) = (690, 448). Further, the recognition result of the type of the object OB1 is "A (tomato)", and the recognition result of the maturity is "3". When the rectangular region is defined by the center point, the width, and the height, for example, for the object OB1, the position coordinates of the center point = (482, 259), the width = 416, and the height = 378.
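The equivalence of the two rectangle representations above can be checked with simple arithmetic:

```python
x1, y1, x2, y2 = 274, 70, 690, 448
center = ((x1 + x2) // 2, (y1 + y2) // 2)   # (482, 259)
width, height = x2 - x1, y2 - y1            # 416, 378
print(center, width, height)
```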
[0061]
The object recognition unit 12 can recognize a plurality of objects in the image in the same manner, and can collectively output recognition result information of the plurality of objects. The format of the recognition result output can be, for example, Table 113. The table 113 stores the recognition result information of the object for each row, and has the object ID, type, maturity, position coordinates {x1, y1, x2, y2} and the like as columns.
[0062]
In the case of harvesting work support, the system receives work instructions from the instruction system 201, including the designation of the maturity of the harvested object. In this case, the target selection unit 13 selects a part of the information as the information of the harvest target from the recognition result according to the designation of the maturity level. This narrows down the data to be displayed. For example, when "maturity is 3 or more" is specified in the work instruction and a part of the recognition result data in the table 113 is selected by the target selection unit 13, the selection result is as shown in the table 114. As the selection result of the table 114, only the data of the tomato (object ID = 001) in the first row of the table 113 is extracted.
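As an illustration, the narrowing-down from table 113 to table 114 under the designation "maturity 3 or more" could be sketched as follows. The rows other than object ID = 001 are placeholders, and note that under the convention of FIG. 8 a smaller maturity number means a riper crop, so "3 or more" keeps numeric maturity values 1 to 3.

```python
table_113 = [
    {"id": "001", "type": "tomato", "maturity": 3, "rect": (274, 70, 690, 448)},
    {"id": "002", "type": "tomato", "maturity": 5, "rect": (0, 0, 0, 0)},  # placeholder row
    {"id": "003", "type": "tomato", "maturity": 6, "rect": (0, 0, 0, 0)},  # placeholder row
]

def select_harvest_targets(rows, maturity_threshold=3):
    # smaller number = riper, so "maturity 3 or more" keeps values 1..3
    return [r for r in rows if r["maturity"] <= maturity_threshold]

table_114 = select_harvest_targets(table_113)  # only the row with object ID = 001 remains
```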
[0063]
[Work support display (1)]
An output example when the output method of work support information is display will be described below. FIG. 13 shows an example of displaying the harvesting work support information on the display surface 5 of the worker terminal 1. This example corresponds to the example of image 111 in FIG. 11. Image 131 is an image taken at one location in the field, and shows a plurality of tomatoes as an example of agricultural products (including not only fruits and vegetables but also stems and leaves). The worker terminal 1 displays an image 132 representing a harvest object with respect to the objects in the image 131. The example of this image 132 is a red frame image surrounding the region of the object OB1. The display control unit 14 of the worker terminal 1 controls the frame image (for example, image 132) so that its color, shape, size, frame line thickness, etc. differ according to the maturity (for example, maturity 3) of the object (for example, the object OB1). Further, the worker terminal 1 controls the frame image so that its shape differs, such as a rectangle, a circle (including an ellipse), or a triangle, according to the shape of the object. The shape of the frame image preferably matches the contour of the object as much as possible, but is not limited to this, and may be a rectangle or a circle that includes the region of the object. In this example, the shape of the frame image is circular, corresponding to the object type being tomato. In image 132, no frame image is displayed for the tomatoes that are non-harvest objects. The worker W1 pays attention to the image 132 showing the harvest object, and can easily harvest the corresponding individual.
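As an illustration, the drawing of a frame image such as image 132 by the display control unit 14 could be sketched as follows with OpenCV; the colors, line thicknesses, and the blank stand-in image are assumptions.

```python
import cv2
import numpy as np

def draw_harvest_frame(frame, rect, maturity):
    """Draw a circular (elliptical) frame image over the object region."""
    x1, y1, x2, y2 = rect
    center = ((x1 + x2) // 2, (y1 + y2) // 2)
    axes = ((x2 - x1) // 2, (y2 - y1) // 2)
    # red and thick for a harvest target (maturity 3 or riper), yellow and thin otherwise
    color = (0, 0, 255) if maturity <= 3 else (0, 255, 255)   # BGR
    thickness = 3 if maturity <= 3 else 1
    cv2.ellipse(frame, center, axes, 0, 0, 360, color, thickness)
    return frame

img = np.zeros((720, 1280, 3), dtype=np.uint8)          # stand-in for image 131
img = draw_harvest_frame(img, (274, 70, 690, 448), 3)    # frame image for object OB1
cv2.imwrite("overlay.jpg", img)
```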
[0064]
[Work support display (2)]
FIG. 14 shows another display example when displaying an image of harvest work support information on the display surface 5. In this example, image 141 shows a plurality of individual tomatoes. The worker terminal 1 superimposes and displays, on the display surface 5, a frame image as an image representing each individual object based on the object information of the recognition result (corresponding selection result). As frame images, a solid circular frame image (for example, images g1 and g2) represents a maturity of 1 and an object to be harvested, and is displayed with, for example, a thick red frame line. A dashed circular frame image (for example, image g3) represents a maturity of 2 and a non-harvest object. A dashed circular frame image (for example, images g4 and g5) represents a maturity of 3 and a non-harvest object, and is displayed with, for example, a thin yellow frame line. Further, this example shows a case where the maturity number is also displayed for each object, but this display may be omitted.
[0065]
The display content of the harvesting work support information can be changed according to the instruction or setting of the user (the instructor W2 or the like). In this example, the support display is performed for objects (tomatoes) having a maturity of 3 or higher, and a frame image or the like is not displayed for objects having a maturity of 4 or lower (that is, maturity levels 4 to 6). Further, in this example, maturity levels 1, 2, and 3 are displayed separately, with different frame images according to the maturity. Further, in this example, when "harvest maturity 1" is received as a harvesting work instruction from the instructor W2, frame images (images g1 and g2) showing that the objects of maturity 1 are harvest objects are displayed. Further, in this example, an explanatory image indicating that the frame image (image g1 or the like) represents a harvest object having a maturity of 1 is also displayed on the display surface 5.
[0066]
The worker W1 can easily recognize which object should be harvested by seeing such a display of the harvesting work support. When the worker W1 pays attention to the solid line frame image in the field of view, the frame image represents the harvesting object (or the harvesting instruction), so that the object can be easily harvested.
[0067]
[Work support display (3)]
FIG. 15 shows another display example of the harvest work support information on the display surface 5. Image 151 of FIG. 15 shows an example of displaying information on all objects in the recognition result (corresponding selection result). In this image 151, a plurality of individual tomatoes (for example, individuals T1, T2, T3, etc.) are shown. In the example of this image 151, the cultivated soil (ridges), passages, support rods, wires, tapes, covers, etc. are shown in addition to the crops. This example shows a case where the image is taken diagonally to the left from the position of the worker W1 and the worker terminal 1 on a row-shaped passage. In this image, tomato fruits and stems are shown in the foreground, ridges and covers are shown in the lower rear part, the next passage is shown in the back, and the ridges and covers of the adjacent tomatoes are shown beyond it.
[0068]
The solid circular frame images (for example, frame images G1 and G2) are images showing harvest objects, and in this example represent objects having a maturity of 3 or more (maturity 1, 2, 3). The broken-line circular frame image (for example, frame image G3) is an image showing a non-harvest object, and in this example represents an object having a maturity of less than 3 (maturity 4, 5, 6). Further, this example shows a case where the thickness of the frame line of the frame image is changed according to the distance between the worker W1 and the object. The worker terminal 1 displays a thicker frame line for an object at a smaller distance. By seeing such an image of the harvesting work support, the worker W1 can easily recognize how many objects exist in the space of the field of view, the distribution of maturity, and the like.
[0069]
In this example, since all the information is displayed, frame images are displayed for both harvest targets and non-harvest targets in the image 151. The frame images are displayed in different colors, for example, depending on whether or not the object is a harvest target. For example, the harvest target is given a red solid-line frame image, and the non-harvest target is given a yellow broken-line frame image. The frame images may also have different colors depending on the maturity of the object. For example, when there are six maturity levels from 1 to 6, a color may be set in association with each maturity level, or colors may be set for the frame images according to a definition of predetermined maturity ranges. For example, when six colors are provided, red, orange, yellow, green, blue, and gray may be used. For example, when three colors corresponding to three ranges are provided, red, yellow, and green may be used. For example, using maturity thresholds, red may be used as the first range for maturity 1 and 2, yellow as the second range for maturity 3 and 4, and green as the third range for maturity 5 and 6. Worker W1 harvests the individuals with red frame images. Worker W1 can recognize that the individuals with yellow frame images should not be harvested due to lack of maturity.
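As a non-limiting illustration of the range-based color assignment described above, the following sketch (in Python, with hypothetical names) maps a maturity level of 1 to 6 to a frame color using the three-range thresholds of this example (levels 1 and 2 red, 3 and 4 yellow, 5 and 6 green); a six-color table is also shown.

    # Illustrative sketch only: names and structure are assumptions, not part of the embodiment.

    # Six-color variant: one color per maturity level (level 1 = ripest).
    COLOR_PER_LEVEL = {1: "red", 2: "orange", 3: "yellow", 4: "green", 5: "blue", 6: "gray"}

    def frame_color_by_range(maturity_level: int) -> str:
        """Three-range variant using maturity thresholds, as in the example above."""
        if maturity_level <= 2:      # first range: maturity 1 and 2
            return "red"             # harvest side of the scale
        if maturity_level <= 4:      # second range: maturity 3 and 4
            return "yellow"
        return "green"               # third range: maturity 5 and 6

    # Example: an object recognized with maturity 3 would get a yellow frame.
    assert frame_color_by_range(3) == "yellow"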
[0070]
The size of the frame of the frame image is displayed according to the size of the object (vegetables) in the image. A frame image having a small size in the image corresponds to a fruit that is small in growth size or is located far from the position of the worker W1. The worker W1 can perform the harvesting work by paying attention to the objects shown in the frame images in order of decreasing size. The worker terminal 1 may make a selection so as not to display an object, or its frame image, whose size in the image is less than a threshold value. For example, in image 151, the plurality of fruits shown in the back are fruits facing the adjacent passage, and they cannot be immediately harvested from the passage where the worker is currently present. In this case, the frame image may not be displayed for such a fruit (corresponding object) even if it is a harvest target. As a result, the amount of displayed information can be reduced, and the cognitive load on the worker W1 can be reduced.
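A minimal sketch of the size-based control described in this paragraph, assuming each recognized object carries an apparent size in pixels (the field names and the threshold value are illustrative only):

    # Illustrative only: the object record fields and the pixel threshold are assumptions.
    MIN_FRAME_SIZE_PX = 40  # objects smaller than this in the image get no frame

    def frames_to_display(objects):
        """Drop objects whose apparent size is below the threshold and
        order the rest by size (largest first) for the worker's attention."""
        visible = [o for o in objects if o["size_px"] >= MIN_FRAME_SIZE_PX]
        return sorted(visible, key=lambda o: o["size_px"], reverse=True)

    # Example: a distant fruit of apparent size 25 px would not be framed.
    objs = [{"id": "T1", "size_px": 120}, {"id": "T2", "size_px": 25}]
    print([o["id"] for o in frames_to_display(objs)])  # ['T1']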
[0071]
As another display control example, the color of the frame image may be matched to the color of each maturity based on the color sample.
[0072]
[Work support display (4)]
FIG. 16 shows another display example of the harvest work support information on the display surface 5. This image 161 shows an example of displaying the information of only a part of the objects, selected by the target selection unit 13 from the recognition result. In this image 161, a solid circular red frame image (for example, frame images G1 and G2) is displayed only on the harvesting objects. The worker W1 can easily perform the harvesting work by paying attention to the harvesting objects shown in these frame images.
[0073]
As will be described later, when an object has a pest, a predetermined frame image showing the state of the pest is also displayed. For example, when the individual Tx1 has a pest, a frame image Gx1 showing the state of the pest is displayed. The frame image Gx1 is, for example, a purple one-dot chain (dash-dot) frame line.
[0074]
Further, the worker terminal 1 may change the appearance of the frame image of an object in the image according to the distance from the positions of the worker W1 and the worker terminal 1 to the position of the object. The worker terminal 1 can detect the distance from the viewpoint of the worker W1 and the position of the worker terminal 1 to the position of the object by using, for example, image analysis processing or a distance measuring sensor. The worker terminal 1 controls the color, shape, size, and the like of the frame image of the object by using this distance information.
[0075]
In the examples of the image 151 and the image 161, the thickness of the frame line of the frame image is made different according to the distance. For example, in the worker terminal 1, the smaller the distance, that is, the closer the object is to the worker W1, the thicker and more conspicuous the frame line becomes. As a result, the worker W1 can harvest or make judgments by paying attention to the objects closest to him or her, in order.
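The distance-dependent frame-line thickness can be expressed, for instance, as a simple monotonically decreasing mapping; the concrete band boundaries and thickness values below are assumptions for illustration only:

    # Illustrative sketch: thickness values and distance bands are assumptions.
    def frame_line_thickness(distance_m: float) -> int:
        """Closer objects get thicker (more conspicuous) frame lines."""
        if distance_m < 0.5:
            return 6   # within arm's reach: thickest line
        if distance_m < 1.5:
            return 4
        if distance_m < 3.0:
            return 2
        return 1       # far objects: thinnest line

    print(frame_line_thickness(0.4), frame_line_thickness(2.0))  # 6 2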
[0076]
[Work support display (5)]
FIG. 17 shows another display example of the harvest work support information on the display surface 5. In this image 171, the image showing a harvesting object is not a frame image but an image in which an arrow line connects the object to a number. In the image 171, the image of the number is displayed at a position some distance from the position of the object and not overlapping other objects. The position where the image of the number is displayed may be the edge of the display surface 5. The numbered image in this example uses a circular frame image, and the color of the circular frame line and the thickness of the line may be changed, for example, according to the degree of maturity. Further, the number is not limited to the maturity level, and may be a number indicating the order of harvesting. For example, objects may be numbered 1, 2, ... in order of increasing distance from the worker W1.
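Where the numbers indicate the harvesting order, one possible realization is to sort the harvest targets by their distance from the worker W1 and label them 1, 2, ...; the sketch below assumes each object record carries a distance value (names are illustrative):

    # Illustrative only: field names are assumptions.
    def harvest_order_numbers(harvest_targets):
        """Assign 1, 2, ... to harvest targets in order of increasing distance."""
        ordered = sorted(harvest_targets, key=lambda o: o["distance_m"])
        return {o["id"]: n for n, o in enumerate(ordered, start=1)}

    targets = [{"id": "T3", "distance_m": 1.8}, {"id": "T1", "distance_m": 0.6}]
    print(harvest_order_numbers(targets))  # {'T1': 1, 'T3': 2}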
[0077]
[Work support display (6)]
FIG. 18 shows another display example of the harvest work support information on the display surface 5. The worker terminal 1 uses the line-of-sight information of the line-of-sight detection result of the line-of-sight detection sensor to display information only for a partial range (for example, range 182) centered on the point ahead in the line-of-sight direction (for example, point E1). In this example, in the range 182, frame images showing the harvest targets and the non-harvest targets are displayed. An image representing the range 182 itself may or may not be displayed. The range 182 is not limited to a rectangle; an ellipse or the like is also possible.
[0078]
The first example in the case of performing this display control is as follows. The worker terminal 1 detects the line-of-sight direction of the worker W1 using the line-of-sight detection sensor. The worker terminal 1 calculates the position of a point in the image ahead in the detected line-of-sight direction. The worker terminal 1 sets a range of a predetermined size around the position of the point. The worker terminal 1 sets the range (for example, the range 182) as a detection area related to the recognition process. The object recognition unit 12 and the AI function 20 perform recognition processing on the detection area of the image data. In this case, since the data to be processed is reduced, the calculation amount of the recognition process can be reduced.
[0079]
The second example is as follows. The worker terminal 1 sets a range of a predetermined size (for example, a range 182) around the position of a point in the image ahead in the detected line-of-sight direction. The object recognition unit 12 and the AI function 20 perform recognition processing for the entire area of the image data. The target selection unit 13 of the worker terminal 1 displays only a part of the information corresponding to the range 182 from the recognition result information.
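Both control examples can be sketched with two small helpers: one clips a fixed-size detection area around the gaze point (first example, reducing the data passed to recognition), and one filters recognition results down to those inside the range (second example). The range size and record fields below are assumptions for illustration.

    # Illustrative sketch; the range size and record fields are assumptions.
    def gaze_range(gaze_x, gaze_y, img_w, img_h, half_w=200, half_h=150):
        """Rectangle of predetermined size centered on the gaze point, clamped to the image."""
        x0 = max(0, gaze_x - half_w); y0 = max(0, gaze_y - half_h)
        x1 = min(img_w, gaze_x + half_w); y1 = min(img_h, gaze_y + half_h)
        return x0, y0, x1, y1

    # First example: crop the image to the range before recognition (less data to process).
    def crop_detection_area(image, rect):
        x0, y0, x1, y1 = rect
        return image[y0:y1, x0:x1]          # e.g. a NumPy array slice

    # Second example: recognize the whole image, then keep only results inside the range.
    def results_in_range(results, rect):
        x0, y0, x1, y1 = rect
        return [r for r in results if x0 <= r["cx"] < x1 and y0 <= r["cy"] < y1]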
[0080]
[In the case of shipping work]
FIG. 19 shows an explanatory diagram regarding shipping work support. Image 191 of FIG. 19 shows an example of an image in which a plurality of individual cucumbers harvested by the worker W1 are arranged on a table surface during shipping work. In this example, the cucumber individuals K1 to K5 are contained in this image 191. At the time of shipping work, the worker W1 sorts these individuals from the viewpoint of grade, actual size, and the like, and ships them in a box or package for each sorted group. The worker W1 uses the shipping work support function particularly in the shipping work including this sorting. The worker terminal 1 inputs such an image 191 and recognizes it with the object recognition unit 12 and the AI function 20. The AI function 20 takes this image 191 as an input and outputs information on the grade and the actual size of each object as a recognition result. The display control unit 14 of the worker terminal 1 displays grade and actual size information for each object in the image as shipping work support information, based on the recognition result (corresponding selection result) of the object recognition unit 12.
[0081]
FIG. 20 shows an example in which shipping work support information is superimposed and displayed on the image 191 of FIG. 19 on the display surface 5. In this example, as shipping work support information, an image 192 showing the grade and the actual size is displayed at a position in the vicinity of the object in the image 191, for example, a lower position. This image 192 is, for example, a character image. In this example, for the five individuals K1 to K5, the character images are "S", "M", "L", "B", and "C" in order from the left. In this example, for each grade A product, a character image showing the actual size is displayed. For example, S stands for small, M stands for middle, and L stands for large. These indications can be selected as grade only, actual size only, or both. By seeing such shipping work support information, the worker W1 can easily recognize the grade and the actual size of each individual, and can easily perform the shipping work including the selection of the individual for each grade and the actual size.
[0082]
FIG. 21 shows another display example relating to shipping work support. In this example, the image of the shipping work support information shows a case where it is a frame image for each individual. For example, in the individuals K1 to K3, a rectangular red frame image is displayed for each individual. This frame image represents grade A. Similar to the explanation of the work instruction at the time of the harvesting work support described above, the work support output according to the shipping work instruction from the instructor W2 is also possible at the time of shipping work support. For example, the instructor W2 instructs the shipment (or selection) of the grade A as a shipping work instruction. The target selection unit 13 of the worker terminal 1 selects a part of the information corresponding to the grade A from the recognition result according to the work instruction, and displays the corresponding shipping work support information on the display surface 5. In this example, a frame image showing a shipping target (or a shipping instruction) is displayed for the individuals K1 to K3 corresponding to the grade A. An explanatory image indicating that the frame image is a shipping target (grade A) is also displayed on the display surface 5.
[0083]
As another display control example, when individuals of a plurality of objects to be harvested or shipped are adjacent to each other in the image, a frame image or the like in which the plurality of objects are grouped into one may be displayed.
[0084]
At the time of shipping work, the harvest record or the shipping record can be counted based on the recognition result obtained by inputting an image such as that shown in FIG. 19. The grasped record information can also be used by higher-level systems and the like.
[0085]
[Work support-voice]
The following is an example of output when the output method of work support information is voice. Based on the information of the selection result, the worker terminal 1 outputs at least a predetermined sound (for example, "ping-pong") indicating that there is a harvesting object in the image. Further, the worker terminal 1 may directly output voices such as "it is a harvest target" and "there is a harvest target" by using a voice synthesis function. Further, the worker terminal 1 may control the output sound to be different depending on the position of the harvesting object in the image, or on the direction or distance from the worker W1. For example, the image is roughly divided into several areas, such as near the center, the left side, the right side, the upper side, and the lower side. The worker terminal 1 may change the sound according to which area of the image the object appears in (this corresponds to the positional and directional relationship between the worker W1 and the worker terminal 1 and the object). Further, for example, when the position of the object in the image approaches the center, a sound such as "ping-pong" may be output, and when it is far from the center, a sound such as "boo" may be output. Alternatively, control that changes the volume may be used, such as raising the volume when approaching the object. Further, the worker terminal 1 may notify the position of the harvesting object or the like according to the positional relationship between the worker W1 and the object. For example, it may output a voice such as "There is an object to be harvested on the right side".
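A possible realization of the position-dependent sound output is sketched below: the image is divided into coarse areas, a different cue is chosen depending on how close the object is to the image center, and the volume is raised as the object gets closer. The cue strings, numeric bands, and volume law are assumptions; actual playback would use the speaker of the terminal.

    # Illustrative only: cue strings, bands, and volume law are assumptions.
    def audio_cue(obj_cx, obj_cy, img_w, img_h, distance_m):
        """Pick a sound cue and volume from the object's position in the image."""
        # Normalized offset of the object from the image center (0 = center, 1 = edge).
        off_x = abs(obj_cx - img_w / 2) / (img_w / 2)
        off_y = abs(obj_cy - img_h / 2) / (img_h / 2)
        offset = max(off_x, off_y)
        cue = "ping-pong" if offset < 0.3 else "boo"   # near center vs. far from center
        side = "right" if obj_cx > img_w / 2 else "left"
        volume = max(0.2, min(1.0, 1.0 / (1.0 + distance_m)))  # louder when closer
        return cue, side, volume

    print(audio_cue(900, 400, 1280, 720, 0.8))  # ('boo', 'right', 0.555...)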
[0086]
Further, the worker terminal 1 may be controlled so that the audio output from the speaker 82 (particularly a multi-speaker or a stereo speaker including a plurality of speakers) differs depending on the direction of the object as seen from the worker W1. For example, when the object is on the right side of the worker W1, the sound is heard from the right speaker of the multi-speaker, and when the object is on the left side, the sound is heard from the left speaker.
[0087]
When the output is voice only and there is no display, as a control example, a voice indicating whether or not the crop in the direction of the camera 6 of the worker terminal 1 (generally corresponding to the direction in which the head of the worker W1 is facing) is a harvest target may be output. Alternatively, the worker terminal 1 may determine from the image of the camera 6 that the worker W1 has reached for the object, has picked up the object, or has approached the object, and at that time output a voice indicating the harvest target.
[0088]
[Work support-vibration, light]
The following are examples of output when the output method of work support information is vibration or light. Based on the information of the selection result, the worker terminal 1 outputs at least a predetermined vibration or a predetermined light indicating that there is a harvesting object in the image. The worker terminal 1 may be controlled so that the type and intensity of vibration, the type and intensity of light emission, and the like differ depending on the position or direction of the harvesting object or the distance from the worker. The worker terminal 1 may change the state of vibration or the state of light emission depending on, for example, whether the worker W1 approaches or moves away from the object. The type of vibration or light emission may be defined by, for example, an on/off pattern. In the case of light emission, distinctions can be made by, for example, the duration of light emission, blinking, the amount of light, and the like. When the field is dark, it is also effective to convey information by light.
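The on/off pattern mentioned here could, for example, be generated as a short list of on-durations and off-durations whose rhythm speeds up as the worker approaches the object; the concrete durations below are assumptions for illustration.

    # Illustrative only: the pattern shape and durations are assumptions.
    def vibration_pattern(distance_m, pulses=3):
        """Return (on_seconds, off_seconds) pairs; pulses become faster when closer."""
        period = min(1.0, 0.2 + 0.2 * distance_m)   # closer object -> faster rhythm
        return [(period * 0.5, period * 0.5)] * pulses

    print(vibration_pattern(0.3))  # fast pulses near the object
    print(vibration_pattern(3.0))  # slower pulses far from the object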
[0089]
The display, voice, vibration, light, and other output methods may be used in combination. In addition, when the field is equipped with a speaker device or a light emitting device, the worker terminal 1 may link to that device so that voice, vibration, light, or the like is output from the device instead of from the worker terminal 1. As the light emitting device, a laser pointer device or the like may be used. A laser pointer device may be installed in the field. The light emitting device may emit light such as a laser beam toward, for example, a harvesting object so as to point to the object. The worker W1 can recognize the harvesting object or the like from the light.
[0090]
[Field]
FIG. 22 shows the configuration of a bird's-eye view map of a certain field (field A). Regarding the above-mentioned pest discrimination support function 43, the worker terminal 1 may display, on the display surface 5, an image showing an area where pests are present or an image showing an area to be sprayed with pesticides. In particular, the worker terminal 1 may display a simple map of the field as shown in FIG. 22 on the display surface 5, and display on this map an image showing the area where pests are present and the area where pesticides should be sprayed. Region 221 shows an example of a region of a pesticide spraying location corresponding to the region of the object in which a pest was detected. The information of this region 221 is included in the coping support information of FIG. 6 described above.
[0091]
Further, the positions w1, w2, and w3 indicate an example of the position where the worker W1 is located. The directions d1, d2, and d3 indicate an example of the shooting direction of the camera 6. Work support is possible in any position and direction.
[0092]
Further, as another method of using this system, first, the worker W1 takes a series of photographs of the agricultural products of each ridge along each passage of the field with the camera 6. The worker terminal 1 and the server 2 may then collectively recognize and process the acquired data, and may create, from the result, a map showing the state of the crops in the field. On this map, the positions and maturity of the harvesting objects, the state of pests, and the like are described.
[0093]
[Effects, etc.]
As described above, according to the on-site work support system of the first embodiment, the following effects can be obtained. First, regarding the harvesting and shipping work support function 41, it is possible to provide the worker W1 with specific and easy-to-understand harvesting instructions and shipping instructions. Therefore, even if the worker W1 is an inexperienced person, it is possible to easily make a judgment such as selection at the time of harvesting or shipping at a level close to the skill and experience of a skilled person. The worker W1 can easily perform the work according to the output of the work support, can reduce the work load, and can work efficiently. In addition, the work support output can also contribute to improving the skills of inexperienced people. The farmer can perform work such as harvesting even if the worker W1 is an inexperienced person, and can realize cost reduction and efficiency improvement of farming.
[0094]
With respect to the prediction support function 42, cost reduction, prediction accuracy improvement, and the like can be realized. Conventionally, an organization such as JA has had to visit each farmer to hear about the predicted yield and the like, which requires a great deal of labor and cost. According to the prediction support function 42, prediction information such as an estimated harvest amount can be provided based on data collection through communication and processing. Therefore, the labor and cost that have conventionally been required can be reduced. According to the prediction support function 42, data can be collected in a short time, and prediction can be made based on grasping the growth situation of the objects in the actual field, so that the prediction accuracy can be improved.
[0095]
With respect to the pest discrimination support function 43, cost reduction, prevention of work omission, and the like can be realized. According to the pest discrimination support function 43, coping support such as pesticide spraying can be provided efficiently by combining the recognition result regarding the state of pests with the position information. In accordance with the support, farmers can carry out the pesticide spraying work limited to the designated area in the field and with the designated spray amount. Farmers can prevent omissions in pesticide spraying, reduce pesticide costs and pesticide spraying man-hours, and realize cost reduction and efficiency improvement.
[0096]
The maturity and grade of crops that are suitable for harvesting and shipping differ depending on the transportation distance, the number of shipping days, and the like. According to this system, work support output according to the degree of maturity or the like can be performed according to the work instruction of the instructor W2. In the case of agriculture, the state of the field changes daily depending on the type and individual of the crop. This system can support work in consideration of the characteristics of such crops.
[0097]
In the case of the system of the prior art example as in Patent Document 1, it is necessary for the worker at the site to determine whether or not the crop should be harvested. Therefore, in the case of an inexperienced person, it may be difficult to judge the harvest or the like. According to the system of the first embodiment, even if the worker W1 is an inexperienced person, it can be easily determined which crop should be harvested.
[0098]
[Modification Example]
The following is also possible as a modification of the system of the first embodiment. The worker terminal 1 may use the AI function 20 to determine which objects face the passage where the worker W1 is currently present, and select the work support information to be displayed on the display surface 5. As a result, frame images and the like of objects that can be harvested from the passage where the worker W1 is currently present are displayed on the display surface 5. On the display surface 5, for example, a frame image of an object to be harvested from the adjacent passage is not displayed. Therefore, the worker W1 can efficiently perform the harvesting work.
[0099]
FIG. 23 shows the configuration of a map of a bird's-eye view of a certain field in a modified example. In a certain field, camera devices 500 (for example, four) are installed at predetermined fixed positions. Further, each camera device 500 is also provided with a speaker device and the like (not shown). The worker terminal 1 of the worker W1 cooperates with each camera device 500 and each speaker device. Each camera device 500 captures a predetermined direction, acquires an image, and transmits the image to the worker terminal 1 or the server 2. The worker terminal 1 or the server 2 recognizes the object using the image in the same manner as described above, and outputs work support information based on the recognition result. For example, the harvesting work support information is displayed on the display surface 5 of the worker terminal 1 of the worker W1 at the position w1.
[0100]
The harvesting work support information in this case may be information that notifies or guides the worker W1, at the position w1 where the worker W1 is located, to the positions where the harvesting objects are located. The positions indicated by the star marks indicate the positions where harvesting objects are located. The worker terminal 1 may display an image showing the position of each harvesting object on a map of the field such as this. As described above, for the output such as notification of the position of the harvesting object at that time, each means such as display, voice, vibration, and light can be used. When a plurality of devices (for example, a plurality of speaker devices) are installed in the field, the positions of the plurality of devices can be distinguished and the device used for output can be controlled so as to convey the position of the object to the worker W1. As described above, a configuration in which the photographing means (camera 6 or camera device 500) and the recognition means (AI function 20) are separated from each other is also possible.
[0101]
The camera device 500 may take a picture including the worker W1 as a subject. The worker terminal 1 may use the camera 6 or the camera device 500 to detect the crops and the hands of the worker W1 in the image, determine an operation such as harvesting, and measure the results. For example, the worker W1 reaches for an object recognized in the image and harvests it. In that case, it can be estimated that the harvest was made when the object is no longer shown in the image.
[0102]
The system of the first embodiment is configured to include the server 2, but the system is not limited to this, and may be realized only by the worker terminal 1. The means for detecting the position of the worker W1 is not limited to the GPS receiver, and a beacon, an RFID tag, an indoor positioning system, or the like may be applied. As the camera for acquiring an image, a camera provided in a helmet, work clothes, or the like worn by the worker W1 may be used. The worker terminal 1 may be provided with functions such as dustproof, waterproof, heat resistant, and heat radiating for agricultural work. It is preferable that the image display and audio output have a universal design that does not depend on the language of each country. The AI function 20 may perform recognition processing in consideration of the light condition of the site (for example, weather, morning, day and night, etc.). Among the systems of the first embodiment, a system having only a part of the work support function 41, the prediction support function 42, or the pest discrimination support function 43 is also possible. It may be a form in which only some functions can be used according to the user setting.
[0103]
(Embodiment 2)
The field work support system according to the second embodiment of the present invention will be described with reference to FIGS. 24 to 31. The basic configuration of the second embodiment is the same as that of the first embodiment, and the components of the second embodiment that differ from the first embodiment will be described below. In the second embodiment, a function (which may be described as a performance detection function) is added to the first embodiment. This performance detection function estimates and counts the number of target objects at the time of harvesting and shipping based on image recognition, and grasps them as a record. In the second embodiment, the portion where the work support output is performed is the same as in the first embodiment. In the following, this function will be described for the case of harvesting, but it can also be applied to the case of shipping.
[0104]
The prediction support function 42 and the prediction system 202 of FIG. 3 described above predict the expected harvest amount and the like, and in order to improve the accuracy of the prediction, it is effective to grasp and use the actual amount of the harvest, for example, the number of harvested pieces. Therefore, in the second embodiment, as shown in FIG. 24, a performance detection unit 30 (in particular, a harvest detection unit) corresponding to the performance detection function is added to the system of the first embodiment. When the worker W1 (FIG. 1) performs an operation such as harvesting, the performance detection unit 30 estimates and counts the number of target objects (for example, the number of harvested pieces) based on the image recognition result 301 from the object recognition unit 12 (FIG. 4 or FIG. 5) and the image 302 from the photographing unit 11. The performance detection unit 30 stores and outputs information including the number of harvested pieces obtained as a result of the count as the harvest record 306. The prediction support function 42 and the prediction system 202 described above can perform prediction processing such as the expected harvest amount by using the harvest record 306 detected by the performance detection unit 30. Further, this system may output information such as the harvest record 306 grasped by using this function to the instructor W2 of FIG. 3 or to the higher-level system, or may output it on the worker terminal 1 (for example, the number of harvested pieces may be displayed on the display surface 5).
[0105]
As a result, the on-site work support system of the second embodiment can efficiently measure the quantity of target objects in the work of harvesting and shipping crops accompanied by the above-mentioned work support output (for example, FIG. 13), and can grasp it as a record. When this function is used, the workload is smaller than with the conventional method of measuring quantities such as the harvest at the site, and the measurement and grasp can be performed with high accuracy. In conventional fields, for example, when grasping the harvest record, measurement is often limited to the approximate quantity of the harvested product in consideration of the workload and cost. Examples of such measuring methods include roughly measuring the weight in a unit such as a box containing a plurality of crops together, and measuring the number of pieces in a unit such as a box. On the other hand, according to the function of this system, the number of harvested pieces and the like can be automatically measured in accordance with operations such as harvesting performed according to the work support output. Even in a configuration without the prediction support function 42 and the prediction system 202, it is useful to grasp the actual results by this performance detection function.
[0106]
[Result detection function]
FIG. 24 shows the configuration of the performance detection unit 30 as a characteristic part of the field work support system of the second embodiment. The performance detection unit 30 is mounted on at least one of the worker terminal 1 or the server 2 in FIG. The performance detection unit 30 may be implemented, for example, as a part of the prediction support function 42 and the prediction system 202 of FIGS. 3 and 5, or may be implemented as a performance detection function and a performance detection system that cooperate with them independently. In this example, it is assumed that the performance detection unit 30 is realized by program processing or the like on the server 2. The performance detection unit 30 includes a harvest detection unit for the case of harvesting and a shipment detection unit for the case of shipping, and FIG. 24 shows the case of the harvest detection unit.
[0107]
The performance detection unit 30 inputs the recognition result 301 (recognition result information) output from the object recognition unit 12 of FIG. 4 or FIG. 5 and the image 302 output from the photographing unit 11. The image 302 and the recognition result 301 are pieces of information corresponding to the same time point. The performance detection unit 30 may further input and use the work instruction 310 (work instruction information) including harvest target information output from the instruction system 201 of FIG. 4 or from the target selection unit 13. The recognition result 301 includes information on each individual object in the image, as shown in FIG. 12 above. As described above, the work instruction 310 includes harvest target information such as "the harvest target is tomatoes having a maturity of 3 or higher", in other words, information for selecting and limiting the target. Further, the selection result information output from the target selection unit 13 (FIG. 4 or FIG. 7) may be used. In that case, the selection result information already reflects the selection of objects according to the harvesting work instruction and the like, as shown in FIG. 12.
[0108]
The performance detection unit 30 of FIG. 24 has, as more detailed processing units realized by program processing or the like, a harvest target detection unit 31, a worker detection unit 32, a distance calculation unit 33, a harvest determination unit 34, and a condition setting unit 35. The performance detection unit 30 repeats the processing in the same manner for each image at each time point in the time series (the images output by the photographing unit 11 in FIG. 4).
[0109]
The outline of the processing flow by the performance detection unit 30 is as follows. First, in the first step, the harvest target detection unit 31 uses the information of the recognition result 301 and the work instruction 310 to determine whether or not a harvest target is recognized in the image, and if one is recognized, detects that harvest target. The harvest target here is the work object 3 to be harvested by the worker W1, corresponding to the work instruction and the work support output. The number of harvested pieces is counted for each type of harvest target (for example, tomato). When there are a plurality of harvest targets in the image, the harvest target detection unit 31 detects each harvest target. The harvest target detection unit 31 outputs the harvest target information 303, which is the detection result information. The harvest target information 303 includes the ID of each harvest target and its position information in the image.
[0110]
Further, the harvest target detection unit 31 may narrow down the harvest target to be detected by using the harvest target information (or the selection result information from the target selection unit 13) of the work instruction 310 in addition to the recognition result 301. For example, when the harvest target information is specified as "tomato with maturity 3 or higher", the harvest target detection unit 31 detects the harvest target corresponding to the designation among the objects in the image. When narrowing down using work instruction information 310 or the like, the effect of further improving the accuracy and efficiency of actual result detection can be expected.
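The narrowing-down of the first step can be sketched as a filter over the recognition result 301 using the harvest target information of the work instruction 310. Note that in this specification a lower maturity number means a riper fruit, so "maturity 3 or higher" corresponds to level values 1 to 3; the record fields below are assumptions for illustration.

    # Illustrative sketch; record fields are assumptions.
    def detect_harvest_targets(recognition_result, instruction):
        """First step: pick objects matching the instructed type and ripeness.

        "Maturity 3 or higher" in the text means level values 1..3 (1 = ripest),
        i.e. maturity_level <= the instructed level threshold.
        """
        return [
            obj for obj in recognition_result
            if obj["kind"] == instruction["kind"]
            and obj["maturity_level"] <= instruction["max_maturity_level"]
        ]

    result = [{"id": "T1", "kind": "tomato", "maturity_level": 2, "pos": (320, 240)},
              {"id": "T2", "kind": "tomato", "maturity_level": 5, "pos": (500, 260)}]
    instr = {"kind": "tomato", "max_maturity_level": 3}   # "tomato with maturity 3 or higher"
    print([o["id"] for o in detect_harvest_targets(result, instr)])  # ['T1']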
[0111]
On the other hand, in the second step, the worker detection unit 32 uses the input image 302 to determine whether or not a body part such as a hand of the worker W1 (described as a worker object) is included in the image 302 at the same time point corresponding to the recognition result 301 of the first step, and if it is included, detects the worker object. Similar to the AI function 20 (FIG. 4), the worker detection unit 32 detects the worker object based on image analysis and machine learning. The worker object to be detected is not limited to a hand or arm, and may be a work glove used for work such as harvesting, a tool such as scissors, or a machine. This worker object can be defined in advance by the program or the like constituting the performance detection unit 30. The worker detection unit 32 outputs the worker object information 304, which is the detection result. The worker object information 304 includes the ID of the worker object and its position information in the image. As a modification, the AI function 20 (FIG. 4) may be provided with a function of detecting the worker object from an image, so that the recognition result 301 from the object recognition unit 12 includes the recognition result information of the worker object. In this case, the worker detection unit 32 may detect the worker object based on the input recognition result 301.
[0112]
When the harvest target is detected in the first step and the worker object is detected in the second step, then in the third step the distance calculation unit 33 uses the input harvest target information 303 and worker object information 304 to calculate the distance DST between the harvest target and the worker object in the image (FIG. 25 and the like, described later). This distance DST is a parameter relating to the state of proximity and overlap between the harvest target and the worker object, and is used for determining the harvesting operation and the harvested state. The distance calculation unit 33 outputs the distance information 305 including the distance DST of the calculation result.
[0113]
In the fourth step, the harvest determination unit 34 determines, based on the input distance information 305 and predetermined conditions, whether or not the harvest target has been harvested by the worker object such as a hand (described as "harvested"), and counts the number of harvested pieces according to the determination result. In this determination processing, the harvest determination unit 34 uses conditions such as threshold values preset by the condition setting unit 35. In this example, the thresholds include a distance threshold TD, a time threshold TT1 (first time threshold), and a time threshold TT2 (second time threshold). The threshold values of these conditions may be changed according to the determination processing method, or may be set by the user.
[0114]
The fourth step includes, in more detail, the following steps A, B, C, and D. First, in step A, the harvest determination unit 34 determines whether or not the distance DST between the harvest target and the worker object is equal to or less than the predetermined distance threshold TD (DST ≦ TD). In other words, this determination is a determination as to whether or not the worker object such as a hand is sufficiently close to the harvest target.
[0115]
When the distance DST is equal to or less than the distance threshold TD in step A, then in step B the harvest determination unit 34 determines whether the state in which the distance DST is small has continued for the time threshold TT1 or more (T1 ≧ TT1), where the time T1 corresponds to, for example, a number of image frames. This determination corresponds to a determination as to whether or not the harvest target is gripped or the like by the worker object such as a hand. A certain amount of time (TT1) is required in this determination so as to exclude cases of temporary overlap.
[0116]
When the state has continued for the time threshold TT1 or more in step B, then in step C the harvest determination unit 34 determines whether or not the harvest target is no longer recognized in the images of the time series, and when it is not recognized, determines whether the unrecognized time T2 (for example, counted as a number of image frames) has continued for the second time threshold TT2 or more (T2 ≧ TT2). This determination utilizes the fact that, when the harvesting operation is performed, the harvest target is no longer recognized because it goes out of the image together with the worker object such as a hand.
[0117]
When the time during which the harvest target is not recognized in step C has continued for the predetermined time or longer, the harvest determination unit 34 determines that the harvest target has been harvested ("harvested"). Then, in step D, the harvest determination unit 34 counts so as to increase the harvested-number parameter value by 1. The harvest determination unit 34 stores and outputs the record information 306 including the number of harvested pieces up to the present, which is the determination result. After that, the processing returns to the first step and is repeated in the same manner.
[0118]
As for the period during which the performance detection unit 30 counts the number of harvested pieces, for example, a method in which the user performs an operation of switching the function on and off at the start and end of the work can be applied.
[0119]
As described above, the performance detection unit 30 determines the harvesting operation and counts the number of harvested pieces by determining the state of proximity, overlap, and the like between the harvest target and the worker object such as a hand in the image. In summary, in the fourth step of the above-mentioned determination processing method, the harvest determination unit 34 determines that the harvest target has been harvested when the distance DST between the harvest target and the worker object becomes equal to or less than a predetermined value, that state continues for a predetermined time or more, and thereafter the state in which the harvest target is not recognized continues for a predetermined time or longer.
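The determination of steps A to D can be summarized as a per-frame state machine over the distance DST and the recognition state of a single harvest target, using the thresholds TD, TT1, and TT2. The following sketch counts time in frames and uses illustrative data structures and threshold values; it is an assumption-laden outline of the described flow, not the implementation of the performance detection unit 30.

    # Illustrative sketch of steps A-D; class/field names and frame-based timing are assumptions.
    import math

    def dst(pt, pw):
        """Distance DST between harvest-target center Pt and worker-object center Pw."""
        return math.hypot(pw[0] - pt[0], pw[1] - pt[1])

    class HarvestCounter:
        def __init__(self, td_px=60, tt1_frames=10, tt2_frames=15):
            self.TD, self.TT1, self.TT2 = td_px, tt1_frames, tt2_frames
            self.t1 = 0           # frames with DST <= TD (steps A and B)
            self.t2 = 0           # frames with the target unrecognized (step C)
            self.gripped = False  # True once the close state has lasted TT1 frames
            self.harvest_count = 0

        def update(self, target_pos, worker_positions):
            """Call once per image frame; positions are None/empty when not detected."""
            if target_pos is not None:
                self.t2 = 0
                if worker_positions:
                    # If both hands are detected, use the nearer one.
                    d = min(dst(target_pos, w) for w in worker_positions)
                    self.t1 = self.t1 + 1 if d <= self.TD else 0   # step A
                    if self.t1 >= self.TT1:                        # step B
                        self.gripped = True
                else:
                    self.t1 = 0
            else:
                if self.gripped:
                    self.t2 += 1                                   # step C
                    if self.t2 >= self.TT2:
                        self.harvest_count += 1                    # step D: count +1
                        self.t1, self.t2, self.gripped = 0, 0, False
            return self.harvest_count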
[0120]
As described above, according to the performance detection function in the second embodiment, the quantity of target objects can be efficiently measured and grasped as a record in the work of harvesting and shipping agricultural products accompanied by the work support output. Further, according to this performance detection function, it is possible to collate the information of the work instruction such as the harvest instruction and the work support output with the record information such as the number of harvested pieces, and to determine the difference between them. Based on this collation, for example, it is possible to grasp whether or not the worker W1 actually performed the harvesting operation on the harvest target designated by the work support output.
[0121]
[Image Example]
In the following, specific processing examples related to the harvest determination will be shown using image examples as shown in FIGS. 25 to 29. FIG. 25 shows a specific example of the detection result of the harvesting object and the worker object for an image at a certain time point. In the image, the harvesting object 251 and the worker object 252 are detected. Here, the position of the harvesting object is defined as the center position of the object region in the two-dimensional image (the horizontal axis is x and the vertical axis is y). The position Pt of the harvesting object 251 is (tx, ty). Here, the region of the harvesting object 251 based on the recognition result 301 is shown by a rectangular frame. The worker object 252 is, in this case, the right hand of the worker W1, and is schematically shown as a transparent region for explanation. Similarly, the position of the worker object is defined as the center position of the object region in the two-dimensional image. The position Pw of the worker object 252 is (wx, wy). Here, the region of the worker object 252 based on the image 302 is shown by a broken-line frame. The image may contain no harvesting object, or may contain a plurality of harvesting objects. In this example, one harvesting object 251 corresponding to "tomatoes having a maturity of 3 or higher" is included and detected in the image.
[0122]
The worker W1 is trying to reach out and take the harvesting object 251 as a harvesting operation according to the work support output (for example, FIG. 13). Such a hand may be used as an example of a worker object to be detected. The worker detection unit 32 may detect the shape, color, and the like of the hand. Not limited to this, when the worker W1 wears work gloves on his / her hand, the work gloves may be the detection target. Further, when the worker W1 is using a tool such as scissors for work or is using a machine for work, the tool or machine may be the detection target. Further, for example, a predetermined marker may be attached to the work gloves or the like in advance, and the marker may be used as a detection target. The worker detection unit 32 may treat the marker as a worker object and detect it. Further, the marker may be a code such as a barcode. By using these means, effects such as making it easier to detect the worker object from the image can be expected.
[0123]
Further, there may be a case where a plurality of worker objects such as the left hand and the right hand of the worker W1 can be detected in one image. Even in that case, the system of the second embodiment may detect each worker object and apply the same processing to each worker object. For example, when both hands are included in the image, the performance detection unit 30 may make a determination using the hand having the closer distance DST to the harvested object.
[0124]
The distance DST (in particular, DST1) in FIG. 25 is the distance between the position Pt (tx, ty) of the harvesting object 251 and the position Pw (wx, wy) of the worker object 252 in the image. The distance DST can be calculated, for example, by the following formula:
DST = √((wx − tx)² + (wy − ty)²)
[0125]
FIG. 26 is an example of an image at a time point after that of the image of FIG. 25, and in particular shows a case where the head and the field of view of the worker W1 have hardly moved. In this image, the position Pt of the harvesting object 251 has not changed, but the position Pw of the worker object 252 has come close to the position Pt. The distance DST becomes smaller; the distance at this time point is DST2 (DST2 < DST1).