
Method And System For Automatic Activity Update Generation

Abstract: Method and system for automatically generating activity updates. The system collects at least one of an audio, video, text, and/or image data pertaining to an event. The system further processes the data and identifies participants, activities, emotions, location, and time at which each of the identified activities took place. The system further generates an event summary based on the collected data. Further, a social networking platform update is generated from the event summary, which can be posted on selected social media websites. The system also gives options for the user to review and edit the summary, if required, before posting on the social media websites.


Patent Information

Application #:
Filing Date: 10 June 2015
Publication Number: 52/2016
Publication Type: INA
Invention Field: COMPUTER SCIENCE
Status:
Email: patent@bananaip.com
Parent Application:
Patent Number:
Legal Status:
Grant Date: 2023-05-25
Renewal Date:

Applicants

SAMSUNG R&D Institute India - Bangalore Private Limited
# 2870, Orion Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore-560 037, India

Inventors

1. Siddhartha Mukherjee
Samsung R&D Institute India – Bangalore, #2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037
2. Haresh Surendra Kumar Chudgar
Samsung R&D Institute India – Bangalore, #2870, Bagmane Constellation Business Park, Doddanekundi, Marathahalli, Bangalore - 560037

Specification

FORM 2
The Patent Act 1970
(39 of 1970)
&
The Patent Rules, 2005

COMPLETE SPECIFICATION
(SEE SECTION 10 AND RULE 13)

TITLE OF THE INVENTION

“Method and system for automatic activity update generation”

APPLICANTS:

Name: SAMSUNG R&D Institute India - Bangalore Private Limited
Nationality: India
Address: # 2870, Orion Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore-560 037, India

The following specification particularly describes and ascertains the nature of this invention and the manner in which it is to be performed:-

TECHNICAL FIELD
[001] The embodiments herein relate to social media and, more particularly, to automatically generating activity updates for social media.

BACKGROUND
[002] Social media websites have become a part of our lives, and most users use the social media sites extensively to keep in touch with their near and dear ones. Various social media websites provide different features so as to attract users. Live chat and options to update and share photos, videos, and news are some of the popular features offered by most of the social media websites currently available. Another important feature offered by present social media websites helps users to post updates pertaining to events. For example, if some friends are catching up, images and other data representing that event can be uploaded and shared on the social networking websites. Most of the social media websites allow users to tag other users while uploading images and/or while posting an update.
[003] However, in the present scenario, the user needs to manually collect data, and further process/edit it to convert the data to a desired format, in order to post the status update on any social media website. In certain scenarios, the user(s) may not be able to capture/record certain important moments in the meeting. As a result, the status update becomes incomplete. Further, the user needs to spend a considerable amount of time collecting data, editing, and preparing the status update. The user may further spend time tagging people who participated in the event, on the social website.

OBJECT OF INVENTION
[004] An object of the embodiments herein is to automatically generate a social update pertaining to an event.
[005] Another object of the embodiments herein is to automatically post the generated social update, on at least one social website.

SUMMARY
[006] In view of the foregoing, an embodiment herein provides a method for generating an automatic update for a social networking platform from a wearable device. Initially, at least one person in a field of view of the wearable device is identified by an activity update generation module. Further, at least one activity being performed by the at least one person is identified by the activity update generation module. Further, an event summary is generated for the at least one person and the at least one activity, by the activity update generation module, and a social networking platform update is generated for the generated event summary, by the activity update generation module.
[007] Embodiments further disclose a system for generating an automatic update for a social networking platform from a wearable device. The system identifies at least one person in a field of view of the wearable device, by an activity update generation module. The system further identifies at least one activity being performed by the at least one person, by the activity update generation module. Further, an event summary for the at least one person and the at least one activity is generated by the activity update generation module, and a social networking platform update for the generated event summary is generated by the activity update generation module.
[008] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings.

BRIEF DESCRIPTION OF THE FIGURES
[009] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0010] FIG. 1 illustrates a block diagram of auto updation system, as disclosed in the embodiments herein;
[0011] FIG. 2 illustrates a block diagram which shows components of auto update generation module, as disclosed in the embodiments herein;
[0012] FIG. 3 illustrates a flow diagram which shows steps involved in the process of generating an update for a social networking platform, using the auto updation system, as disclosed in the embodiments herein; and
[0013] FIG. 4 illustrates a flow diagram which shows steps involved in the process of generating an activity summary, using the auto updation system, as disclosed in the embodiments herein.

DETAILED DESCRIPTION OF EMBODIMENTS
[0014] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0015] The embodiments herein disclose auto update generation for a social networking platform from a wearable device, by automatically capturing and processing data. Referring now to the drawings, and more particularly to FIGS. 1 through 4, where similar reference characters denote corresponding features consistently throughout the figures, there are shown embodiments.
[0016] FIG. 1 illustrates a block diagram of auto updation system, as disclosed in the embodiments herein. The auto updation system 100 comprises of an auto update generation module 101, and a data collection module 102. The data collection module 102 further comprises of an audio input module 102.a, a video input module 102.b, a location identifier module 102.c, and a sensor module 102.d.
[0017] In a preferred embodiment, and for ease of use, the data collection module 102 can be associated with a wearable gadget such as a wearable glass. For example, a camera associated with the wearable gadget can function as the video input module 102.b, a mic associated with the wearable gadget can function as the audio input module 102.a, and a GPS module associated with the wearable gadget may function as the location identifier module 102.c. The sensor module 102.d can host any type of sensor, based on the type of data to be collected for the purpose of generating the update. For example, if temperature needs to be measured, a temperature sensor can be used.
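By way of illustration only, the arrangement of the data collection module 102 and its input sub-modules can be modelled as a simple container of input sources. The following is a minimal Python sketch; the class and field names are hypothetical and are not prescribed by the specification.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SensorReading:
    sensor_type: str   # e.g. "temperature", matching the sensor hosted by module 102.d
    value: float
    timestamp: float   # seconds since epoch

@dataclass
class DataCollectionModule:
    """Illustrative stand-in for data collection module 102 on a wearable gadget."""
    audio_clips: List[str] = field(default_factory=list)                # audio input module 102.a
    video_frames: List[str] = field(default_factory=list)               # video input module 102.b
    location: Optional[Tuple[float, float]] = None                      # location identifier module 102.c (lat, lon)
    sensor_readings: List[SensorReading] = field(default_factory=list)  # sensor module 102.d

    def record_sensor(self, sensor_type: str, value: float, timestamp: float) -> None:
        self.sensor_readings.append(SensorReading(sensor_type, value, timestamp))
```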
[0018] In an embodiment, the data collection module 102 can be configured to communicate and synchronize with data collection modules 102 in other wearable gadgets involved in the same event. In an embodiment, by synchronizing operations of data collection modules in multiple wearable gadgets, the data collected by all the wearable gadgets involved in the event can be stored in the same storage location. In another embodiment, by synchronizing operation of data collection modules 102 in all wearable gadgets involved in an event, data collected during that event can be associated with a unique Id that represents the event, such that the auto update generation module 101 can access all relevant data to generate the activity update.
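One way to realise the synchronisation described in this paragraph is to tag every captured item with a shared identifier for the event, so that the auto update generation module 101 can later retrieve all data for that event regardless of which gadget recorded it. The sketch below is only an assumption of how such a unique Id could be used; the specification does not prescribe the mechanism.

```python
import uuid
from collections import defaultdict

# Hypothetical shared store keyed by event Id; in practice this could be a
# common storage location accessible to all synchronised wearable gadgets.
event_store = defaultdict(list)

def start_event() -> str:
    """Generate a unique Id representing the event, shared by all gadgets."""
    return str(uuid.uuid4())

def store_capture(event_id: str, gadget_id: str, item: dict) -> None:
    """Associate a captured item (image/audio/sensor reading) with the event."""
    event_store[event_id].append({"gadget": gadget_id, **item})

def data_for_event(event_id: str) -> list:
    """All data the auto update generation module 101 can access for this event."""
    return event_store[event_id]
```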
[0019] The auto update generation module 101 can be configured to collect at least one input required to generate an event summary, which in turn can be used as the automatic update for a social networking platform. The auto update generation module 101 can be further configured to identify, by processing the collected inputs, at least one person who is in the field of view of the wearable gadget. In an embodiment, each session in which the wearable gadget collects input and the auto update generation module 101 generates the automatic update is termed as an ‘event’, and the person/people involved is termed as “participant”. The terms ‘participant’, ‘people’, and ‘person’ are used interchangeably throughout the specification.
[0020] The auto update generation module 101 can be further configured to identify at least one activity in the event. The auto update generation module 101 can be further configured to identify at least one emotion of at least one participant of the event. The auto update generation module 101 can be further configured to generate a summary of the event (i.e. event summary), based on the identified participants, activity, emotions, location, time stamp pertaining to each of the identified activities, and supporting data, wherein the supporting data can be images, video clips, and audio files. The auto update generation module 101 can be further configured to generate a social networking platform update from the event summary, which in turn can be posted on at least one social networking platform. The auto update generation module 101 can be configured to generate the social networking platform update by converting the event summary to a format that is supported by the selected social networking platform(s) on which the update needs to be posted. In various embodiments, the auto update generation module 101 can be configured to automatically post the update on the selected social networking platform(s), or to give an option for the user to manually post the generated update on selected social media platforms.
FIG. 2 illustrates a block diagram which shows components of the auto update generation module, as disclosed in the embodiments herein. The auto update generation module 101 further comprises of an Input/Output (I/O) interface 201, a memory module 202, a contact check module 203, an activity identification module 204, an emotion identification module 205, a graph generation module 206, a text summary generation module 207, and an update generation module 208.
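Read together, the paragraph above describes a processing chain: identify participants, activities and emotions, build an event summary, and convert it to a platform-specific update. The short Python sketch below only illustrates that ordering with placeholder matching logic; none of the function names or matching rules come from the specification.

```python
def identify_participants(inputs, contact_db):
    # Placeholder: in the real system, detected faces are compared against the contact database.
    return [name for name in contact_db if name in inputs.get("faces", [])]

def identify_activities(inputs, activity_db):
    # Placeholder: extracted features are matched against learnt activity models.
    return [a for a in activity_db if a in inputs.get("activity_cues", [])]

def identify_emotions(inputs, emotion_db):
    # Placeholder: facial/voice features are matched against learnt emotion models.
    return [e for e in emotion_db if e in inputs.get("emotion_cues", [])]

def generate_event_summary(inputs, contact_db, activity_db, emotion_db):
    """Illustrative ordering of the steps performed by module 101."""
    return {
        "participants": identify_participants(inputs, contact_db),
        "activities": identify_activities(inputs, activity_db),
        "emotions": identify_emotions(inputs, emotion_db),
        "location": inputs.get("location"),
        "media": inputs.get("media", []),
    }  # this summary would then be converted to the target platform's format

# Example run with toy inputs
inputs = {"faces": ["Haresh", "Siddhartha"], "activity_cues": ["drinking coffee"],
          "emotion_cues": ["happy"], "location": "Cafe Coffee Bar, Marathalli"}
print(generate_event_summary(inputs, ["Haresh", "Siddhartha", "Vivek"],
                             ["drinking coffee", "teaching"], ["happy", "tense"]))
```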
[0021] The I/O interface 201 can be configured to provide at least one channel with at least one suitable communication protocol, to allow the auto update generation module 101 to communicate with the audio input module 102.a, video input module 102.b, and the location identifier module 102.c. The I/O interface 201 can be further configured to provide at least one option for the user to view the activity update generated by the auto update generation module 101. The I/O interface 201 can be further configured to provide at least one option for the user to configure login credentials of at least one social media website of the user, so that the generated activity update can be automatically uploaded/posted to the social media website. The I/O interface 201 can be further configured to provide at least one option for the user to review and edit the activity update generated by the auto update generation module 101, if needed. The I/O interface 201 can be further configured to provide at least one option for the user to manually post the generated activity update to at least one selected social media website.
[0022] The memory module 202 can be a volatile or a non-volatile storage space, and can be configured to store all or selected information required for the activity update generation purpose. For example, the memory module 202 can maintain a contact database. The contact database comprises of all available information pertaining to people in at least one contact list of the user, which has been configured by the user. For example, the user can synchronize the contact lists of his/her social media websites, email accounts, mobile phone and so on, with the contact database, and information such as, but not limited to, name, contact number, email address, photo, and social media link gets stored in the contact database. In various embodiments, the data in the contact database is updated automatically, or based on a manual trigger from the user. The memory module 202 further stores an activity database which possesses various information related to different activities, such that the auto update generation module 101 can identify the type of activity, and the state of activity, based on the information stored in the activity database. For example, the activity database may possess learnt models of different activities containing pictures, audio, video or processed inertial sensor data that represent various stages of selected activities. In various embodiments, the data in the activity database is updated automatically, or based on a manual trigger from the user. The memory module 202 further stores an emotion database which possesses all data required to differentiate between different emotions of the user. For example, the emotion database may store data in any or all of image/audio/video formats so that the auto update generation module 101 can identify the type of emotion of a user, based on the information stored in the emotion database.
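The following is a minimal sketch of what the three databases held by the memory module 202 might contain. All field names and values are assumptions used only to make the roles of the contact, activity and emotion databases concrete; the specification only states what each database must allow the system to do.

```python
# Contact database: information synchronized from the user's contact lists.
contact_database = {
    "Haresh": {
        "contact_number": "+91-00000-00000",                   # placeholder value
        "email": "haresh@example.com",                         # placeholder value
        "photo": "haresh.jpg",                                 # reference image for face matching
        "social_media_link": "https://social.example/haresh",  # placeholder link
    },
}

# Activity database: learnt models and reference data per activity.
activity_database = {
    "drinking coffee": {"model": "coffee_model.bin", "stages": ["pouring", "sipping"]},
    "teaching":        {"model": "teaching_model.bin", "stages": ["explaining", "demonstrating"]},
}

# Emotion database: reference data used to discriminate emotions from image/audio/video.
emotion_database = {
    "happy": {"model": "happy_model.bin"},
    "tense": {"model": "tense_model.bin"},
}
```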
[0023] The contact check module 203 can be configured to collect at least one real-time input collected by the data collection module 102, and identify at least one participant of the event. For example, assume that the input is an image. The contact check module 203 then processes the image, and detects different faces in the image. Further, the contact check module 203 compares each of the detected faces with the contact database, and identifies the participants. The contact check module 203 can be configured to receive inputs from the participant’s social contact database to identify people not in its own contact list, wherein the social contact database is synchronized with the contact database by the user. The contact check module 203 can be further configured to prompt the user to add a new contact to the contact list, if any of the detected faces is not present in the contact database. In an embodiment, the contact check module 203 can use any suitable algorithm for the purpose of processing the input and comparing the data with the data in the contact database.
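A minimal sketch of the contact check step described above is given below. The face_matches callable stands in for whatever face comparison algorithm is used; the specification deliberately leaves the algorithm open, so everything here is illustrative.

```python
def check_contacts(detected_faces, contact_database, face_matches):
    """Map detected faces to known contacts; collect unknown faces separately.

    detected_faces: face crops/encodings extracted from the input image.
    contact_database: as sketched above, keyed by contact name with a reference photo.
    face_matches(face, photo): assumed comparison function returning True on a match.
    """
    participants, unknown = [], []
    for face in detected_faces:
        match = next((name for name, info in contact_database.items()
                      if face_matches(face, info["photo"])), None)
        if match is not None:
            participants.append(match)
        else:
            unknown.append(face)  # the user would be prompted to add these as new contacts
    return participants, unknown
```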
[0024] The activity identification module 204 can be configured to identify, by processing at least one input collected by the data collection module 102, at least one activity being performed by at least one participant identified by the contact check module 203. In order to identify the activity, the activity identification module 204 extracts important features from the captured image, video, audio, and sensor data, and uses the trained models stored in the activity database to identify the activity. The type of sensor can vary, based on the type of input required by the auto update generation module 101. If any match is found, the corresponding activity is identified as the activity being performed by the at least one participant of the event. In an embodiment, the activity identification module 204 can use any suitable algorithm for the purpose of processing the input and comparing the data with the data in the activity database.
[0025] The emotion identification module 205 can be configured to identify, based on data stored in the emotion database, at least one emotion of at least one participant identified by the contact check module 203. The emotion identification module 205 extracts important features from the captured image, video and recorded audio and uses the trained models stored in the emotion database to identify the emotions of each of the participants. If any match is found, corresponding emotion is identified as the emotion of at least one participant of the event. In an embodiment, the emotion identification module 205 can use any suitable algorithm for the purpose of processing the input and comparing the data with the data in the emotion database.
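Both the activity identification module 204 and the emotion identification module 205 reduce, in the description above, to comparing extracted features against trained models and taking the best match. The sketch below shows that shared pattern; the score callable and the 0.5 threshold are assumptions, since the specification allows any suitable algorithm.

```python
def classify_against_models(features, model_database, score, threshold=0.5):
    """Illustrative matching step shared by modules 204 and 205.

    features: whatever was extracted from the captured image/video/audio/sensor data.
    model_database: the activity database or the emotion database sketched earlier.
    score(features, entry): assumed comparison function returning a similarity in [0, 1].
    Returns the best-matching label, or None if no model matches well enough.
    """
    best_label, best_score = None, 0.0
    for label, entry in model_database.items():
        s = score(features, entry)
        if s > best_score:
            best_label, best_score = label, s
    return best_label if best_score >= threshold else None
```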
[0026] The graph generation module 206 can be configured to generate an activity-emotion-time graph, based on the data received from the contact check module 203, the activity identification module 204, and the emotion identification module 205. The graph generation module 206 picks different images and/or video and/or audio corresponding to different activities, arranges them as per the time stamp, and annotates them with the activity detected by the activity identification module 204. In an embodiment, the graph generation module 206 picks different images/audio/video in a random manner. In another embodiment, the graph generation module 206 picks the images/videos/audio based on a specific sequence, as pre-configured by the user or otherwise. The graph generation module 206 then picks images and/or video and/or audio corresponding to different emotions as identified by the emotion identification module 205, and arranges them as per the time stamp. Using the selected images/audio/video, the graph generation module 206 generates an activity-time graph and an emotion-time graph. The activity-time graph depicts identified activities that are arranged according to the time stamp. The emotion-time graph depicts identified emotions that are arranged according to the time stamp. Further, the graph generation module 206 combines the activity-time graph and the emotion-time graph, according to timestamps, to form an activity-emotion-time graph.
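The combination step can be pictured as merging two timestamp-ordered lists into one. The sketch below is a minimal illustration under the assumption that each entry is a (timestamp, label, media) triple; the specification does not fix a data structure for the graph.

```python
def build_activity_emotion_time_graph(activity_items, emotion_items):
    """Combine an activity-time graph and an emotion-time graph by timestamp.

    Each item is a (timestamp, label, media) triple, where media is the picked
    image/audio/video annotating that point on the graph.
    """
    activity_time_graph = sorted(activity_items)  # activities ordered by time stamp
    emotion_time_graph = sorted(emotion_items)    # emotions ordered by time stamp
    combined = sorted(
        [("activity",) + item for item in activity_time_graph] +
        [("emotion",) + item for item in emotion_time_graph],
        key=lambda entry: entry[1],               # order the merged entries by timestamp
    )
    return combined

# Example (timestamps given as "HH:MM" strings for readability)
graph = build_activity_emotion_time_graph(
    [("10:00", "learning", "img1.jpg"), ("10:20", "colouring", "img2.jpg")],
    [("11:00", "happy", "img3.jpg")],
)
print(graph)
```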
[0027] The text summary generation module 207 can be configured to generate a text summary, based on the data received from the contact check module 203, the activity identification module 204, and the emotion identification module 205. The text summary generation module 207 creates a textual summary by inferring the outputs of the said modules; for example, concatenating the contacts from the contact check module 203, identifying dominant emotions of each participant from the emotion identification module 205 and presenting each emotion as a word such as happy, tense, anxious and so on, presenting the activity from the activity identification module 204 as a word or words such as drinking coffee, laughing, learning craft, learning exercise and so on, identifying the location from the location identifier module 102.c, and combining the textual data in the form of a summarized statement such as "Haresh feeling happy with Siddhartha and Vivek drinking coffee at Cafe Coffee Bar, Marathalli, Bangalore." or "Sreeja learning to walk, Siddhartha feeling happy and tense!".
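A minimal sketch of such a summarized statement being assembled is shown below. The real module infers the dominant emotion and the activity word from the identification modules; here they are simply passed in, and the sentence pattern is an assumption modelled on the examples quoted above.

```python
def generate_text_summary(participants, emotions, activity, location):
    """Assemble a one-line textual summary in the style of the quoted examples."""
    if not participants:
        return ""
    first, others = participants[0], participants[1:]
    summary = f"{first} feeling {' and '.join(emotions)}"
    if others:
        summary += f" with {' and '.join(others)}"
    summary += f" {activity}"
    if location:
        summary += f" at {location}"
    return summary + "."

print(generate_text_summary(["Haresh", "Siddhartha", "Vivek"], ["happy"],
                            "drinking coffee", "Cafe Coffee Bar, Marathalli, Bangalore"))
# -> Haresh feeling happy with Siddhartha and Vivek drinking coffee at Cafe Coffee Bar, Marathalli, Bangalore.
```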
[0028] The update generation module 208 can be configured to collect the outputs of the graph generation module 206 and the text summary generation module 207, and combine the text summary and the activity-emotion-time graph to generate an activity summary. The update generation module 208 can be further configured to generate a social networking platform update from the activity summary, which in turn can be posted as the social update on at least one social page of the user that has been linked with the update generation module 208. In an embodiment, the update generation module 208 posts the update automatically, as soon as the update has been generated. In another embodiment, the update generation module 208 presents the update to the user for review, with the required edit permissions, and posts the update after receiving approval from the user.
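The two posting embodiments, fully automatic posting versus review-then-post, can be sketched as a single function with a review flag. The post_fn callable below is a placeholder for whatever posts to a linked social page; it is not an API defined by the specification.

```python
def generate_and_post(text_summary, graph, post_fn, require_review=True):
    """Combine the text summary and graph into an activity summary, then post it.

    post_fn(update): assumed callable that publishes the update to a linked social page.
    require_review: True mirrors the review-then-post embodiment; False posts automatically.
    """
    activity_summary = {"text": text_summary, "graph": graph}
    if require_review:
        print("Review before posting:", activity_summary["text"])
        approved = input("Post this update? [y/N] ").strip().lower() == "y"
        if not approved:
            return None  # user chose not to post
    post_fn(activity_summary)
    return activity_summary
```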
[0029] FIG. 3 illustrates a flow diagram which shows steps involved in the process of generating an update for a social networking platform, using the auto updation system, as disclosed in the embodiments herein. The auto updation system 100 initially collects at least one real-time input using the data collection module 102. If more than one wearable gadget is used in the same event, working of data collection modules 102 of the wearable gadgets can be synchronized, so that the auto update generation module 101 can access data collected by all the data collection modules 102, from same or different storage locations, for the purpose of generating the activity update.
[0030] Further, by processing the input, the auto update generation module 101 in the auto updation system 100 identifies (302) at least one participant of the event, based on the data stored in the contact database. In a scenario in which the auto updation system 100 is associated with a wearable gadget used by at least one participant, the participants of the event can be all the people in the field of view of the wearable gadget.
[0031] The auto update generation module 101 further identifies (304) at least one activity being performed at the event, based on the data in the activity database. In an embodiment, the auto update generation module 101 identifies activities being performed that are specific to the event. For example, assume that the event is a teacher teaching a student; the auto update generation module 101 then identifies ‘teaching’ as the activity. In another embodiment, the auto update generation module 101 identifies activities being performed that are specific to participants of the event. For example, in the above mentioned teacher-student scenario, the auto update generation module 101 identifies ‘teaching’ as the activity specific to the teacher, and ‘learning’ as the activity specific to the student.
[0032] The auto update generation module 101 further generates (306) an activity summary of the event, based on at least one parameter such as, but not limited to, the participants and the activity identified. The auto update generation module 101 further uses the audio/video/image data collected by the data collection module 102.
[0033] Further, from the activity summary, a social media platform update can be generated (308). In this process, the auto update generation module 101 can convert the activity summary to at least one suitable format that is supported and/or recognized by the social media platform in which the update needs to be posted. In an embodiment, the auto update generation module 101 automatically posts the generated social media platform update, to selected social media websites, as pre-configured by the user. In another embodiment, the auto update generation module 101 provides an option for the user to review the generated status update, and make changes if required, using suitable editing permissions and options. Further, the user may choose to post or not post on selected social media websites.
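Step 308 involves converting the activity summary into a format the target platform supports. The sketch below illustrates one way such a conversion could look; the platform names, character limits, and media rules are purely assumptions, since the specification only requires a supported/recognized format.

```python
def format_for_platform(activity_summary, platform):
    """Convert an activity summary to a (hypothetical) platform-specific update."""
    limits = {"microblog": 280, "photo_feed": 2200}  # assumed per-platform text limits
    text = activity_summary.get("text", "")
    max_len = limits.get(platform, 1000)
    if len(text) > max_len:
        text = text[: max_len - 1] + "…"             # truncate to fit the platform limit
    media = activity_summary.get("media", [])
    if platform == "photo_feed" and not media:
        return None                                  # assumed rule: this platform needs an image
    return {"text": text, "media": media}
```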
[0034] The various actions in method 300 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 3 may be omitted.
[0035] FIG. 4 illustrates a flow diagram which shows steps involved in the process of generating an activity summary, using the auto updation system, as disclosed in the embodiments herein.
[0036] After identifying the participants and the activity, the auto update generation module 101 further identifies (402) at least one emotional parameter of at least one identified participant of the event. For example, in the teacher-student scenario mentioned above, the teacher and student may be smiling/laughing at various time instances, or the teacher may be angry and shouting at a different time instance. The auto update generation module 101 further collects information pertaining to the location where the event took place. The auto update generation module 101 also collects (404) time stamp information pertaining to the event and each of the identified activities. In an embodiment, the time stamp information is recorded by the data collection module 102, as and when the data is collected.
[0037] Further, an activity-emotion-time graph is generated (406) by the graph generation module 206, by picking different images and/or video and/or audio corresponding to different activities, arranging them as per the time stamp, and by annotating them with the activity identified by the activity identification module 204. In various embodiments, the data for generating the activity-emotion-time graph is picked up in a random order, or as per a sequence pre-defined by a user. In this process, an activity-time graph may be generated in which the activities are arranged according to the time stamp. Further, an emotion-time graph is generated in which identified emotions are arranged according to the time stamp. Further, the graph generation module 206 combines the activity-time graph and the emotion-time graph, according to timestamps, to form the activity-emotion-time graph.
[0038] Further, a text summary is generated (408) by the text summary generation module 207, based on the data received from the contact check module 203, the activity identification module 204, and the emotion identification module 205, wherein the text summary represents the data pertaining to the event in a textual format. The text summary generation module 207 creates a textual summary by inferring the outputs of the said modules; for example, concatenating the contacts from the contact check module 203, identifying dominant emotions of each participant from the emotion identification module 205 and presenting each emotion as a word such as happy, tense, anxious and so on, presenting the activity from the activity identification module 204 as a word or words such as drinking coffee, laughing, learning craft, learning exercise and so on, identifying the location from the location identifier module 102.c, and combining the textual data in the form of a summarized statement. Examples of the text summary that can be generated by the text summary generation module are: "Haresh feeling happy with Siddhartha and Vivek drinking coffee at Cafe Coffee Bar, Marathalli, Bangalore." and "Sreeja learning to walk, Siddhartha feeling happy and tense!".
[0039] Further, a consolidated summary of the event is generated (410) by merging the activity-emotion-time graph, and the text summary. The consolidated summary serves as the activity summary. The various actions in method 400 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 4 may be omitted.
Use-Case Scenario 1:
[0040] In scenario 1, assume that 4 friends are chatting at a restaurant, having coffee. One of them is wearing a smart glass, and the smart glass is equipped with at least one data collection module to collect at least one input such as, but not limited to, audio, video, text, image, location, and time. The smart glass, by processing at least one of the collected inputs, identifies the participants of the event and the emotional state of at least one of the participants, and also identifies at least one activity (for example, drinking coffee) being performed by the participants. The smart glass further generates a summary of the event, based on the identified participants, activities, emotions, and other parameters such as time, location and so on. The user can be given an option to prompt the smart glass to add media files as part of the summary. The summary can be posted as a status update on their social networking websites. Assuming that the participants of the event are Aishwarya, Kanmani, Rakesh and Som, the summary would be:
“Aishwarya, Kanmani, Rakesh and Som having a great time at Dominos near Yelachenahalli”
Use-Case Scenario 2:
[0041] In this scenario, a mom teaches craft work to her kid. The smart glass that the mom is wearing captures different inputs such as, but not limited to, audio, video, location, time, and image. Further, by processing the inputs, the smart glass identifies the activity as teaching, and further identifies sub-activities and the corresponding timestamp information. Here, “sub-activities” refer to steps/different stages of the activity (learning, coloring, finishing touches and so on). The smart glass may detect the age of the kid by referring to data in any social networking profile of the mom, and the summary would be:
“Teaching craft to my 4 year old kid.. Feeling awesome ☺”
[0042] The smart glass may, while presenting the summary to the user (in this scenario, the mom), also allow the user to opt to add media files that represent various stages of the event, along with the corresponding time stamps. A sample set of time stamps is depicted below:

Learning (10.00 AM) → Colouring (10.20 AM) → Finishing touches (10.55 AM) → Happy (11.00 AM) → Proud (11.05 AM)
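For illustration only, the labelled time stamps above can be represented as simple (label, time) pairs and rendered as a timeline string; the format is an assumption, not something the specification defines.

```python
def render_timeline(stamps):
    """Render labelled time stamps as a timeline string (illustrative format)."""
    return " -> ".join(f"{label} ({time})" for label, time in stamps)

print(render_timeline([("Learning", "10.00 AM"), ("Colouring", "10.20 AM"),
                       ("Finishing touches", "10.55 AM"),
                       ("Happy", "11.00 AM"), ("Proud", "11.05 AM")]))
```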
[0043] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements shown in Fig. 1 include blocks which can be at least one of a hardware device, or a combination of hardware device and software module.
[0044] The embodiments disclosed herein specify a system for automatically generating activity updates for social media websites. The mechanism allows real time generation of activity updates, providing a system thereof. Therefore, it is understood that the scope of protection is extended to such a system and, by extension, to a computer readable means having a message therein, said computer readable means containing a program code for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in a preferred embodiment using the system together with a software program written in, for example, Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or software modules being executed on at least one hardware device. The hardware device can be any kind of device which can be programmed including, for example, any kind of a computer like a server or a personal computer, or the like, or any combination thereof, for example, one processor and two FPGAs. The device may also include means which could be, for example, hardware means like an ASIC, or a combination of hardware and software means, an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means are at least one hardware means or at least one hardware-cum-software means. The method embodiments described herein could be implemented in pure hardware or partly in hardware and partly in software. Alternatively, the embodiment may be implemented on different hardware devices, for example, using a plurality of CPUs.
[0045] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims as described herein.

CLAIMS
What is claimed is:
1. A method for generating an automatic update for a social networking platform from a wearable device, said method comprising:
identifying at least one person in a field of view of said wearable device, by an activity update generation module;
identifying at least one activity being performed by said at least one person, by said activity update generation module;
generating an event summary for said at least one person and said at least one activity, by said activity update generation module; and
generating a social networking platform update for said generated event summary, by said activity update generation module.
2. The method as claimed in claim 1, wherein said at least one person is identified based on at least one of an audio, video, and sensor inputs, wherein said at least one audio, video, and sensor inputs is compared with a contact database.
3. The method as claimed in claim 1, wherein said at least one activity is identified based on at least one of an audio, video, and sensor inputs, wherein said at least one audio, video, and sensor inputs is compared with an activity database.
4. The method as claimed in claim 1, wherein generating said event summary further comprises of:
identifying at least one emotional parameter of said at least one person, by said activity update generation module;
identifying location of said at least one person, by said activity update generation module;
identifying timestamp pertaining to occurrence of at least one of said identified activity and said emotional parameter, by said activity update generation module;
generating an activity-emotion-time graph, based on at least one of said activity, emotional parameter, location, and timestamp, by said activity update generation module;
generating a text summary, based on at least one of said activity, emotional parameter, location, and timestamp, by said activity update generation module; and
generating a social update, based on said activity-emotion-time graph, and said text summary, by said activity update generation module.
5. The method as claimed in claim 4, wherein said at least one emotional parameter of said at least one person is identified based on at least one of an audio, video, and sensor inputs, wherein said at least one audio, video, and sensor inputs is compared with an emotion database.
6. The method as claimed in claim 4, wherein said social update is at least one of a text, an audio, a video, an image, and a combination thereof.
7. A system for generating an automatic update for a social networking platform from a wearable device, said system configured for:
identifying at least one person in a field of view of said wearable device, by an activity update generation module;
identifying at least one activity being performed by said at least one person, by said activity update generation module;
generating an event summary for said at least one person and said at least one activity, by said activity update generation module; and
generating a social networking platform update for said generated event summary, by said activity update generation module.
8. The system as claimed in claim 7, wherein said activity update generation module is further configured to identify said at least one person by comparing at least one of an audio, video, and sensor input, pertaining to said event, with a contact database, by a contact check module of said activity update generation module.
9. The system as claimed in claim 7, wherein said activity update generation module is further configured to identify said at least one activity by comparing at least one of an audio, video, and sensor input, pertaining to said event, with an activity database, by an activity identification module of said activity update generation module.
10. The system as claimed in claim 7, wherein said activity update generation module is further configured to generate said event summary by:
identifying at least one emotional parameter of said at least one person, by an emotion identification module of said activity update generation module;
identifying location of said at least one person, based on at least one input from a location identifier module, by said activity update generation module;
identifying timestamp pertaining to occurrence of at least one of said identified activity and said emotional parameter, by said activity update generation module;
generating an activity-emotion-time graph, based on at least one of said activity, emotional parameter, location, and timestamp, by a graph generation module of said activity update generation module;
generating a text summary, based on at least one of said activity, emotional parameter, location, and timestamp, by a text summary generation module of said activity update generation module; and
generating a social update, based on said activity-emotion-time graph, and said text summary, by an update generation module of said activity update generation module.
11. The system as claimed in claim 10, wherein said emotion identification module is further configured to identify said at least one emotional parameter of said at least one person, based on at least one of an audio, video, and sensor inputs, wherein said at least one audio, video, and sensor inputs is compared with an emotion database.
12. The system as claimed in claim 10, wherein said update generation module is configured to generate said social update as at least one of a text, an audio, a video, an image, and a combination thereof.
Date: 9th June 2015 Signature:
Kalyan Chakravarthy
ABSTRACT

Method and system for automatically generating activity updates. The system collects at least one of an audio, video, text, and/or image data pertaining to an event. The system further processes the data and identifies participants, activities, emotions, location, and time at which each of the identified activities took place. The system further generates an event summary based on the collected data. Further, a social networking platform update is generated from the event summary, which can be posted on selected social media websites. The system also gives options for the user to review and edit the summary, if required, before posting on the social media websites.

FIG. 3

,TagSPECI:FORM 2
The Patent Act 1970
(39 of 1970)
&
The Patent Rules, 2005

COMPLETE SPECIFICATION
(SEE SECTION 10 AND RULE 13)

TITLE OF THE INVENTION

“Method and system for automatic activity update generation”

APPLICANTS:

Name Nationality Address
SAMSUNG R&D Institute India - Bangalore Private Limited India # 2870, Orion Building, Bagmane Constellation Business Park, Outer Ring Road, Doddanekundi Circle, Marathahalli Post, Bangalore-560 037, India

The following specification particularly describes and ascertains the nature of this invention and the manner in which it is to be performed:-

TECHNICAL FIELD
[001] The embodiments herein relate to social media and, more particularly, to automatically generate activity updates for social media.

BACKGROUND
[002] Social media websites have become a part of our life, and most of the users extensively use the social media sites to be in touch with dear and near. Various social media websites provide different features so as to attract the users. Live chat, options to update and share photos, videos, and news, are some of the popular features being offered by most of the social media websites currently available. Another important feature offered by present social media websites helps users to post updates pertaining to events. For example, if some friends are catching up, images and other data to represent that event can be uploaded and shared on the social networking websites. Most of the social media websites allow users to tag other users, while uploading images and/or while posting an update.
[003] However, in the present scenario, the user needs to manually collect data, and process/edit further to convert the data to a desired format, to post the status update on any social media website. In certain scenarios, the user (s) may not be able to capture/record certain important moments in the meeting. As a result, the status update becomes incomplete. Further, the user needs to spend a considerable amount of time collecting data, editing and preparing the status update. The user may further spend time tagging people who participated in the event, in the social website.

OBJECT OF INVENTION
[004] An object of the embodiments herein is to automatically generate a social update pertaining to an event.
[005] Another object of the embodiments herein is to automatically post the generated social update, on at least one social website.

SUMMARY
[006] In view of the foregoing, an embodiment herein provides a method for generating an automatic update for a social networking platform from a wearable device. Initially, at least one person in a field of view of the wearable device is identified by an activity update generation module. Further, at least one activity being performed by the at least one person is identified by the activity update generation module. Further, an event summary is generated for the at least one person and the at least one activity, by the activity update generation module, and a social networking platform update is generated for the generated event summary, by the activity update generation module.
[007] Embodiments further disclose a system for generating an automatic update for a social networking platform from a wearable device. The system identifies at least one person in a field of view of the wearable device, by an activity update generation module. The system further identifies at least one activity being performed by the at least one person, by the activity update generation module. Further, an event summary for the at least one person and the at least one activity is generated by the activity update generation module, and a social networking platform update for the generated event summary is generated by the activity update generation module.
[008] These and other aspects of the embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings.

BRIEF DESCRIPTION OF THE FIGURES
[009] The embodiments herein will be better understood from the following detailed description with reference to the drawings, in which:
[0010] FIG. 1 illustrates a block diagram of auto updation system, as disclosed in the embodiments herein;
[0011] FIG. 2 illustrates a block diagram which shows components of auto update generation module, as disclosed in the embodiments herein;
[0012] FIG. 3 illustrates a flow diagram which shows steps involved in the process of generating an update for a social networking platform, using the auto updation system, as disclosed in the embodiments herein; and
[0013] FIG. 4 illustrates a flow diagram which shows steps involved in the process of generating an activity summary, using the auto updation system, as disclosed in the embodiments herein.

DETAILED DESCRIPTION OF EMBODIMENTS
[0014] The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[0015] The embodiments herein disclose auto update generation for a social networking platform from a wearable device, by automatically capturing and processing data. Referring now to the drawings, and more particularly to FIGS. 1 through 4, where similar reference characters denote corresponding features consistently throughout the figures, there are shown embodiments.
[0016] FIG. 1 illustrates a block diagram of auto updation system, as disclosed in the embodiments herein. The auto updation system 100 comprises of an auto update generation module 101, and a data collection module 102. The data collection module 102 further comprises of an audio input module 102.a, a video input module 102.b, a location identifier module 102.c, and a sensor module 102.d.
[0017] In a preferred embodiment, and for the ease of use, the data collection module 102 can be associated with a wearable gadget such as a wearable glass. For example, a camera associated with the wearable gadget can function as the video input module 102.b, a mic associated with the wearable gadget can function as the audio input module 102.a, and a GPS module associated with the wearable gadget may function as the location identifier module 102.c. The sensor module 102.d can host any type of sensor based on the type of data to be collected for the purpose generating the update. For example, if temperature needs to be measured, a temperature sensor can be used.
[0018] In an embodiment, the data collection module 102 can be configured to communicate and synchronize with data collection modules 102 in other wearable gadgets involved in the same event. In an embodiment, by synchronizing operations of data collection modules in multiple wearable gadgets, the data collected by all the wearable gadgets involved in the event can be stored in the same storage location. In another embodiment, by synchronizing operation of data collection modules 102 in all wearable gadgets involved in an event, data collected during that event can be associated with a unique Id that represents the event, such that the auto update generation module 101 can access all relevant data to generate the activity update.
[0019] The auto update generation module 101 can be configured to collect at least one input required to generate an event summary, which in turn can be used as the automatic update for a social networking platform. The auto update generation module 101 can be further configured to identify, by processing the collected inputs, at least one person who is in the field of view of the wearable gadget. In an embodiment, each session in which the wearable gadget collects input and the auto update generation module 101 generates the automatic update is termed as an ‘event’, and the person/people involved is termed as “participant”. The terms ‘participant’, ‘people’, and ‘person’ are used interchangeably throughout the specification.
[0020] The auto update generation module 101 can be further configured to identify at least one activity in the event. The auto update generation module 101 can be further configured to at least one emotion of at least one participant of the event. The auto update generation module 101 can be further configured to generate a summary of the event (i.e. event summary), based on the identified participants, activity, emotions, location, time stamp pertaining to each of the identified events, and supporting data, wherein the supporting data can be image, video clips, and audio files. The auto update generation module 101 can be further configured to generate a social networking platform update from the event summary, which in turn can be posted on at least one social networking platform. The auto update generation module 101 can be configured to generate the social networking platform update by converting the event summary to a format that is supported by selected social networking platform(s) in which the update needs to be posted. In various embodiments, the auto update generation module 101 can be configured to automatically post the update on selected social networking platform(s) or to give an option for the user to manually post the generated update on selected social media platforms. FIG. 2 illustrates a block diagram which shows components of auto update generation module, as disclosed in the embodiments herein. The auto update generation module 101 further comprises of an Input/Output (I/O) interface 201, a memory module 202, a contact check module 203, an activity identification module 204, an emotion identification module 205, a graph generation module 206, a text summary generation module 207, and an update generation module 206.
[0021] The I/O interface 201 can be configured to provide at least one channel with at least one suitable communication protocol, to allow the auto update generation module 101 to communicate with the audio input module 102.a, video input module 102.b, and the location identifier module 102.c. The I/O interface 201 can be further configured to provide at least one option for the user to view the activity update generated by the auto update generation module 101. The I/O interface 201 can be further configured to provide at least one option for the user to configure login credentials of at least one social media website of the user, so that the generated activity update can be automatically uploaded/posted to the social media website. The I/O interface 201 can be further configured to provide at least one option for the user to review and edit the activity update generated by the auto update generation module 101, if needed. The I/O interface 201 can be further configured to provide at least one option for the user to manually post the generated activity update to at least one selected social media website.
[0022] The memory module 202 can be a volatile or a non-volatile storage space, and can be configured to store all or selected information required for the activity update generation purpose. For example, the memory module 202 can maintain a contact database. The contact database comprises of all available information pertaining to people in at least one contact list of the user, which has been configured by user. For example, the user can synchronize contact list of his/her social media websites, email accounts, mobile phone and so on, with the contact database, and information such as, but not limited to name, contact number, email address, photo, and social media link get stored in the contact database. In various embodiments, the data in the contact database is updated automatically, or based on manual trigger from the user. The memory module 202 further stores an activity database which possesses various information related to different activities, such that the auto update generation module 101 can identify type of activity, and state of activity, based on the information stored in the activity database. For example, the activity database may possess learnt models of different activities containing pictures, audio, video or processed inertial sensor data that represent various stages of selected activities. In various embodiments, the data in the contact database is updated automatically, or based on manual trigger from the user. The memory module 202 further stores an emotion database which possesses all data required to differentiate between different emotions of the user. For example, the emotion database may store data in any or all of images/audio/video formats so that the auto update generation module 101 can identify type of emotion of a user, based on the information stored in the emotion database.
[0023] The contact check module 203 can be configured to collect at least one real-time input collected by the data collection module 102, and identify at least one participant of the event. For example, assume that the input is an image. The contact check module 203 then processes the image, and detects different faces in the image. Further, the contact check module 203 compares each of the detected faces with the contact database, and identifies the participants. The contact check module 203 can be configured to receive inputs from the participant’s social contact database to identify people not in its own contact list, wherein the social contact database is synchronized with the contact database, by the user. The contact check module 203 can be further configured to prompt the user to add a new user to the contact list, if any of the detected faces is not present in the contact database. In an embodiment, the contact check module 203 can use any suitable algorithm for the purpose of processing the input and comparing the data with the data in the contact database.
[0024] The activity identification module 204 can be configured to identify, by processing at least one input collected by the data collection module 102, at least one activity being performed by at least one participant identified by the contact check module 203. In order to identify the activity, the activity identification module 204 extracts important features from the captured image, video, audio and sensors and uses the trained models stored in the activity database, to identify the activity. The type of sensor can vary, based on the type of input required by the auto update generation module 101. If any match is found, corresponding activity is identified as the activity being performed by the at least one participant of the event. In an embodiment, the activity identification module 204 can use any suitable algorithm for the purpose of processing the input and comparing the data with the data in the activity database.
[0025] The emotion identification module 205 can be configured to identify, based on data stored in the emotion database, at least one emotion of at least one participant identified by the contact check module 203. The emotion identification module 205 extracts important features from the captured image, video and recorded audio and uses the trained models stored in the emotion database to identify the emotions of each of the participants. If any match is found, corresponding emotion is identified as the emotion of at least one participant of the event. In an embodiment, the emotion identification module 205 can use any suitable algorithm for the purpose of processing the input and comparing the data with the data in the emotion database.
[0026] The graph generation module 206 can be configured to generate an activity-emotion-time graph, based on the data received from the contact check module 203, activity identification module 204, and the emotion identification module 205. The graph generation module 206 picks different images and/or video and/or audio corresponding to different activities and arranges them as per the time stamp and annotates them with the detected activity identified by the activity identification module 204. In an embodiment, the graph generation module 206 picks different images/audio/video on a random manner. In another embodiment, the graph generation module 206 picks the images/videos/audio based on any specific sequence, as pre-configured by the user or any other. The graph generation module 206 then picks images and/or video and/or audio corresponding to different emotions as identified by the emotion identification module 205 and arranges them as per the time stamp. By using the selected images/audio/video, the graph generation module 206 generates an activity-time graph and an emotion-time graph. The activity-time graph depicts identified activities that are arranged according to the time stamp. The emotion-time graph depicts identified emotions that are arranged according to the time stamp. Further, the graph generation module 206 combines the activity-time graph and the emotion-time graph, according to timestamps, to form an activity-emotion-time graph.
[0027] The text summary generation module 207 can be configured to generate a text summary, based on the data received from the contact check module 203, activity identification module 204, and the emotion identification module 205. The text summary generation module 207 creates a textual summary by inferring the outputs of the said modules; for example, concatenating the contacts from the contact check module 203, identifying dominant emotions of each participant from the emotion identification module 205 and presenting the emotion as a word such as happy, tense anxious and so on, presenting the activity from the activity identification module 204 as a word(s) such as drinking coffee, laughing, learning craft, learning exercise and so on, and identifying the location from the location identifier module 102.c, and combining the textual data in the form of a summarized statement such as "Haresh feeling happy with Siddhartha and Vivek drinking coffee at Cafe Coffee Bar, Marathalli, Bangalore." or "Sreeja learning to walk, Siddhartha feeling happy and tense!".
[0028] The update generation module 208 can be configured to collect the outputs of the graph generation module 206 and the text summary generation module 207, and to combine the text summary and the activity-emotion-time graph to generate an activity summary. The update generation module 208 can be further configured to generate a social networking platform update from the activity summary, which in turn can be posted as the social update on at least one social page of the user that has been linked with the update generation module 208. In an embodiment, the update generation module 208 posts the update automatically, as soon as the update has been generated. In another embodiment, the update generation module 208 presents the update to the user for review, with the required edit permissions, and posts the update after receiving approval from the user.
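The two posting behaviours of the update generation module 208 (automatic posting, or posting only after user review and approval) might be sketched as follows; the ActivitySummary container and the post/review callbacks are assumptions, since the embodiments do not prescribe a particular posting interface.

# Illustrative sketch only: combining the text summary and graph, then posting.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class ActivitySummary:
    text_summary: str
    graph: List        # activity-emotion-time graph entries

def generate_and_post_update(text_summary: str, graph: List,
                             auto_post: bool,
                             post_fn: Callable[[ActivitySummary], None],
                             review_fn: Callable[[ActivitySummary], Optional[ActivitySummary]]) -> None:
    """Combine the text summary and graph, then post automatically or after user review."""
    summary = ActivitySummary(text_summary, graph)
    if auto_post:
        post_fn(summary)                 # post as soon as the update is generated
    else:
        approved = review_fn(summary)    # user may edit; None means "do not post"
        if approved is not None:
            post_fn(approved)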
[0029] FIG. 3 illustrates a flow diagram which shows the steps involved in the process of generating an update for a social networking platform, using the auto updation system, as disclosed in the embodiments herein. The auto updation system 100 initially collects at least one real-time input using the data collection module 102. If more than one wearable gadget is used in the same event, the data collection modules 102 of the wearable gadgets can be synchronized, so that the auto update generation module 101 can access the data collected by all the data collection modules 102, from the same or different storage locations, for the purpose of generating the activity update.
[0030] Further, by processing the input, the auto update generation module 101 in the auto updation system 100 identifies (302) at least one participant of the event, based on the data stored in the contact database. In a scenario in which the auto updation system 100 is associated with a wearable gadget used by at least one participant, the participants of the event can be all the people in the field of view of the wearable gadget.
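Participant identification against the contact database could, for instance, be sketched as a nearest-match search over face representations of the people in the field of view; the embedding representation and the distance threshold below are assumptions, and any suitable recognition algorithm may be substituted.

# Illustrative sketch only: matching faces in the field of view against the contact database.
from typing import Dict, List

def identify_participants(face_embeddings: List[List[float]],
                          contact_db: Dict[str, List[float]],
                          max_distance: float = 0.6) -> List[str]:
    """Match each detected face against the contact database and return the names found."""
    def euclidean(a: List[float], b: List[float]) -> float:
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    participants: List[str] = []
    for face in face_embeddings:
        best_name, best_distance = None, max_distance
        for name, reference in contact_db.items():
            d = euclidean(face, reference)
            if d < best_distance:
                best_name, best_distance = name, d
        if best_name is not None and best_name not in participants:
            participants.append(best_name)   # everyone in the field of view with a match
    return participants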
[0031] The auto update generation module 101 further identifies (304) at least one activity being performed at the event, based on the data in the activity database. In an embodiment, the auto update generation module 101 identifies activities being performed that are specific to the event. For example, assume that the event is a teacher teaching a student; the auto update generation module 101 then identifies ‘teaching’ as the activity. In another embodiment, the auto update generation module 101 identifies activities being performed that are specific to the participants of the event. For example, in the above-mentioned teacher-student scenario, the auto update generation module 101 identifies ‘teaching’ as the activity specific to the teacher, and ‘learning’ as the activity specific to the student.
[0032] The auto update generation module 101 further generates (306) an activity summary of the event, based on at least one of parameters such as, but not limited to, the identified participants and activities. The auto update generation module 101 further uses the audio/video/image data collected by the data collection module 102.
[0033] Further, from the activity summary, a social media platform update can be generated (308). In this process, the auto update generation module 101 can convert the activity summary to at least one suitable format that is supported and/or recognized by the social media platform in which the update needs to be posted. In an embodiment, the auto update generation module 101 automatically posts the generated social media platform update, to selected social media websites, as pre-configured by the user. In another embodiment, the auto update generation module 101 provides an option for the user to review the generated status update, and make changes if required, using suitable editing permissions and options. Further, the user may choose to post or not post on selected social media websites.
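Conversion of the activity summary to a format supported by each selected platform is left open by the embodiments; as a purely hypothetical example, a per-platform length limit could be applied to the text portion before posting, as sketched below (the platform names and limits here are invented for illustration).

# Illustrative sketch only: adapting the text portion of the update to a target platform.
from typing import Dict

# Hypothetical per-platform limits; real limits depend on the platform's own rules.
PLATFORM_TEXT_LIMITS: Dict[str, int] = {"platform_a": 280, "platform_b": 5000}

def format_for_platform(text_summary: str, platform: str, default_limit: int = 1000) -> str:
    """Truncate the text summary to the length accepted by the target platform."""
    limit = PLATFORM_TEXT_LIMITS.get(platform, default_limit)
    if len(text_summary) <= limit:
        return text_summary
    return text_summary[: limit - 3] + "..."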
[0034] The various actions in method 300 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 3 may be omitted.
[0035] FIG. 4 illustrates a flow diagram which shows steps involved in the process of generating an activity summary, using the auto updation system, as disclosed in the embodiments herein.
[0036] After identifying the participants and the activity, the auto update generation module 101 further identifies (402) at least one emotional parameter of at least one identified participant of the event. For example, in the teacher-student scenario mentioned above, the teacher and student may be smiling/laughing at various time instances, or the teacher may be angry and shouting at a different time instance. The auto update generation module 101 further collects information pertaining to the location where the event took place. The auto update generation module 101 also collects (404) time stamp information pertaining to the event and each of the identified activities. In an embodiment, the time stamp information is recorded by the data collection module 102, as and when the data is collected.
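Recording of the time stamp (and location) information as and when the data is collected, per step (404), could be sketched as below; the record layout is an assumption for illustration.

# Illustrative sketch only: the data collection module 102 time-stamping each captured input.
import time
from dataclasses import dataclass, field

@dataclass
class CollectedInput:
    kind: str            # "audio", "video", "image", "text", ...
    payload: bytes       # raw captured data
    location: str        # from the location identifier module 102.c
    timestamp: float = field(default_factory=time.time)   # recorded at capture time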
[0037] Further, an activity-emotion-time graph is generated (406) by the graph generation module 206, by picking different images and/or video and/or audio corresponding to different activities, arranging them as per the time stamp, and annotating them with the activity identified by the activity identification module 204. In various embodiments, the data for generating the activity-emotion-time graph is picked in a random order, or as per a sequence pre-defined by the user. In this process, an activity-time graph may be generated in which the activities are arranged according to the time stamp. Further, an emotion-time graph is generated in which the identified emotions are arranged according to the time stamp. Further, the graph generation module 206 combines the activity-time graph and the emotion-time graph, according to the timestamps, to form the activity-emotion-time graph.
[0038] Further, a text summary is generated (408) by the text summary generation module 207, based on the data received from the contact check module 203, the activity identification module 204, and the emotion identification module 205, wherein the text summary represents the data pertaining to the event in a textual format. The text summary generation module 207 creates the textual summary by inferring the outputs of the said modules; for example, by concatenating the contacts from the contact check module 203, identifying the dominant emotion of each participant from the emotion identification module 205 and presenting the emotion as a word such as happy, tense, anxious, and so on, presenting the activity from the activity identification module 204 as a word or words such as drinking coffee, laughing, learning craft, learning exercise, and so on, identifying the location from the location identifier module 102.c, and combining the textual data into a summarized statement. Examples of the text summary that can be generated by the text summary generation module are: "Haresh feeling happy with Siddhartha and Vivek drinking coffee at Cafe Coffee Bar, Marathalli, Bangalore." and "Sreeja learning to walk, Siddhartha feeling happy and tense!".
[0039] Further, a consolidated summary of the event is generated (410) by merging the activity-emotion-time graph, and the text summary. The consolidated summary serves as the activity summary. The various actions in method 400 may be performed in the order presented, in a different order or simultaneously. Further, in some embodiments, some actions listed in FIG. 4 may be omitted.
Use-Case Scenario 1:
[0040] In scenario 1, assume that four friends are chatting at a restaurant, having coffee. One of them is wearing a smart glass, and the smart glass is equipped with at least one data collection module to collect at least one input such as, but not limited to, an audio, video, text, image, location, and time. The smart glass, by processing at least one of the collected inputs, identifies the participants of the event and the emotional state of at least one of the participants, and also identifies at least one activity (for example, drinking coffee) being performed by the participants. The smart glass further generates a summary of the event, based on the identified participants, activities, emotions, and other parameters such as time, location, and so on. The user can be given an option to prompt the smart glass to add media files as part of the summary. The summary can be posted as a status update on their social networking websites. Assuming that the participants of the event are Aishwarya, Kanmani, Rakesh and Som, the summary would be:
“Aishwarya, Kanmani, Rakesh and Som having a great time at Dominos near Yelachenahalli”
Use-Case Scenario 2:
[0041] In this scenario, a mom teaches craft work to her kid. The smart glass that the mom is wearing captures different inputs such as, but not limited to, audio, video, location, time, and image. Further, by processing the inputs, the smart glass identifies the activity as teaching, and further identifies sub-activities and the corresponding timestamp information. Here, “sub-activities” refer to steps/different stages of the activity (learning, colouring, finishing touches, and so on). The smart glass may detect the age of the kid by referring to data in any social networking profile of the mom, and the summary would be:
“Teaching craft to my 4 year old kid.. Feeling awesome ☺”
[0042] The smart glass, while presenting the summary to the user (in this scenario, the mom), may also allow the user to opt to add media files that represent various stages of the event, along with the corresponding time stamps. A sample timeline is depicted below:

Learning: 10.00 AM
Colouring: 10.20 AM
Finishing touches: 10.55 AM
Happy: 11.00 AM
Proud: 11.05 AM
[0043] The embodiments disclosed herein can be implemented through at least one software program running on at least one hardware device and performing network management functions to control the network elements. The network elements shown in Fig. 1 include blocks which can be at least one of a hardware device, or a combination of hardware device and software module.
[0044] The embodiments disclosed herein specify a system for automatically generating activity updates for social media websites. The mechanism allows real-time generation of activity updates and provides a system therefor. Therefore, it is understood that the scope of protection is extended to such a system and, by extension, to a computer readable means having a message therein, said computer readable means containing a program code for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The method is implemented in a preferred embodiment using the system together with a software program written in, for example, Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or software modules being executed on at least one hardware device. The hardware device can be any kind of device which can be programmed, including, for example, any kind of computer like a server or a personal computer, or the like, or any combination thereof, for example, one processor and two FPGAs. The device may also include means which could be, for example, hardware means like an ASIC, or a combination of hardware and software means, such as an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means are at least one hardware means or at least one hardware-cum-software means. The method embodiments described herein could be implemented in pure hardware or partly in hardware and partly in software. Alternatively, the embodiments may be implemented on different hardware devices, for example, using a plurality of CPUs.
[0045] The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of preferred embodiments, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims as described herein.

CLAIMS
What is claimed is:
1. A method for generating an automatic update for a social networking platform from a wearable device, said method comprising:
identifying at least one person in a field of view of said wearable device, by an activity update generation module;
identifying at least one activity being performed by said at least one person, by said activity update generation module;
generating an event summary for said at least one person and said at least one activity, by said activity update generation module; and
generating a social networking platform update for said generated event summary, by said activity update generation module.
2. The method as claimed in claim 1, wherein said at least one person is identified based on at least one of an audio, video, and sensor inputs, wherein said at least one audio, video, and sensor inputs is compared with a contact database.
3. The method as claimed in claim 1, wherein said at least one activity is identified based on at least one of an audio, video, and sensor inputs, wherein said at least one audio, video, and sensor inputs is compared with an activity database.
4. The method as claimed in claim 1, wherein generating said event summary further comprises of:
identifying at least one emotional parameter of said at least one person, by said activity update generation module;
identifying location of said at least one person, by said activity update generation module;
identifying timestamp pertaining to occurrence of at least one of said identified activity and said emotional parameter, by said activity update generation module;
generating an activity-emotion-time graph, based on at least one of said activity, emotional parameter, location, and timestamp, by said activity update generation module;
generating a text summary, based on at least one of said activity, emotional parameter, location, and timestamp, by said activity update generation module; and
generating a social update, based on said activity-emotion-time graph, and said text summary, by said activity update generation module.
5. The method as claimed in claim 4, wherein said at least one emotional parameter of said at least one person is identified based on at least one of an audio, video, and sensor inputs, wherein said at least one audio, video, and sensor inputs is compared with an emotion database.
6. The method as claimed in claim 4, wherein said social update is at least one of a text, an audio, a video, an image, and a combination thereof.
7. A system for generating an automatic update for a social networking platform from a wearable device, said system configured for:
identifying at least one person in a field of view of said wearable device, by an activity update generation module;
identifying at least one activity being performed by said at least one person, by said activity update generation module;
generating an event summary for said at least one person and said at least one activity, by said activity update generation module; and
generating a social networking platform update for said generated event summary, by said activity update generation module.
8. The system as claimed in claim 7, wherein said activity update generation module is further configured to identify said at least one person by comparing at least one of an audio, video, and sensor input, pertaining to said event, with a contact database, by a contact check module of said activity update generation module.
9. The system as claimed in claim 7, wherein said activity update generation module is further configured to identify said at least one activity by comparing at least one of an audio, video, and sensor input, pertaining to said event, with an activity database, by an activity identification module of said activity update generation module.
10. The system as claimed in claim 7, wherein said activity update generation module is further configured to generate said event summary by:
identifying at least one emotional parameter of said at least one person, by an emotion identification module of said activity update generation module;
identifying location of said at least one person, based on at least one input from a location identifier module, by said activity update generation module;
identifying timestamp pertaining to occurrence of at least one of said identified activity and said emotional parameter, by said activity update generation module;
generating an activity-emotion-time graph, based on at least one of said activity, emotional parameter, location, and timestamp, by a graph generation module of said activity update generation module;
generating a text summary, based on at least one of said activity, emotional parameter, location, and timestamp, by a text summary generation module of said activity update generation module; and
generating a social update, based on said activity-emotion-time graph, and said text summary, by an update generation module of said activity update generation module.
11. The system as claimed in claim 10, wherein said emotion identification module is further configured to identify said at least one emotional parameter of said at least one person, based on at least one of an audio, video, and sensor inputs, wherein said at least one audio, video, and sensor inputs is compared with an emotion database.
12. The system as claimed in claim 10, wherein said update generation module is configured to generate said social update as at least one of a text, an audio, a video, an image, and a combination thereof.
Date: 9th June 2015 Signature:
Kalyan Chakravarthy
ABSTRACT

Method and system for automatically generating activity updates. The system collects at least one of an audio, video, text, and/or image data pertaining to an event. The system further processes the data and identifies participants, activities, emotions, location, and time at which each of the identified activities took place. The system further generates an event summary based on the collected data. Further, a social networking platform update is generated from the event summary, which can be posted on selected social media websites. The system also gives options for the user to review and edit the summary, if required, before posting on the social media websites.

FIG. 3

Documents

Application Documents

# Name Date
1 Samsung_SRIB-20140610-013_CS_FORM 2.pdf 2015-06-24
2 Form5.pdf 2015-06-24
3 FORM3.pdf 2015-06-24
4 Drawings_CS.pdf 2015-06-24
5 2904-CHE-2015-FORM-26 [15-03-2018(online)].pdf 2018-03-15
6 2904-CHE-2015-FORM-26 [16-03-2018(online)].pdf 2018-03-16
7 2904-CHE-2015-FER.pdf 2020-02-18
8 2904-CHE-2015-FER_SER_REPLY [17-08-2020(online)].pdf 2020-08-17
9 2904-CHE-2015-ABSTRACT [17-08-2020(online)].pdf 2020-08-17
10 2904-CHE-2015-CLAIMS [17-08-2020(online)].pdf 2020-08-17
11 2904-CHE-2015-DRAWING [17-08-2020(online)].pdf 2020-08-17
12 2904-CHE-2015-CORRESPONDENCE [17-08-2020(online)].pdf 2020-08-17
13 2904-CHE-2015-OTHERS [17-08-2020(online)].pdf 2020-08-17
14 2904-CHE-2015-PETITION UNDER RULE 137 [17-08-2020(online)].pdf 2020-08-17
15 2904-CHE-2015-RELEVANT DOCUMENTS [17-08-2020(online)].pdf 2020-08-17
16 2904-CHE-2015-US(14)-HearingNotice-(HearingDate-29-12-2022).pdf 2022-11-29
17 2904-CHE-2015-Correspondence to notify the Controller [23-12-2022(online)].pdf 2022-12-23
18 2904-CHE-2015-Annexure [23-12-2022(online)].pdf 2022-12-23
19 2904-CHE-2015-Written submissions and relevant documents [13-01-2023(online)].pdf 2023-01-13
20 2904-CHE-2015-Annexure [13-01-2023(online)].pdf 2023-01-13
21 2904-CHE-2015-PatentCertificate25-05-2023.pdf 2023-05-25
22 2904-CHE-2015-IntimationOfGrant25-05-2023.pdf 2023-05-25

Search Strategy

1 TPOsearch_17-02-2020.pdf

ERegister / Renewals

3rd: 23 Aug 2023 (From 10/06/2017 - To 10/06/2018)
4th: 23 Aug 2023 (From 10/06/2018 - To 10/06/2019)
5th: 23 Aug 2023 (From 10/06/2019 - To 10/06/2020)
6th: 23 Aug 2023 (From 10/06/2020 - To 10/06/2021)
7th: 23 Aug 2023 (From 10/06/2021 - To 10/06/2022)
8th: 23 Aug 2023 (From 10/06/2022 - To 10/06/2023)
9th: 23 Aug 2023 (From 10/06/2023 - To 10/06/2024)
10th: 10 Jun 2024 (From 10/06/2024 - To 10/06/2025)
11th: 13 May 2025 (From 10/06/2025 - To 10/06/2026)