
Creating And Propagating Annotated Information

Abstract: Content may be collected, annotated, and propagated in a unified process. In one example, a mobile device, such as a smart phone, is used to collect information. The information may be text, video, audio, etc. The information may be sent to a reaction service, which may return an annotation of the information. The annotation may be attached to the information to create an annotated document. The annotated document may be communicated to other users. Additionally, the annotated document may be stored in a way that associates the annotated document with the user who created or captured the information. The ability to capture information, obtain annotations to the information, and propagate the annotated information may facilitate the creation of social media, such as social network postings or online photo albums.


Patent Information

Application #
Filing Date
03 September 2012
Publication Number
10/2014
Publication Type
INA
Invention Field
COMPUTER SCIENCE
Status
Email
Parent Application
Patent Number
Legal Status
Grant Date
2023-03-21
Renewal Date

Applicants

MICROSOFT CORPORATION
One Microsoft Way, Redmond, Washington 98052-6399

Inventors

1. AGUERA Y ARCAS Blaise H.
c/o Microsoft Corporation, LCA International Patents, One Microsoft Way, Redmond, Washington 98052-6399
2. FYNN Scott V.
c/o Microsoft Corporation, LCA International Patents, One Microsoft Way, Redmond, Washington 98052-6399
3. MACLAURIN Matthew Bret
c/o Microsoft Corporation, LCA International Patents, One Microsoft Way, Redmond, Washington 98052-6399
4. BENNETT Eric Paul
c/o Microsoft Corporation, LCA International Patents, One Microsoft Way, Redmond, Washington 98052-6399
5. COLANDO Christian James
c/o Microsoft Corporation, LCA International Patents, One Microsoft Way, Redmond, Washington 98052-6399

Specification

CREATING AND PROPAGATING ANNOTATED INFORMATION
BACKGROUND
[0001] Computers, smart phones, and other types of devices are used to
perform various types of actions. Some of these actions include initiating searches,
collecting and organizing information, and sending and receiving messages. Additionally,
many devices are multi-function devices - e.g., a smart phone may function as a voice
and data communication device, and as a camera. The increasing number of functions
that can be implemented on one device, and the increasing availability of connectivity to
these devices, allows people to perform many different functions using one device. For
example, in the past, posting a photo to a social network involved taking the photo with a
camera and then uploading it to the social network using a computer. Now, a person may
take a picture on a smart phone, and then may post the picture to his social networking
account from the phone.
[0002] While people often perform a sequence of actions that are related to
each other (e.g., doing a search on a smart phone, and then e-mailing others the results
of the search), the platforms on which people perform these related actions often treat
the actions as being disjoint. A person can take a photo, perform an image search related
to the photo, and post to a social network about a photo, all from a smart phone.
However, the person who performs these actions typically views the different actions as
separate events, often involving separate pieces of software. Part of the reason for which
these actions are viewed as separate is that the local and remote software infrastructure
does not support linking these actions together. Different actions can be part of a single
data flow. For example, searching for a restaurant and then writing a social network post
about the restaurant are part of a single sequence of actions concerning a single concept
(i.e., the restaurant). But the software that is used to perform these different actions
often fails to support the linkage between these actions.
SUMMARY
[0003] The creation, annotation, and propagation of information may be
performed as part of a unified process. Such a process may facilitate the flow of
information as social media.
[0004] Carrying out a process to create, annotate, and propagate data may begin
with the creation of a document. A document may constitute any type of information,
such as text, images, sound, etc. For example, a two- or three-word query may be a small
text document. Or, a digital photograph may be an image document. Once such a
document is created, it may be sent to a reaction service, which reacts to the document
in some manner. For example, the reaction service may attempt to provide information
relating to the document. A search engine that reacts to a query may be one facet of a
reaction service. However, a reaction service may take other types of actions. For
example, a reaction service may react to a photograph by attempting to identify a person
or object in the photograph. Or, a reaction service may react to a sound recording by
attempting to determine whether the recording is of a known song. Once the reaction
service reacts to the document, it provides information in response.
[0005] The information that is provided in response to the document may be
viewed as annotations to the document. For example, if one enters a text query such as
"Moroccan food", any search results (e.g., the names, addresses, and Uniform Resource
Locators ("URLs") of one or more Moroccan restaurants) may be viewed as annotations
to the query. Or, if the document is an image of a statue, then the reaction service might
identify the statue shown in the image, so the name of the statue may be an annotation.
The document and its annotations may form part of an annotated document.
[0006] A user may use the annotated document in various ways. For example,
the user may decide to attach some of the annotations to the document as metadata.
Thus, if a user takes a photo of a famous statue, the reaction service may provide the
name of the statue. That name may then become part of the metadata for the photo.
Additionally, the user may decide to propagate the document and/or some or all of its
annotations in some manner. For example, once the photo mentioned above has been
annotated with the name of the statue in the photo, that photo and its annotation can be
sent to an online photo album. Or, the user could make the photo and its annotation part
of a status post in a social network. Software on a user's device may facilitate the process
of obtaining a reaction to a document, determining what annotations to associate with
the document, and propagating the document to other places.
[0007] In one example, the process of creating a document and obtaining a
reaction to that document takes place on a mobile device, such as a smart phone or
handheld computer. Software installed on the mobile device may help the user to obtain
a reaction to data that has been created on the device. For example, the provider of a
reaction service might provide an application that can be installed on a phone. If the user
takes a photo, the application may provide an on-screen button that the user can click to
send the photo to the reaction service, and to obtain annotations to the photo from the
reaction service. The application could provide similar capabilities for text, sound, or any
other type of information. Moreover, the application may facilitate the process of
propagating or communicating the document and its annotations. For example, the
application could create drafts of social network posts or e-mails for the user's approval.
Or, the application could send annotated photos to online photo albums. In this sense,
the application may facilitate the creation of social media using both information that is
captured on the user's device (the document), and information that is provided by a
remote service (the annotations).
[0008] This Summary is provided to introduce a selection of concepts in a
simplified form that are further described below in the Detailed Description. This
Summary is not intended to identify key features or essential features of the claimed
subject matter, nor is it intended to be used to limit the scope of the claimed subject
matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] FIG. 1 is a block diagram of an example scenario in which a device may
collect information, and in which a reaction service may react to the information.
[0010] FIG. 2 is a block diagram of some example details of documents,
annotations, and how annotated documents may be used.
[0011] FIG. 3 is a flow diagram of an example process in which documents may
be created, annotated, and/or propagated.
[0012] FIGS. 4-6 are block diagrams of example scenarios in which social media
may be created on a device.
[0013] FIG. 7 is a block diagram of example components that may be used in
connection with implementations of the subject matter described herein.
DETAILED DESCRIPTION
[0014] Computers and other devices are often used to perform actions such as
initiating searches, collecting and organizing information, and sending and receiving
messages. People type queries into search engines to request information. They take
pictures with smart phones, or upload pictures to their computers from standalone
cameras. They capture and transmit audio information with microphones. They send e-mail,
post information to blogs or social networks, and post photos to photo-sharing
sites. Normally, these actions are viewed as being conceptually separate. Many people
consider performing a search as being an entirely separate action from posting to a social
network, or taking a picture, or recording a sound. In some cases, these views are
reflected in, or reinforced by, the use of different devices to perform the actions. For
example, a person might use his or her desktop computer to organize albums of photos
uploaded from a standalone camera. That same person might use a browser on a smart
phone to visit a search engine in order to find out information about an object that appears
in one of the photos.
[0015] However, trends in computing suggest ways to unify many of the
actions that people perform on their devices. One trend is that small devices are more
capable than they have been in the past. They continue to become more capable, and
connectivity of these devices continues to improve. Wireless phones and music players
often have cameras, large amounts of memory and storage, and enough processing
power to run significant operating systems and applications. Connectivity between these
devices and the rest of the world is faster and cheaper than it has been in the past.
Cellular networks now support high speed data transmission, and many devices can
switch between cellular communication and faster and cheaper WiFi networks, when
WiFi networks are available. Many devices have cameras whose quality rivals that of
standalone cameras. For these reasons, wireless phones and other small devices may
become the principal type of devices that people use to capture information and to
interact with the world.
[0016] If small devices are the focal point for users to interact with the world,
this fact suggests new paradigms of how to view information, and new systems and
techniques that can be built around those paradigms. In one example, it becomes
convenient to think of any information that can be captured on the device as a kind of
document, which can be reacted to by a remote service. Moreover, it becomes
convenient to think of the reaction itself as a kind of annotation to the document. These
documents and their annotations can be viewed as a form of social media. These social
media can be associated with the users who create them, and can be communicated to
others, in the same way as other social media.
[0017] For example, a text query to a search engine can be viewed as a small
document (possibly a two- or three-word document), the process of generating search
results can be viewed as a reaction to that document, and the results themselves can be
viewed as annotations to that document. This set of analogies simply applies labels to the
actions that are performed in the course of carrying out a search. But these analogies
suggest ways to use the information that is contained in a search, as well as information
about the circumstances surrounding the search. For example, if a person searches for
"Moroccan food" on his mobile phone at six in the evening from downtown Seattle (as
determined by the phone's clock and location technology), then it can be inferred that
the person wants to eat dinner at a Moroccan restaurant in Seattle. The fact that the
search has taken place, and its results, can be packaged as a social network post. For
example, in addition to returning a result like "Marrakesh Restaurant", this result can
also be packaged in the form of a message like "Tim is eating at Marrakesh Restaurant in
Seattle", which can be posted to a social network, placed in an on-line diary of
restaurants at which Tim has eaten, or can be used in any other way. In other words, the
fact that Tim is searching for a Moroccan restaurant in Seattle is combined with some
other information that comes from a remote reaction service (which may be located in
"the cloud"), and that combined information may be propagated, in whole or in part, as a
piece of social media.
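The packaging described above can be sketched in Python. This is an illustrative sketch only: the function name, the evening-hours rule, and the message phrasing are assumptions made for the example, not part of the specification.

```python
from datetime import datetime

def package_search_as_post(user, query, top_result, city, when):
    """Package a search, its top annotation, and the query's context
    (time and city) as a shareable social-media message."""
    if 17 <= when.hour <= 21:  # an evening query suggests dinner plans
        return f"{user} is eating at {top_result} in {city}"
    return f"{user} searched for {query} in {city}"

post = package_search_as_post("Tim", "Moroccan food", "Marrakesh Restaurant",
                              "Seattle", datetime(2012, 9, 3, 18, 0))
# post == "Tim is eating at Marrakesh Restaurant in Seattle"
```

The same combined message could then be posted to a social network, appended to an on-line diary, or used in any other way.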
[0018] In addition to searches, other types of interactions with a small device
can be used in ways similar to that described above. For example, a user could use the
camera on a smart phone to take a photo. The photo itself, along with information
concerning where and when the photo was taken, could be sent to a reaction service.
The user might send the photo as a type of query in which the user asks the reaction
service to identify the object in the photo, or software on the device might be configured
to ask the reaction service to provide any information it can whenever any data is
captured by the device. The reaction service could then react to the image and other
information by identifying the object in the photo. (E.g., the service could respond by
saying, "This is a picture of the Fremont Troll in Seattle," which the service might
determine based on the location at which the photo was taken, and by comparing the
captured image with other pictures of the Fremont Troll.) In this sense, the photo is a
document, and the identification of the object in the photo is an annotation (or part of
an annotation) to the photo. The photo, its annotation(s), and/or information based on
the annotations can then be propagated and/or stored. For example, the photo, and the
annotation identifying the photo, can be sent to an on-line photo-sharing service for
storage in one of the user's photo albums. Or, an e-mail or social networking post
concerning the photo (e.g., "Tim is in Seattle and found the Fremont Troll") can
be created and sent through the appropriate communication channels.
[0019] One way to implement the foregoing scenarios is to install a type of
client software on a device that allows users to request a reaction to any type of input.
For example, an information service provider might operate a type of service that stores
a database of indexed information, where the service can use the information in the
database to react to various types of input. The service might run server-side programs
that receive a piece of input and that canvass the database to determine what is known
about the input. A search engine is a limited example of this type of service, in the sense
that search engines contain text indices on text data, image data, video data, etc., which
can be used to react to text queries. However, a more general reaction service could take
an arbitrary piece of data (e.g., text, image, video, audio, etc.), and could evaluate the
data in any appropriate manner to determine what is known about the data. The
reaction service can then provide its reaction. An information service provider that
provides this type of service may provide a client application to be installed on mobile
phones and other types of device. When a user collects any information on the device
(whether through keyboard input, camera input, microphone input, etc.), the user may
invoke the client application on that input. The client application may then send the
input, and possibly any related information - such as the time the input was captured, or
the location of the device at the time the input was captured - to the reaction service.
The client application may then combine the original input and the reaction into an
annotated document. The client application may further facilitate the storage and/or
communication of the original input and annotations collected from outside the device.
For example, the client application could be used to store a photo in a photo album, or to
compose and send a social network post, as in the examples described above.
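The client-side flow just described can be sketched in Python. The field names and helper names below are illustrative assumptions; they do not appear in the specification.

```python
import time

def build_document(payload, kind, lat, lon):
    """Bundle captured input with the related context (capture time,
    device location) that the client sends to the reaction service."""
    return {
        "kind": kind,                          # "text", "image", "audio", ...
        "payload": payload,
        "captured_at": time.time(),            # from the device clock
        "location": {"lat": lat, "lon": lon},  # GPS or cell triangulation
    }

def attach_annotations(document, reaction):
    """Combine the original input and the service's reaction into an
    annotated document."""
    return {"document": document, "annotations": list(reaction)}

doc = build_document("moroccan food", "text", 47.61, -122.33)
annotated = attach_annotations(doc, ["Marrakesh Restaurant"])
```

The annotated dictionary could then be stored locally, posted, or e-mailed, as the examples above describe.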
[0020] Turning now to the drawings, FIG. 1 shows an example scenario in
which a device may collect information, and in which a reaction service may react to the
information. In the scenario shown, device 102 is used to collect information, such as
text, images, audio, etc. Device 102 may be a wireless telephone, a music player, a video
player, a handheld computer, or may be a device that implements any combination of
these functions. In one example, device 102 is a "smart phone" that performs various
voice and data communication functions and that also runs various types of software
applications. However, device 102 could be any type of device.
[0021] Device 102 may contain various types of components. Some of these
components are shown in FIG. 1. Screen 104 may display text and images, and may also
have tactile sensing capabilities to allow screen 104 to function as an input device.
Keyboard 106, or some other type of user input mechanism, may allow a user to input
text. Keyboard 106 may be implemented as buttons on device 102, or may be
implemented as a "virtual keyboard" on screen 104, if screen 104 provides tactile-sensing
capabilities. Microphone 108 captures audio information. Camera 110 captures visual
information, and may be used to capture still and/or moving images. Speaker 112
provides audio output. Device 102 may have components that allow it to communicate
with the world outside of device 102. For example, device 102 may be equipped with a
cellular radio 116 and/or a WiFi radio 114. Cellular radio 116 allows device 102 to
communicate with cellular telephone networks. WiFi radio 114 allows device 102 to
communicate with a wireless router or wireless access point, which may allow device 102
to communicate through networks such as the internet. Device 102 might have one type
of radio but not the other. Or, in another example, device 102 has both kinds of radios
(and possibly other types of communication connections, such as a Bluetooth radio, an
Ethernet port, etc.), and may switch between different types of communication
depending on what communication facilities are available.
[0022] Device 102 may communicate with reaction service 118. Reaction
service 118, as described above, may receive some type of document 120 (e.g., text,
images, audio, etc.), may attempt to determine what is known about that data, and may
react to that data by providing some type of annotation 122 to the data. For example,
reaction service 118 may provide a text search engine 124 that identifies text documents,
images, audio files, etc., that relate in some way to a text query. Reaction service 118
may provide an image comparator 126 that compares an input image to known images,
or an audio comparator 128 that compares an input sound to known sounds. Reaction
service 118 may contain database 130, which contains indices of various types of
information in order to allow text search engine 124, image comparator 126, and audio
comparator 128 to react to document 120. Thus, in one example, document 120 contains
a text query and the annotations that are sent in reaction to the text query are a set of
search results (e.g., text documents, images, audio files, etc., that are in some way
related to the text query). In other examples, document 120 represents an image or a
sound, and the annotations that are sent in reaction to the document are information
about the image or sound, such as an identification of what or who appears to be shown
in the image, or the name of a song or other performance that the sound appears to
come from. These are some examples of data that could be provided to reaction service
118. However, in general, any type of data could be provided to reaction service 118, and
reaction service 118 could react to that data in any manner.
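The service-side dispatch described in this paragraph can be sketched in Python. The handlers below are stand-ins for text search engine 124, image comparator 126, and audio comparator 128; the tiny in-memory index and its entries are illustrative assumptions, not actual service behavior.

```python
def text_search(query):
    # Stand-in for a lookup in the service's indexed database (130).
    index = {"moroccan food": ["Marrakesh Restaurant"]}
    return index.get(query.lower(), [])

def image_compare(image_bytes):
    # Stand-in for comparing an input image against known images.
    return ["Oval with Points"]

def audio_compare(audio_bytes):
    # Stand-in for matching an input sound against known recordings.
    return []

def react(document):
    """Route a document to the handler for its media type, mirroring how
    the reaction service applies its engine or comparators to input."""
    handlers = {"text": text_search, "image": image_compare,
                "audio": audio_compare}
    handler = handlers.get(document["kind"])
    return handler(document["payload"]) if handler else []

annotations = react({"kind": "text", "payload": "Moroccan food"})
# annotations == ["Marrakesh Restaurant"]
```

A fuller service would evaluate the payload against its indices rather than a literal dictionary, but the routing-by-media-type shape is the same.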
[0023] Device 102 may have some computing capability. One type of
computing capability is the ability to acquire and run applications. In the example of FIG.
1, device 102 is shown as running client application 132. Client application 132 is an
application that helps the user of device 102 to use reaction service 118. Document 120
may be captured on device 102, where document 120 could take any form (e.g., text,
images, audio, etc.). Upon receiving a user instruction, client application 132 may send
document 120 to reaction service 118. For example, client application 132 might provide
a user interface 134, which contains a search box 136 and a search button 138 (which, in
this example, is indicated by a magnifying glass symbol). When a user enters text into
search box 136 and clicks search button 138, the text in search box 136 becomes
document 120, and client application 132 sends this data to reaction service 118. As
another example, a user might have used camera 110 on device 102 to capture image
140 (which, in this example, is an image of the "Oval with Points" sculpture). In this case,
device 102 may display the captured image on screen 104. Client application 132 may
display search button 138 with the image, so that the user's clicking search button 138
causes image 140 to be sent to reaction service 118. In this case, image 140 becomes
document 120.
[0024] In response to sending document 120 to reaction service 118, client
application 132 may receive, from reaction service 118, an annotation 122 to document
120. As described above, annotation 122 might be a set of search results, an
identification of an image, an identification of a sound, or any other appropriate type of
information. Client application 132 may present annotation 122 to a user, but may also
help the user to take some further action in response to the annotation. For example,
client application 132 might propose a social network status post that is related to the
data and/or its annotation (e.g., "Tim is eating Moroccan food", or "Tim found the
Fremont Troll statue"). Or, client application 132 might compose an e-mail, post an
image to a photo-sharing site, or provide a link to purchase a commercially-available
recording of the song that reaction service 118 has identified. Client application 132
might also allow a user to look at annotations and to provide an indication of which
annotations the user wants to associate with document 120 as metadata.
[0025] FIG. 2 shows some example details of documents, annotations, and
how annotated documents may be used. User 202 may be a person who carries device
102. As noted above, device 102 may be a wireless telephone, music player, handheld
computer, etc. Moreover, device 102 may have input devices such as keyboard 106,
camera 110, and microphone 108. Using one or more of these input devices, user 202
may create document 120, which could be text, an image, a sound, or any combination of
these or other components. As discussed above, it is convenient to think of a document
as encompassing any type of content that can be reacted to in some manner. Thus, even
a small amount of text (e.g., a one- or two-word search query 204) is a document. Still
image 206, video 208, or sound recording 210 are other examples of documents.
[0026] Device 102 may send document 120 to reaction service 118. Device 102
may use an application (e.g., client application 132, shown in FIG. 1) to send document
120 to reaction service 118, but document 120 could be sent in any manner. For
example, user 202 might simply open a browser on device 102 and visit a web site of
reaction service 118, thereby sending document 120 (e.g., a search query) to reaction
service 118 through that web interface.
[0027] Reaction service 118 reacts to document 120 in some manner - e.g., by
performing a search, identifying an image or audio clip, etc. - and generates annotation
122 based on that reaction. For example, if the reaction is to perform a search, then
annotation 122 may contain one or more search results. Or, if the reaction is to identify
an image, then annotation 122 may be a text string that identifies an object or person in
the image.
[0028] When annotations are returned to device 102, an annotated document
212 may be produced. Annotated document 212 may be generated by a client
application (e.g., client application 132, shown in FIG. 1), but could be produced by any
component(s). Annotated document 212 may contain the original document 120 and
annotation 122. Annotations may contain, for example, a set of search results (block
214), an identification of an image or sound (block 216), or some other type of reaction
(block 218).
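The structure of annotated document 212 - the original document together with its annotations - can be sketched as a small Python dataclass. The class and field names are illustrative assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class AnnotatedDocument:
    """An original document (120) together with its annotations (122)."""
    document: object                 # text, image bytes, audio bytes, ...
    annotations: list = field(default_factory=list)

# A query annotated with a search result (block 214) ...
query = AnnotatedDocument("moroccan food", ["Marrakesh Restaurant"])

# ... and a photo annotated with an identification (block 216).
photo = AnnotatedDocument(b"<image bytes>")
photo.annotations.append("Oval with Points")
```

Either instance could then be propagated or stored in association with its creator, as the following paragraphs describe.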
[0029] Once the annotated document 212 is created, various actions can be
performed with respect to that annotated document. In one example, the annotated
document (or part of the annotated document, or some of the annotations) may be
propagated (block 220) to places other than device 102. Using the above example of a
user who is searching for a Moroccan restaurant, once the search results have identified
such a restaurant, the user might want to post, to a social network, the fact that he or
she is eating at that restaurant. Or, as another example, if a user takes a photo and
reaction service 118 annotates the photo by identifying the object shown in the photo,
the user might want to post the photo itself, and the identification of what is in the
photo, to an album in an online photo-sharing service. These are some examples of how
information contained in the annotated document may be propagated to a location
outside of device 102. An application (e.g., client application 132, shown in FIG. 1) may
assist with this propagation (e.g., by preparing drafts of social network posts for user
202's approval), but the propagation could be performed by any components.
[0030] Another action that may happen with regard to an annotated document is
that the association between the annotated document and the identity of its creator may
be retained in some manner (block 222). For example, normally when users create
queries, the queries simply disappear after they have been answered. However, when a
query is viewed as a document that can be reacted to, the query can be associated with
the user 202 who created the query, and this association can persist after the query has
been answered. Similarly, if user 202 captures a photo and asks reaction service 118 to
react to that photo, the photo can be associated with user 202 (e.g., by storing the photo
in an online album that belongs to user 202), and this association can persist after the
query is answered.
[0031] FIG. 3 shows, in the form of a flow chart, an example process in which
documents may be created, annotated, and propagated. Before turning to a description
of FIG. 3, it is noted that the flow diagram of FIG. 3 is described, by way of example, with
reference to components shown in FIGS. 1 and 2, although the process of FIG. 3 may be
carried out in any system and is not limited to the scenario shown in FIGS. 1 and 2.
Additionally, the flow diagram in FIG. 3 shows an example in which stages of a process
are carried out in a particular order, as indicated by the lines connecting the blocks, but
the various stages shown in this diagram can be performed in any order, or in any
combination or sub-combination.
[0032] At 302, a user generates a document on a device. The document might
be, for example, text that is input with a device's keyboard (block 304), an image
captured with a device's camera (block 306), or audio captured with the device's
microphone (block 308). After the document is generated on the device, the document
may be sent to a reaction service 118 (at 310). As described above, reaction service 118
may use components such as a text search engine, an image comparator, an audio
comparator, etc., in order to produce an appropriate reaction to the document. Once
reaction service 118 reacts to the document, reaction service 118 generates and returns
annotations to the document (at 312). As described above, the annotations may
comprise search results, an identification of an image or a sound, or any other
information that is generated in response to the document.
[0033] At 314, the document may be combined with its annotation to produce
an annotated document. For example, a set of search results may be attached to the
query that generated those results. Or, if the document that was sent to the reaction
service was an image, then an identification of an object shown in the image may be an
annotation, and this identification may be associated with the image. In a sense, the
annotations are a type of metadata that describes the document. In one example, a user
may be given the option to decide which of the annotations returned by reaction service
118 are to be attached to the document as metadata (at 316). For example, if the user
takes a picture of the Fremont Troll statue in Seattle and reaction service 118 identifies
the object in the picture as the Fremont Troll, the user could be asked if he or she wants
to label the image as "Fremont Troll." If so, then that label effectively becomes a form of
metadata that is attached to the image.
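The user-selection step at 316 amounts to filtering the returned annotations down to the ones the user approves. A minimal Python sketch, in which the candidate labels other than "Fremont Troll" are invented for illustration:

```python
def select_metadata(annotations, approved):
    """Keep only the annotations the user approved as metadata (at 316)."""
    return [a for a in annotations if a in approved]

labels = select_metadata(["Fremont Troll", "troll sculpture", "Aurora Bridge"],
                         approved={"Fremont Troll"})
# labels == ["Fremont Troll"]
```

In a client application, the approved set would come from the user's on-screen choices rather than a hard-coded value.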
[0034] At 318, the annotated document may be stored in a way that associates
the document with the user who created the document. For example, if a user takes a
picture, the picture may be stored in one of the user's online photo albums in a
photo-sharing service, or may be posted to the user's profile on a social network. At 320, the
annotated document (or some part of the annotated document) may be propagated to
other users, who may be at a location remote from the device at which the annotated
document was created. For example, the document and/or its annotations (or some of its
annotations, or some information derived from the document or its annotations) may be
posted on a social network (block 322), sent to other users via e-mail (block 324), or
posted on a content-sharing site such as a photo-sharing site or blogging site (block 326).
It is noted that one aspect of content that is viewed as social media is that the content
tends to be associated with a user (rather than anonymous like a typical search query),
and tends to be communicated to other users (rather than kept solely in the user's
private storage). In this sense, the process described in FIG. 3 may facilitate the creation
of social media through connected, handheld devices.
[0035] FIGS. 4-6 show some example scenarios in which social media may be
created on a device.
[0036] FIG. 4 shows an example scenario in which a user performs a search to
find a restaurant. User 202 is in a city at the location indicated by the star. User 202 is
near the intersection of First and Main Streets. Businesses and other establishments in
user 202's vicinity include post office 402, courthouse 404, pizza restaurant 406,
Moroccan restaurant 408, hotel 410, grocery store 412, and library 414. User 202 carries
device 102, which may be a smart phone, handheld computer, music player, etc. User
202 wants to find a nearby Moroccan restaurant, so user 202 uses keyboard 106 to enter
the query "moroccan food" into his device. For example, user 202 may use the client
application 132 (shown in FIG. 1) to enter a query to be transmitted to reaction service
118. Or user 202 may use a browser to visit the web site of reaction service 118, in which
case user 202 enters the query into that web site.
[0037] After user 202 enters the query, various information may be
transmitted to reaction service 118. This information may include the query itself (block
416), the latitude and longitude at which the user was located when the query was made
(block 418), and the time at which the query was made (block 420). Device 102 may be
equipped with some ability to identify its own location (e.g., components that triangulate
device 102's location based on its position relative to cellular towers, or a Global
Positioning System (GPS) that determines device 102's location based on signals from
satellites). These components may provide the information contained in block 418.
Moreover, device 102 may have a clock, and the time information in block 420 may be
derived from this clock.
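The request of blocks 416-420 can be illustrated as a small payload bundling the query, the device's location fix, and the clock time. The field names below are assumptions for exposition, not an interface defined by the specification.

```python
# Hypothetical request payload for the reaction service (blocks 416-420).
import json
from datetime import datetime, timezone

def build_reaction_request(query: str, latitude: float, longitude: float,
                           when: datetime) -> str:
    """Bundle the query (416), location (418), and time (420) for transmission."""
    return json.dumps({
        "query": query,                                   # block 416: the query itself
        "location": {"lat": latitude, "lon": longitude},  # block 418: GPS / tower fix
        "time": when.isoformat(),                         # block 420: device clock
    })

payload = build_reaction_request("moroccan food", 47.6062, -122.3321,
                                 datetime(2010, 2, 19, 18, 37, tzinfo=timezone.utc))
```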
[0038] Reaction service 118 receives the various pieces of information in
blocks 416-420, and reacts to that information. For example, based on the query in block
416, reaction service 118 knows that user 202 is looking for Moroccan food. Based on the
location information in block 418, reaction service 118 knows that user 202 is in Seattle.
And, based on the time information in block 420, reaction service 118 knows that user
202 is probably looking for dinner. Based on these pieces of information, reaction service
118 returns some information to device 102. This information is shown in block 422, which
contains the name and address of Marrakesh Restaurant - i.e., the Moroccan restaurant
408 that is near user 202. The information shown in block 422 constitutes a type of
annotation to the information that service 118 received from device 102.
[0039] Based on the annotation provided, an annotated document 424 may be
created. Annotated document 424 contains the original information 426 that was transmitted
to reaction service 118 ("Moroccan food, Seattle, 6:37 p.m."), and also contains reaction
service 118's response 428 ("Marrakesh restaurant"). Additionally, the annotated
document may contain a draft 430 of a social-networking-style post ("Tim is eating at
Marrakesh in Seattle on Thursday evening."). This post can be posted to a social network
432. For example, client application 132 (shown in FIG. 1) may display draft 430, and may
ask user 202 if he would like to post the text in that draft to a social network. If so, that
application may make the post on behalf of user 202.
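Draft 430 can be illustrated as a simple template filled from the query context and the service's response. The template and helper name are assumptions for exposition.

```python
# Hypothetical composition of a social-networking-style draft like draft 430.
def draft_post(user: str, place: str, city: str, day_part: str) -> str:
    """Compose a draft post from the query context and the service's annotation."""
    return f"{user} is eating at {place} in {city} on {day_part}."

draft = draft_post("Tim", "Marrakesh", "Seattle", "Thursday evening")
```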
[0040] FIG. 5 shows an example in which the document created by user 202 is
an image. In this example, user 202 is carrying device 102, which includes camera 110.
User 202 sees sculpture 502, and takes a photo of it. In this example, sculpture 502 is the
"Oval with Points" sculpture located on the Princeton University campus. Once the photo
has been taken, device 102 transmits the photo to reaction service 118. The transmission
of this photo to reaction service 118 may be facilitated by client application 132 (shown
in FIG. 1). For example, after the photo has been taken, the photo may appear on the
screen of device 102, and a button may appear over the photo that invites user 202 to
transmit the photo to reaction service 118. If the user clicks the button, then the photo
504 may be transmitted to reaction service 118. Other information may also be
transmitted. For example, the location 506 at which the photo was taken may also be
transmitted.
[0041] Reaction service 118 reacts to the information it received by trying to
identify the object in the photo. For example, reaction service 118 may have an indexed
database of photos, and may attempt to compare what is shown in the photo with
photos in its database. Additionally, reaction service 118 may have some model of what
objects are located at particular geographic locations, and thus reaction service 118 may
use location 506 to attempt to identify the object in the photo. Based on the information
provided to reaction service 118, reaction service 118 may determine that the object in
the photo is the "Oval with Points" sculpture. Thus, reaction service 118 provides an
annotation 508 containing this information.
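The location-assisted identification described above can be illustrated by narrowing the candidate set to objects whose stored positions lie near the capture location before any image matching is attempted. The landmark table, coordinates, and radius below are assumptions for exposition.

```python
# Hypothetical model of "what objects are located at particular geographic
# locations", used to shortlist candidates for photo identification.
import math

LANDMARKS = {
    "Oval with Points": (40.3487, -74.6593),  # Princeton campus (approximate)
    "Fremont Troll": (47.6505, -122.3473),    # Seattle (approximate)
}

def nearby_candidates(lat: float, lon: float, radius_deg: float = 0.05) -> list:
    """Return landmark names whose stored position is near the photo's location."""
    return [name for name, (la, lo) in LANDMARKS.items()
            if math.hypot(la - lat, lo - lon) <= radius_deg]
```

A photo taken at location 506 on the Princeton campus would shortlist only "Oval with Points", which the image comparison could then confirm.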
[0042] Once the annotation has been provided, an annotated document 510
may be created. This annotated document may include the original document (i.e., photo
504 of sculpture 502) and annotation 508. The annotated document may also contain
other information pertaining to the photo, such as the date, time, and place at which the
photo was taken (block 512). Additionally, the annotated document may contain a draft
of a social network post (block 514) ("Tim found the 'Oval with Points' sculpture."). The
information contained in annotated document 510 may be used in various ways. For
example, user 202 may subscribe to a photo-sharing service 516, and the photo and
some of its annotations may be posted to an album in that service. Thus, user 202 may
have an album called "Tim's trip to New Jersey". The photo, along with labels identifying
what is in the photo, and where and when the photo was taken (which are all examples
of metadata), may be posted to that album. As another example, the draft network post
(block 514) may be posted to social network 432. The posting of information to an album
and/or a social network may be performed by an application on device 102 (e.g., client
application 132, shown in FIG. 1).
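Annotated document 510 can be illustrated as the photo, the identification, and the block-512 metadata bundled together with a draft post. All field names here are assumptions for exposition.

```python
# Hypothetical assembly of annotated document 510.
def build_annotated_photo(photo_ref: str, identification: str,
                          date: str, time: str, place: str) -> dict:
    """Combine photo 504, annotation 508, and capture metadata (block 512)."""
    return {
        "photo": photo_ref,
        "annotation": identification,
        "metadata": {"date": date, "time": time, "place": place},
        "draft_post": f"Tim found the '{identification}' sculpture.",  # block 514
    }

doc510 = build_annotated_photo("photo504.jpg", "Oval with Points",
                               "2010-02-19", "14:05", "Princeton, NJ")
```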
[0043] FIG. 6 shows an example in which the document created by user 202 is
an audio capture. In this example, user 202 is in coffee house 602. User 202 carries
device 102, which is equipped with microphone 108. Coffee house 602 has a speaker
604, which is playing a particular song. User 202 wants to know what the song is, so user
202 uses microphone 108 on device 102 to capture the sound coming from the speaker.
A document 606 containing this captured audio is created, and is transmitted to reaction
service 118.
[0044] Reaction service 118 reacts to document 606 by comparing the audio in
that document with its own database. Based on this comparison, reaction service 118
determines that the song contained in the audio document is "Rhapsody in Blue." Thus,
reaction service 118 returns annotations to that document. One annotation is the name 608
of the song. Another annotation is a link 610 to the song at an online music store, which
may be used to purchase the song.
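The comparison described above can be illustrated as a lookup of the captured clip against a database that maps audio to a song name and a store link. The string key used here is a stand-in assumption; practical systems derive an acoustic fingerprint from the audio itself.

```python
# Hypothetical audio-annotation lookup returning name 608 and link 610.
AUDIO_DB = {
    # fingerprint -> (song name, purchase link); both values are illustrative
    "fp-rhapsody": ("Rhapsody in Blue", "https://musicstore.example/rhapsody"),
}

def annotate_audio(fingerprint: str):
    """Return (name, purchase link) for a captured clip, or None if unmatched."""
    return AUDIO_DB.get(fingerprint)
```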
[0045] After the annotations are returned, an annotated document 612 may
be created. Annotated document 612 may contain the document 606 that contains the
captured audio, the name 608 of the song contained in the audio document, and the link
610 to a purchasable version of the song. Additionally, annotated document 612 may
contain a draft 614 of a social-network-style post concerning the fact that user 202 heard
the song "Rhapsody in Blue."
[0046] User 202 may then take various actions with respect to the items in
annotated document 612. For example, user 202 may follow link 610 in order to
purchase a commercially-available version of "Rhapsody in Blue" from online music store
616. If user 202 does purchase the song, then the purchased version of the song 618 may
become another annotation to the audio clip that user 202 captured. Additionally, that
song may be placed in user 202's music library 620. Since the time at which user 202
captured the audio clip may be known (e.g., device 102 may be equipped with a clock,
and may have recorded the time at which user 202 captured the audio clip), this fact can
be stored in music library 620 as a type of annotation to the song. For example, the text
"First heard at the coffee house on 2/19/2010" (block 622) could be stored along with
the purchased version of the song 618. As another example, the draft 614 of a social
network post could be posted to social network 432.
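The library annotation of block 622 can be illustrated as a note built from the recorded capture place and time and stored alongside the purchased song 618. The helper name and format are assumptions for exposition.

```python
# Hypothetical construction of the block-622 library annotation.
def first_heard_note(place: str, date: str) -> str:
    """Build a "first heard" annotation from the recorded capture context."""
    return f"First heard at {place} on {date}"

note = first_heard_note("the coffee house", "2/19/2010")
```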
[0047] FIG. 7 shows an example environment in which aspects of the subject
matter described herein may be deployed.
[0048] Computer 700 includes one or more processors 702 and one or more
data remembrance components 704. Processor(s) 702 are typically microprocessors,
such as those found in a personal desktop or laptop computer, a server, a handheld
computer, or another kind of computing device. Data remembrance component(s) 704
are components that are capable of storing data for either the short or long term.
Examples of data remembrance component(s) 704 include hard disks, removable disks
(including optical and magnetic disks), volatile and non-volatile random-access memory
(RAM), read-only memory (ROM), flash memory, magnetic tape, etc. Data remembrance
component(s) are examples of computer-readable storage media. Computer 700 may
comprise, or be associated with, display 712, which may be a cathode ray tube (CRT)
monitor, a liquid crystal display (LCD) monitor, or any other type of monitor.
[0049] Software may be stored in the data remembrance component(s) 704,
and may execute on the one or more processor(s) 702. An example of such software is
social media creation software 706, which may implement some or all of the functionality
described above in connection with FIGS. 1-6, although any type of software could be
used. Software 706 may be implemented, for example, through one or more
components, which may be components in a distributed system, separate files, separate
functions, separate objects, separate lines of code, etc. A computer (e.g., personal
computer, server computer, handheld computer, etc.) in which a program is stored on
hard disk, loaded into RAM, and executed on the computer's processor(s) typifies the
scenario depicted in FIG. 7, although the subject matter described herein is not limited to
this example.
[0050] The subject matter described herein can be implemented as software
that is stored in one or more of the data remembrance component(s) 704 and that
executes on one or more of the processor(s) 702. As another example, the subject matter
can be implemented as instructions that are stored on one or more computer-readable
storage media. Tangible media, such as optical disks or magnetic disks, are examples
of storage media. The instructions may exist on non-transitory media. Such instructions,
when executed by a computer or other machine, may cause the computer or other
machine to perform one or more acts of a method. The instructions to perform the acts
could be stored on one medium, or could be spread out across plural media, so that the
instructions might appear collectively on the one or more computer-readable storage
media, regardless of whether all of the instructions happen to be on the same medium.
[0051] Additionally, any acts described herein (whether or not shown in a
diagram) may be performed by a processor (e.g., one or more of processors 702) as part
of a method. Thus, if the acts A, B, and C are described herein, then a method may be
performed that comprises the acts of A, B, and C. Moreover, if the acts of A, B, and C are
described herein, then a method may be performed that comprises using a processor to
perform the acts of A, B, and C.
[0052] In one example environment, computer 700 may be communicatively
connected to one or more other devices through network 708. Computer 710, which may
be similar in structure to computer 700, is an example of a device that can be connected
to computer 700, although other types of devices may also be so connected.
[0053] Although the subject matter has been described in language specific to
structural features and/or methodological acts, it is to be understood that the subject
matter defined in the appended claims is not necessarily limited to the specific features
or acts described above. Rather, the specific features and acts described above are
disclosed as example forms of implementing the claims.
CLAIMS
1. A method of facilitating communication of information, the method
comprising:
receiving first information through an input mechanism of a device;
sending, to a service that is remote from said device, second information
that comprises said first information;
receiving, from said service, one or more items of third information,
wherein said service creates said items of third information in reaction to said second
information;
creating an annotated document based on said first information and said
items of third information; and
propagating, to one or more people, fourth information that is based on
said annotated document and on an identity of a user of said device.
2. The method of claim 1, further comprising:
receiving, from said user, an indication of which ones of said one or more
items of third information are to be attached to said first information as metadata.
3. The method of claim 1, wherein said first information comprises text that is
received through a user input mechanism of said device, wherein said service performs a
search based on said second information, wherein said third information comprises
results of said search, and wherein the method further comprises:
creating a draft social network posting based on said third information,
wherein said fourth information comprises said draft social network posting; and
receiving, from said user, an indication that said draft social network
posting is to be posted to a social network, wherein said propagating said fourth
information to one or more people comprises posting said draft social network posting
on said social network.
4. The method of claim 1, wherein said first information comprises an image
captured by a camera of said device, wherein said third information comprises an
identification of a person or object that appears in said image, and wherein said fourth
information comprises said image and said identification.
5. The method of claim 1, wherein said first information comprises a sound
recording captured by a microphone of said device, wherein said third information
comprises an identification of a sound contained in said sound recording, and wherein
said fourth information comprises said identification.
6. The method of claim 5, further comprising:
taking action to acquire a commercially-available recording of an object
identified in said sound recording.
7. The method of claim 1, wherein said device comprises a communication device
that communicates with said service through a cellular network.
8. A computer-readable medium having computer-executable instructions to
perform the method of any of claims 1-7.
9. A device for collecting and propagating information, wherein the device
comprises:
a content input mechanism through which said device receives content;
a memory;
a processor; and
a client application that is stored in said memory and that executes on said
processor, wherein said client application, upon instruction from a first user of the
device, sends, to a service that is located remotely from said device and with which the
device communicates through a network, content received through said content input
mechanism, and wherein said client application receives from the service one or more
annotations of said content, combines said content and a first one of said one or more
annotations to create an annotated document, creates information based on said
annotated document, and propagates said information to one or more second users.
10. The device of claim 9, wherein said client application receives, from said first
user, an indication of which of said one or more annotations are to be attached to said
content as metadata.
11. The device of claim 9, wherein said content input mechanism comprises a text
input mechanism, wherein said content comprises a text query, wherein said one or
more annotations comprise search results generated in response to said query, and
wherein said information is based on said results.
12. The device of claim 11, wherein said client application propagates said
information by posting, to a social network, a post that said client application creates
based on said content and on said results.
13. The device of claim 9, wherein said content input mechanism comprises a
camera, wherein said content comprises an image, wherein said one or more
annotations comprise an identification of a person or object that appears in said image,
and wherein said client application propagates said information by posting said image
and said identification to an online photo album associated with said first user.
14. The device of claim 9, wherein said content input mechanism comprises a
microphone, wherein said content comprises a sound recording, wherein said one or
more annotations comprise an identification of a song in said sound recording, and
wherein said client application propagates information by communicating, to said one or
more second users, information concerning said song.
15. The device of claim 9, wherein said client application communicates, to said
service, a time at which said content was received and a geographic location at which
said device was located at said time, wherein said one or more annotations are based on
said content, said time, and said geographic location.

Documents

Application Documents

# Name Date
1 7606-CHENP-2012 POWER OF ATTORNEY 03-09-2012.pdf 2012-09-03
2 7606-CHENP-2012 FORM-5 03-09-2012.pdf 2012-09-03
3 7606-CHENP-2012 FORM-3 03-09-2012.pdf 2012-09-03
4 7606-CHENP-2012 FORM-2 FIRST PAGE 03-09-2012.pdf 2012-09-03
5 7606-CHENP-2012 FORM-1 03-09-2012.pdf 2012-09-03
6 7606-CHENP-2012 DRAWINGS 03-09-2012.pdf 2012-09-03
7 7606-CHENP-2012 DESCRIPTION (COMPLETE) 03-09-2012.pdf 2012-09-03
8 7606-CHENP-2012 CORRESPONDENCE OTHERS 03-09-2012.pdf 2012-09-03
9 7606-CHENP-2012 CLAIMS SIGNATURE LAST PAGE 03-09-2012.pdf 2012-09-03
10 7606-CHENP-2012 CLAIMS 03-09-2012.pdf 2012-09-03
11 7606-CHENP-2012 PCT PUBLICATION 03-09-2012.pdf 2012-09-03
12 7606-CHENP-2012.pdf 2012-09-04
13 7606-CHENP-2012 FORM-3 28-02-2013.pdf 2013-02-28
14 7606-CHENP-2012 CORRESPONDENCE OTHERS 28-02-2013.pdf 2013-02-28
15 Form-18(Online).pdf 2014-03-17
16 7606-CHENP-2012 FORM-6 25-02-2015.pdf 2015-02-25
17 MTL-GPOA - KONPAL.pdf ONLINE 2015-03-03
18 MS to MTL Assignment.pdf ONLINE 2015-03-03
19 FORM-6-1701-1800(KONPAL).87.pdf ONLINE 2015-03-03
20 FORM-6-1701-1800(KONPAL).87.pdf 2015-03-13
21 MS to MTL Assignment.pdf 2015-03-13
22 MTL-GPOA - KONPAL.pdf 2015-03-13
23 7606-CHENP-2012-FER.pdf 2019-06-12
24 7606-CHENP-2012-FORM 4(ii) [11-12-2019(online)].pdf 2019-12-11
25 7606-CHENP-2012-PETITION UNDER RULE 137 [16-12-2019(online)].pdf 2019-12-16
26 7606-CHENP-2012-FORM 3 [16-12-2019(online)].pdf 2019-12-16
27 7606-CHENP-2012-Annexure [16-12-2019(online)].pdf 2019-12-16
28 7606-CHENP-2012-OTHERS [18-12-2019(online)].pdf 2019-12-18
29 7606-CHENP-2012-Information under section 8(2) (MANDATORY) [18-12-2019(online)].pdf 2019-12-18
30 7606-CHENP-2012-FER_SER_REPLY [18-12-2019(online)].pdf 2019-12-18
31 7606-CHENP-2012-DRAWING [18-12-2019(online)].pdf 2019-12-18
32 7606-CHENP-2012-CLAIMS [18-12-2019(online)].pdf 2019-12-18
33 7606-CHENP-2012-ABSTRACT [18-12-2019(online)].pdf 2019-12-18
34 7606-CHENP-2012-US(14)-HearingNotice-(HearingDate-13-09-2022).pdf 2022-08-17
35 7606-CHENP-2012-Correspondence to notify the Controller [18-08-2022(online)].pdf 2022-08-18
36 7606-CHENP-2012-Written submissions and relevant documents [28-09-2022(online)].pdf 2022-09-28
37 7606-CHENP-2012-Response to office action [21-03-2023(online)].pdf 2023-03-21
38 7606-CHENP-2012-PatentCertificate21-03-2023.pdf 2023-03-21
39 7606-CHENP-2012-IntimationOfGrant21-03-2023.pdf 2023-03-21
40 7606-CHENP-2012-FORM-27 [11-09-2025(online)].pdf 2025-09-11

Search Strategy

1 searchstrategy_18-03-2019.pdf
2 2019-03-1812-13-10_18-03-2019.pdf
3 searchstartegyAE_19-06-2020.pdf
4 NPL01AE_19-06-2020.pdf
5 NPL02AE_19-06-2020.pdf
6 NPL03AE_19-06-2020.pdf

ERegister / Renewals

3rd: 18 May 2023 (from 30/03/2013 to 30/03/2014)
4th: 18 May 2023 (from 30/03/2014 to 30/03/2015)
5th: 18 May 2023 (from 30/03/2015 to 30/03/2016)
6th: 18 May 2023 (from 30/03/2016 to 30/03/2017)
7th: 18 May 2023 (from 30/03/2017 to 30/03/2018)
8th: 18 May 2023 (from 30/03/2018 to 30/03/2019)
9th: 18 May 2023 (from 30/03/2019 to 30/03/2020)
10th: 18 May 2023 (from 30/03/2020 to 30/03/2021)
11th: 18 May 2023 (from 30/03/2021 to 30/03/2022)
12th: 18 May 2023 (from 30/03/2022 to 30/03/2023)
13th: 18 May 2023 (from 30/03/2023 to 30/03/2024)
14th: 26 Mar 2024 (from 30/03/2024 to 30/03/2025)