Abstract: The present subject matter discloses system(s) and method(s) for broadcasting of subtitles in a stream and decoding thereof. Descriptors corresponding to subtitles to be broadcasted are generated. The descriptors capture mapping of the subtitles with the user preferences and location of the user. The descriptors are embedded in PSI tables. A multiplexed stream of a video, an audio, the subtitles and PSI tables is broadcasted which is received by a user device. The user device compares device identifier information preconfigured in a memory of the user device with descriptor-specific identifier information present in the descriptors embedded in the PSI tables to identify a subtitle descriptor relevant to the user device. Finally, a subtitle corresponding to the subtitle descriptor identified as relevant is displayed on a screen of the user device.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
[001] The present application does not claim priority from any patent application.
TECHNICAL FIELD
[002] The present subject matter described herein, in general, relates to methods, systems and devices for broadcasting of subtitles in a service/program stream and decoding thereof.
BACKGROUND
[003] Subtitles assist the end viewer in watching movies or videos in a non-native language. The subtitles are generated as a graphical bitmap and then encoded and compressed into the DVB delivery format. Any subtitle is multiplexed with a video and an audio and thereafter transmitted as part of the output transport stream to the viewer's device or a decoder. Once the user device or the decoder receives the subtitle along with the video and the audio, the subtitle is reconstructed in the user device's or the decoder's memory. A user can select whether or not to display subtitles and, if available, which one of several languages. At the appropriate time the decoder will then display the subtitle on-screen. Because the DVB multiplex can carry multiple subtitle streams, multiple languages can be supported. ISO/IEC 13818-1 specifies service information which is referred to as program specific information (PSI). The PSI data provides information that enables automatic configuration of the receiver to demultiplex and decode the various streams of programs within the multiplex.
[004] In the existing broadcasting scenario, the above configuration of the subtitles is based on the language of a particular region/location. For example, the subtitles broadcasted in the northern part of India will by default be in Hindi. Similarly, the subtitles configured for the southern part may be in a native language of each respective southern state. Further, the existing broadcasting systems may support provision of subtitles in multiple languages and prompt the end viewer to select a subtitle in a language preferred by the end viewer. In order to broadcast two different subtitles, the broadcasting systems have to broadcast different streams, each stream including a respective subtitle corresponding to a respective region/location. Furthermore, the content of the subtitle is not based on the user/personal preferences and the current location of the end viewer. For example, the end viewer may be provided with promotional/advertisement schemes in the form of subtitles which may not be contextually relevant to the end viewer/subscriber or the current location of the end viewer/subscriber.
SUMMARY
[005] This summary is provided to introduce aspects related to systems, devices and methods for broadcasting of subtitles in a service/program stream (hereinafter also referred to as “stream”) and decoding thereof. The devices, systems and methods are further described below in the detailed description. This summary is not intended to identify essential features of the subject matter nor is it intended for use in determining or limiting the scope of the subject matter.
[006] In one implementation, a broadcasting system for defining and broadcasting user preference based and location based subtitles in a service/program stream is disclosed. The broadcasting system may comprise a processing unit and a memory unit coupled to the processing unit. The processing unit may execute programmed instructions stored in the memory unit. The processing unit may execute an instruction to generate a plurality of user subtitles and a plurality of location subtitles associated with a program to be broadcasted by the broadcasting system. In one aspect, the plurality of user subtitles may be generated based upon user preferences associated to a plurality of users and the plurality of location subtitles may be generated based upon location of the plurality of users. The processing unit may further execute an instruction to map each user subtitle to the user preferences associated with one or more users of the plurality of users and each location subtitle to the location, including language of the location, associated with one or more users of the plurality of users. Further, the processing unit may execute an instruction to generate a user subtitle descriptor and a location subtitle descriptor corresponding to each user subtitle and each location subtitle respectively. In one aspect, each user subtitle descriptor captures the mapping of a user subtitle with the user preferences associated with at least one user and each location descriptor captures the mapping of a location subtitle with the location and language associated to at least one user. The processing unit may further execute an instruction to embed each user subtitle descriptor and each location subtitle descriptor in program specific information (PSI) table. Furthermore, the processing unit may execute an instruction to
broadcast a multiplexed stream of a video, an audio, the plurality of user subtitles, the plurality of location subtitles, and a plurality of PSI tables.
[007] In another implementation, a user device for identifying and decoding user preference based and location based subtitles in a service/program stream is disclosed. The user device may comprise a processor and a memory coupled to the processor. The processor may execute programmed instructions stored in the memory. The processor may execute an instruction to receive a stream comprising a plurality of PSI tables. In one aspect, the plurality of PSI tables may be associated to a plurality of subtitles contained in the stream. The processor may further execute an instruction to parse each PSI table to identify a plurality of user subtitle descriptors and a plurality of location subtitle descriptors. In one aspect, each user subtitle descriptor and each location subtitle descriptor may be associated to at least one of the plurality of subtitles. Further, the processor may execute an instruction to compare device identifier information preconfigured in a memory of the user device with descriptor-specific identifier information present in each user subtitle descriptor and each location subtitle descriptor. The processor may execute an instruction to identify at least one of a user subtitle descriptor and a location subtitle descriptor having the descriptor-specific identifier information matched with the device identifier information. Further, the processor may execute an instruction to display at least one of a user subtitle and a location subtitle, from the plurality of subtitles, corresponding to at least one of the user subtitle descriptor and a location subtitle descriptor respectively.
[008] In another implementation, a method for broadcasting user preference based and location based subtitles in a service/program stream is disclosed. The method may comprise generating a plurality of user subtitles and a plurality of location subtitles associated with a program to be broadcasted by the broadcasting system. In one aspect, the plurality of user subtitles may be generated based upon user preferences associated to a plurality of users and the plurality of location subtitles may be generated based upon location of the plurality of users. The method may further comprise mapping each user subtitle to the user preferences associated with one or more users of the plurality of users and each location subtitle to the location, including language of the location, associated with one or more users of the plurality of users. Further, the method may comprise generating a user subtitle descriptor and a location subtitle descriptor corresponding to each user subtitle and each location subtitle respectively. In one aspect, each user subtitle descriptor captures the mapping of a user subtitle with the user preferences associated with at least one user and each location
descriptor captures the mapping of a location subtitle with the location and language associated to at least one user. The method may further comprise embedding each user subtitle descriptor and each location subtitle descriptor in program specific information (PSI) table. Furthermore, the method may comprise broadcasting a multiplexed stream of a video, an audio, the plurality of user subtitles, the plurality of location subtitles, and a plurality of PSI tables. In an embodiment, the aforementioned method for defining and broadcasting the user preference based and the location based subtitles in the service/program stream may be performed by a processing unit using programmed instructions stored in a memory unit coupled with the processing unit.
[009] In another implementation, a method for identifying and decoding user preference based and location based subtitles in a service/program stream is disclosed. The method may comprise receiving a stream comprising a plurality of PSI tables, wherein the plurality of PSI tables are associated to a plurality of subtitles contained in the stream. The method may further comprise parsing each PSI table to identify a plurality of user subtitle descriptors and a plurality of location subtitle descriptors. In one aspect, each user subtitle descriptor and each location subtitle descriptor may be associated to at least one of the plurality of subtitles. Further, the method may comprise comparing device identifier information preconfigured in a memory of the user device with descriptor-specific identifier information present in each user subtitle descriptor and each location subtitle descriptor. The method may further comprise identifying at least one of a user subtitle descriptor and a location subtitle descriptor having the descriptor-specific identifier information matched with the device identifier information. Furthermore, the method may comprise displaying, by the processor, at least one of a user subtitle and a location subtitle, from the plurality of subtitles, corresponding to at least one of the user subtitle descriptor and a location subtitle descriptor respectively. In an embodiment, the aforementioned method for identifying and decoding the subtitles in the service/program stream may be performed by a processor using programmed instructions stored in a memory coupled with the processor.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in
which the reference number first appears. The same numbers are used throughout the drawings to refer to like features and components.
[0011] Figure 1 illustrates an architecture of a broadcasting system, a multi service operator (MSO) and user device collectively facilitating broadcasting of subtitles in a service/program stream and decoding thereof, in accordance with an embodiment of the present subject matter.
[0012] Figures 2a and 2b illustrate a flow diagram depicting steps performed by the user device for decoding the subtitles in the service/program stream, in accordance with an embodiment of the present subject matter.
[0013] Figure 3 illustrates a method for broadcasting subtitles in a service/program stream, in accordance with an embodiment of the present subject matter.
[0014] Figure 4 illustrates a method for decoding subtitles in a service/program stream, in accordance with an embodiment of the present subject matter.
DETAILED DESCRIPTION
[0015] According to embodiments of the present disclosure, systems, devices and methods for facilitating broadcasting and/or decoding of location subtitles (location based) and user subtitles (user preference based), hereinafter collectively referred to as subtitles, in service/program streams are disclosed. Along with the subtitles, audio and video, a broadcasting system may be configured to multiplex a user subtitle descriptor and a location subtitle descriptor corresponding to each user subtitle and each location subtitle respectively. Each user subtitle descriptor and each location subtitle descriptor may be embedded in a Program Map Table (PMT), also referred to as a program specific information (PSI) table, as defined in ISO/IEC 13818-1. Thus, the broadcasting system may multiplex the video, the audio, the subtitles and the PSI tables to generate a multiplexed stream which is transported to a remote user device via a communication network.
[0016] According to aspects of the present disclosure, a user device may be configured to decode the information (data or text) in the multiplexed stream based on the user and the location of the user. In an embodiment, the broadcasting system may send the multiplexed stream to the user device via a Multi Service Operator (MSO). In some embodiments, the MSO may filter the location subtitle descriptors and the user subtitle
descriptors by parsing the PSI tables and forward the subtitles intended for the region specific to the MSO. It is to be noted that the user subtitle descriptor is used to send a subtitle to a particular user. The user subtitle descriptor is defined based on user preferences selected using a feedback channel. The feedback channel may be one of the internet, a mobile app, a web app or an MSO form. The user may utilize one of the feedback channels in order to update the user preferences with the broadcasting system. The broadcasting system may use the latest updated user preferences to transmit the user preference based subtitle to the user device. The user preference based subtitle may be displayed on a screen associated with the user device of the particular user. The particular user may be identified based on a location code, an MSO code or a User Identifier. Similarly, the location based subtitle may be displayed on the screen of the user device based on the location of the user.
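By way of illustration only, the region filtering at the MSO described above might be sketched as follows. The dictionary layout of a parsed location subtitle descriptor, the codes in MSO_REGION and the subtitle PIDs are assumptions made for this example and are not prescribed by the present subject matter.

```python
# Hypothetical sketch: an MSO headend keeping only the location subtitle
# streams relevant to its own region before re-transmission.
MSO_REGION = {"country": "IND", "state": "MH", "city": "MUMB"}  # illustrative codes

def is_relevant(loc_descriptor, region):
    """Return True if a location subtitle descriptor targets this MSO's region.
    'xxx', 'xx' and 'xxxx' act as wildcards for country, state and city."""
    checks = (("country_code", "country", "xxx"),
              ("state_code", "state", "xx"),
              ("city_code", "city", "xxxx"))
    for field, key, wildcard in checks:
        value = loc_descriptor[field]
        if value != wildcard and value != region[key]:
            return False
    return True

parsed_descriptors = [
    {"country_code": "IND", "state_code": "MH", "city_code": "MUMB", "subtitle_pid": 0x101},
    {"country_code": "IND", "state_code": "KA", "city_code": "xxxx", "subtitle_pid": 0x102},
]

# Only the first descriptor matches, so only its subtitle stream is forwarded.
forward_pids = [d["subtitle_pid"] for d in parsed_descriptors if is_relevant(d, MSO_REGION)]
print(forward_pids)  # [257]
```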
[0017] Further, in accordance with embodiments of the present disclosure, the devices and methods may consider the user/personal preferences in conjunction with the location of the user in order to transmit a subtitle relevant to both the user preferences and the location of the user. The user/personal preferences and the location data may be stored in a database of the broadcasting system. The user/personal preferences may include preferred language, preferred programs, preferred advertisements and the like. The user/personal preferences may be obtained using the feedback channels as disclosed above. Therefore, in this scenario, the user/personal preferences in conjunction with the location of the user may be taken into consideration while displaying the subtitles on the screen of the user device. It is to be understood that the user/personal preferences may be synchronized with the user devices associated with the respective users.
[0018] Referring to Figure 1, a system 100 containing multiple components collectively facilitating broadcasting of user preference based and location based subtitles in a service/program stream and decoding thereof is illustrated, in accordance with an embodiment of the present subject matter. As shown, the system 100 comprises a broadcasting system 102, a Multi Service Operator (MSO) system 104 and a user device 106. It may be understood that the broadcasting system 102 may be implemented in a variety of computing systems, such as a personal computer, a server, a network server, a tablet, a mobile phone, and the like. Further, it is to be understood that the user device 106 may include, but is not limited to, a portable computer, a personal digital assistant, a handheld device, and a workstation. The user device 106 may be communicatively coupled to the broadcasting system 102 via a network 108.
[0019] In one implementation, the network 108 may be a wireless network, a wired network or a combination thereof. The network 108 can be implemented as one of the different types of networks, such as intranet, local area network (LAN), wide area network (WAN), the internet, and the like. The network 108 may either be a dedicated network or a shared network. The shared network represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), Wireless Application Protocol (WAP), and the like, to communicate with one another. Further, the network 108 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices, and the like.
[0020] Referring now to Figure 1, the broadcasting system 102 may include a processing unit 110 and a memory unit 112. Further, the user device 106 may include a processor 114, a memory 116 and an input/output (I/O) interface 118. The processing unit 110 may further include a multiplexer 120 as shown in figure 1. The processing unit 110 and the processor 114 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, and/or any devices that manipulate signals based on operational instructions. Among other capabilities, the processing unit 110 and the processor 114 may be configured to fetch and execute programmed instructions (or computer-readable instructions) stored in the memory unit 112 and the memory 116 respectively.
[0021] The I/O interface 118 of the user device 106 may include a variety of software and hardware interfaces, for example, a web interface, a graphical user interface, and the like. The I/O interface 118 may allow a user to interact with the user device 106 directly. Further, the I/O interface 118 may enable the user device 106 to communicate with other computing devices, such as web servers and external data servers (not shown). The I/O interface 118 may facilitate multiple communications within a wide variety of networks and protocol types, including wired networks, for example, LAN, cable, etc., and wireless networks, such as WLAN, cellular, or satellite. The I/O interface 118 may include one or more ports for connecting a number of devices to one another or to another server.
[0022] The memory unit 112 and the memory 116 may include any computer-readable medium or computer program product known in the art including, for example, volatile memory, such as static random access memory (SRAM) and dynamic random access memory (DRAM), and/or non-volatile memory, such as read only memory (ROM), erasable
programmable ROM, flash memories, hard disks, optical disks, compact discs (CDs), digital versatile discs or digital video discs (DVDs) and magnetic tapes. The memory unit 112 and the memory 116 may include programmed instructions 122 and programmed instructions 124 respectively as shown in figure 1. The programmed instructions 122 and the programmed instructions 124 may include routines, programs, objects, components, data structures, etc., which perform particular tasks or implement particular abstract data types.
[0023] In order to broadcast the subtitles, the processing unit 110 may initially be configured to generate a plurality of user subtitles and a plurality of location subtitles (collectively referred to as subtitles 126 in figure 1) associated with a program. It is to be noted that each location subtitle may be generated based upon the location of the plurality of users. In an embodiment, each location subtitle may be a promotional text or a scheme launched by an advertiser at a particular location. In one embodiment, the location subtitle may also be generated based upon the language of the users in the particular location. Each user subtitle may be generated based upon personal/user preferences of the users. Thus, the user subtitles are generated for the users based upon the preferences selected by the users. The personal preferences may include, but are not limited to, a user preferred language, user preferred programs, and user preferred advertisements. The user preferences may be captured from the user devices via a reverse feedback channel, including but not limited to, a mobile application, a web application, or an online or offline survey. As shown in figure 1, the user preferences 128 are captured corresponding to a user (not shown in figure 1) of the user device 106 and stored in the memory unit 112 of the broadcasting system 102. The user may use the I/O interface 118 to communicate with the broadcasting system 102 through the network 108 to submit the user preferences 128. In some embodiments, the user subtitles generated based on the user preferences may be initiated by advertisers to provide users with schemes/advertisements based upon the user preferences provided by the users. In an embodiment, both the user subtitles and the location subtitles (i.e. the subtitles 126) may be generated based upon the methodology disclosed in the DVB subtitling standard ETSI EN 300 743. The subtitles 126 may be stored in the memory unit 112.
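As a rough illustration of the reverse feedback channel described above, the following sketch shows the kind of preference record a user might submit and the broadcasting system might store. Every field name and value here is hypothetical; the present subject matter does not fix a payload format.

```python
import json

# Hypothetical preference update submitted over the feedback channel
# (mobile app, web app, survey or MSO form).
preference_update = {
    "user_id": "CARD-0001-2345",                  # access card id assigned by the MSO
    "mso_code": "MSO42",                          # operator identifier
    "preferred_language": "hin",                  # ISO 639-2 language code
    "preferred_programs": ["news", "cricket"],
    "preferred_advertisements": ["electronics"],
}

# The broadcasting system would keep only the latest update per user, keyed by
# the user identifier, and use it when generating user subtitles.
preferences_db = {}
preferences_db[preference_update["user_id"]] = preference_update
print(json.dumps(preferences_db, indent=2))
```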
[0024] Now, referring to figure 1, once the subtitles 126 are generated and stored, the subtitles 126 are ready for streaming, by the processing unit 110, along with an audio 130 and a video 132 present in the memory unit 112. However, before the streaming, each of the user subtitles and the location subtitles is required to be mapped with the user preferences and the user locations respectively. More particularly, the processing unit 110, before streaming, may
map each user subtitle with the user preferences of one or more users of the plurality of users. Similarly, the processing unit 110 may map each location subtitle with the location, including the language of the location, associated with one or more users of the plurality of users. In an embodiment, the aforementioned mapping may be captured by the processing unit 110 by generating user subtitle descriptors 134 and location subtitle descriptors 136 as shown in figure 1. According to aspects of the present disclosure, each user subtitle descriptor captures the mapping of a user subtitle with the user preferences associated with at least one user. Further, each location descriptor captures the mapping of a location subtitle with the location and language associated with at least one user.
[0025] In an embodiment, a generated location subtitle descriptor has the following structure, as illustrated in Table 1:

location_descriptor_tag: The location_descriptor_tag is an 8-bit field identifying a location subtitle descriptor. The value of the location_descriptor_tag may be 0xA0. The value may be changed based on the requirements; however, both the broadcasting system 102 and the user device 106 must use the same value for the location_descriptor_tag.

location_descriptor_length: The location_descriptor_length is an 8-bit field specifying the total number of bytes of the data portion of the location subtitle descriptor following the byte defining the value of the location_descriptor_length.

country_code: The country_code is a 24-bit field containing a three character code identifying the country of the location subtitle. Each character is coded into 8 bits according to ISO/IEC 8859-1 [5] and inserted in order into the 24-bit field. If the country_code is 'xxx', it indicates that the location subtitle belongs to all countries. The country_code may be represented according to ISO 3166-1 alpha-3, part of the ISO 3166 standard published by the ISO, to represent countries, dependent territories and special areas of geographical interest.

state_code: The state_code is a 16-bit field containing a two character code identifying the state of the location subtitle. Each character is coded into 8 bits according to ISO/IEC 8859-1 [5] and inserted in order into the 16-bit field. If the state_code is 'xx', it indicates that the location subtitle belongs to all states in a particular country. The state_code may be represented as defined in ISO 3166-2.

city_code: The city_code is a 32-bit field containing a four character code identifying the city of the location subtitle. Each character is coded into 8 bits according to ISO/IEC 8859-1 [5] and inserted in order into the 32-bit field. If the city_code is 'xxxx', it indicates that the location subtitle belongs to all cities in a particular state and country.

location_language_code: The location_language_code (an ISO_639_language_code) is a 24-bit field containing the ISO 639-2 [3] three character language code indicating the language of the location subtitle. Both ISO 639-2/B and ISO 639-2/T may be used. Each character is coded into 8 bits according to ISO/IEC 8859-1 [5] and inserted in order into the 24-bit field.

location_subtitling_type: The location_subtitling_type is an 8-bit field that provides information on the content of each location subtitle and the intended display.

composition_page_id: The composition_page_id is a 16-bit field identifying the composition page. The DVB_subtitling_segments signaling the composition_page_id may be decoded if the previous data in the location subtitle descriptor matches the user's selection criteria.

ancillary_page_id: The ancillary_page_id identifies the (optional) ancillary page. The DVB_subtitling_segments signaling the ancillary_page_id may be decoded if the previous data in the location subtitle descriptor matches the user's selection criteria. The values in the ancillary_page_id and the composition_page_id fields may be the same if no ancillary page is provided.

Table 1: Location subtitle descriptor structure
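A minimal serialisation sketch for a location subtitle descriptor with the field widths of Table 1 is given below. The byte order simply follows the order in which Table 1 lists the fields, and the sample values are hypothetical; the present subject matter does not prescribe an exact wire layout beyond the field sizes.

```python
import struct

def build_location_subtitle_descriptor(country="IND", state="MH", city="MUMB",
                                        language="hin", subtitling_type=0x10,
                                        composition_page_id=0x0001,
                                        ancillary_page_id=0x0001):
    """Serialise a location subtitle descriptor using the field widths of Table 1."""
    body = (country.encode("iso-8859-1")             # country_code, 24 bits
            + state.encode("iso-8859-1")             # state_code, 16 bits
            + city.encode("iso-8859-1")              # city_code, 32 bits
            + language.encode("iso-8859-1")          # location_language_code, 24 bits
            + struct.pack(">BHH", subtitling_type,   # location_subtitling_type, 8 bits
                          composition_page_id,       # composition_page_id, 16 bits
                          ancillary_page_id))        # ancillary_page_id, 16 bits
    # location_descriptor_tag (0xA0) and location_descriptor_length prefix the body.
    return bytes([0xA0, len(body)]) + body

print(build_location_subtitle_descriptor().hex())  # a011... (17-byte body, length 0x11)
```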
[0026] In an embodiment, a generated user subtitle descriptor has the following structure, as illustrated in Table 2:

user_descriptor_tag: The user_descriptor_tag is an 8-bit field identifying a user subtitle descriptor. The value of the user_descriptor_tag may be 0xA1. The value may be changed based on the requirements; however, both the broadcasting system 102 and the user device 106 must use the same value for the user_descriptor_tag.

user_descriptor_length: The user_descriptor_length is an 8-bit field specifying the total number of bytes of the data portion of the user subtitle descriptor following the byte defining the value of the user_descriptor_length.

user_country_code: The user_country_code is a 24-bit field containing a three character code identifying the country of the user subtitle. Each character is coded into 8 bits according to ISO/IEC 8859-1 [5] and inserted in order into the 24-bit field. If the user_country_code is 'xxx', it indicates that the user subtitle belongs to all countries. The user_country_code may be represented according to ISO 3166-1 alpha-3, part of the ISO 3166 standard published by the ISO, to represent countries, dependent territories and special areas of geographical interest.

user_state_code: The user_state_code is a 16-bit field containing a two character code identifying the state of the user subtitle. Each character is coded into 8 bits according to ISO/IEC 8859-1 [5] and inserted in order into the 16-bit field. If the user_state_code is 'xx', it indicates that the user subtitle belongs to all states in a particular country. The user_state_code may be represented as defined in ISO 3166-2.

user_city_code: The user_city_code is a 32-bit field containing a four character code identifying the city of the user subtitle. Each character is coded into 8 bits according to ISO/IEC 8859-1 [5] and inserted in order into the 32-bit field. If the user_city_code is 'xxxx', it indicates that the user subtitle belongs to all cities in a particular state and country.

user_language_code: The user_language_code (again an ISO_639_language_code) is a 24-bit field containing the ISO 639-2 [3] three character language code indicating the language of the user subtitle. Both ISO 639-2/B and ISO 639-2/T may be used. Each character is coded into 8 bits according to ISO/IEC 8859-1 [5] and inserted in order into the 24-bit field.

operator code: The operator code, or MSO code, is a bit field defined by the broadcasting system 102 to uniquely identify an operator or a multi service operator (MSO).

user_id: The user_id is a user access card id, assigned by the MSO to uniquely identify the respective user. If the user_id is not defined, then the user subtitle belongs to all users of a particular MSO for a particular location.

user_id_length: The user_id_length is an 8-bit field defining the length of the user_id.

user_subtitling_type: The user_subtitling_type is an 8-bit field that provides information on the content of the user subtitle and the intended display.

user_composition_page_id: The user_composition_page_id is a 16-bit field identifying the composition page. The DVB_subtitling_segments signaling the user_composition_page_id may be decoded if the previous data in the user subtitle descriptor matches the user's selection criteria.

user_ancillary_page_id: The user_ancillary_page_id identifies the (optional) ancillary page. The DVB_subtitling_segments signaling the user_ancillary_page_id may be decoded if the previous data in the user subtitle descriptor matches the user's selection criteria. The values in the user_ancillary_page_id and the user_composition_page_id fields may be the same if no ancillary page is provided.

Table 2: User subtitle descriptor structure
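Conversely, a receiver-side parsing sketch for a user subtitle descriptor laid out with the field widths of Table 2 might look as follows. Two details are assumptions made only for this example: the operator code is taken to be 16 bits (Table 2 leaves its width to the broadcaster), and the user_id_length is read before the variable-length user_id so the descriptor can be parsed in one pass.

```python
import struct

def parse_user_subtitle_descriptor(data):
    """Parse a user subtitle descriptor using the field widths of Table 2."""
    tag, length = data[0], data[1]
    assert tag == 0xA1, "not a user subtitle descriptor"
    body = data[2:2 + length]
    country = body[0:3].decode("iso-8859-1")     # user_country_code, 24 bits
    state = body[3:5].decode("iso-8859-1")       # user_state_code, 16 bits
    city = body[5:9].decode("iso-8859-1")        # user_city_code, 32 bits
    language = body[9:12].decode("iso-8859-1")   # user_language_code, 24 bits
    operator_code = struct.unpack(">H", body[12:14])[0]   # operator (MSO) code, assumed 16 bits
    user_id_length = body[14]                              # user_id_length, 8 bits
    user_id = body[15:15 + user_id_length].decode("iso-8859-1")
    offset = 15 + user_id_length
    subtitling_type = body[offset]                          # user_subtitling_type, 8 bits
    composition_page_id, ancillary_page_id = struct.unpack(">HH", body[offset + 1:offset + 5])
    return {"country": country, "state": state, "city": city, "language": language,
            "operator_code": operator_code, "user_id": user_id,
            "subtitling_type": subtitling_type,
            "composition_page_id": composition_page_id,
            "ancillary_page_id": ancillary_page_id}

# Round-trip check with an illustrative descriptor body.
body = (b"IND" + b"MH" + b"MUMB" + b"hin" + struct.pack(">H", 42)
        + bytes([4]) + b"1234" + bytes([0x10]) + struct.pack(">HH", 1, 1))
print(parse_user_subtitle_descriptor(bytes([0xA1, len(body)]) + body))
```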
[0027] It must be understood that the country_code, the state_code, and the city_code in each location subtitle descriptor, and the user_country_code, the user_state_code, the user_city_code, the operator code (MSO code) and the user_id in each user subtitle descriptor, form the descriptor-specific identifier information for the respective location subtitle descriptor or user subtitle descriptor.
[0028] After the generation of the user subtitle descriptors 134 and the location subtitle descriptors 136, the processing unit 110 may embed each of the user subtitle descriptors 134 and each of the location subtitle descriptors 136 in the program specific information (PSI) tables 138 as shown in figure 1. The PSI tables 138 indicate program map tables defined in accordance with the ISO/IEC 13818-1 standard. Therefore, the user subtitle descriptors 134 and the location subtitle descriptors 136 form a part of the program specific information (PSI) tables 138 or the program map tables. Thereafter, the processing unit 110, via the multiplexer 120, multiplexes the audio 130, the video 132, the subtitles 126 and the PSI tables 138 to generate a multiplexed stream. The multiplexed stream of the audio 130, the video 132, the subtitles 126 and the PSI tables 138 may then be broadcasted by the processing unit 110 to the user device 106 via the MSO 104.
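The following sketch illustrates, at a byte level, where such descriptors could sit inside a PMT: each elementary-stream entry of a PMT carries a stream_type, an elementary_PID, an ES_info_length and a descriptor loop, and the 0xA0/0xA1 descriptors would be appended to that loop alongside the standard DVB subtitling_descriptor (tag 0x59). The section header, CRC_32 and the actual multiplexing are omitted, and the PID and descriptor values are placeholders, so this is only an indicative sketch.

```python
import struct

def build_es_info_entry(elementary_pid, descriptors):
    """One elementary-stream entry of a PMT (ISO/IEC 13818-1): stream_type,
    elementary_PID, ES_info_length, then the descriptor loop."""
    es_info = b"".join(descriptors)
    stream_type = 0x06                               # private PES data, used for DVB subtitles
    pid_field = 0xE000 | (elementary_pid & 0x1FFF)   # 3 reserved bits + 13-bit PID
    len_field = 0xF000 | (len(es_info) & 0x0FFF)     # 4 reserved bits + 12-bit ES_info_length
    return struct.pack(">BHH", stream_type, pid_field, len_field) + es_info

# Standard DVB subtitling_descriptor (tag 0x59) for a Hindi subtitle page,
# followed by an illustrative location subtitle descriptor (tag 0xA0) built as
# in the earlier serialisation sketch.
dvb_subtitling = bytes([0x59, 0x08]) + b"hin" + bytes([0x10]) + struct.pack(">HH", 1, 1)
location_desc = (bytes([0xA0, 0x11]) + b"IND" + b"MH" + b"MUMB" + b"hin"
                 + bytes([0x10]) + struct.pack(">HH", 1, 1))

print(build_es_info_entry(0x101, [dvb_subtitling, location_desc]).hex())
```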
[0029] In an embodiment, the processor 114 of the user device 106 may, via the MSO, receive the multiplexed stream of the audio 130, the video 132, the subtitles 126 and the PSI tables 138. It is to be noted that the PSI tables 138, as explained above, are associated with the subtitles 126 in the multiplexed stream. The processor 114 may parse each of the PSI tables 138 in order to identify the plurality of user subtitle descriptors 134 and the plurality of location subtitle descriptors 136 embedded in the PSI tables 138. It must be understood that each of the user subtitle descriptors 134 and each of the location subtitle descriptors 136 are associated to at least one of the subtitles 126 in the multiplexed stream.
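On the receiving side, a rough sketch of scanning a PMT descriptor loop for the two descriptor tags could look like this. It assumes the raw ES_info bytes have already been extracted from the demultiplexed PMT section, which is outside the scope of the sketch, and the sample loop values are illustrative only.

```python
def find_subtitle_descriptors(es_info):
    """Walk a PMT ES_info descriptor loop and collect location (0xA0) and
    user (0xA1) subtitle descriptors as (tag, payload) pairs."""
    found, i = [], 0
    while i + 2 <= len(es_info):
        tag, length = es_info[i], es_info[i + 1]
        if tag in (0xA0, 0xA1):
            found.append((tag, es_info[i + 2:i + 2 + length]))
        i += 2 + length                 # skip to the next descriptor
    return found

# Example loop: a standard subtitling_descriptor followed by an illustrative
# location subtitle descriptor with the Table 1 field widths.
loop = (bytes([0x59, 0x08]) + b"hin" + bytes([0x10]) + b"\x00\x01\x00\x01"
        + bytes([0xA0, 0x11]) + b"IND" + b"MH" + b"MUMB" + b"hin"
        + bytes([0x10]) + b"\x00\x01\x00\x01")
print(find_subtitle_descriptors(loop))   # [(160, b'INDMHMUMBhin\x10\x00\x01\x00\x01')]
```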
[0030] After the identification of the plurality of user subtitle descriptors 134 and the plurality of location subtitle descriptors 136, the processor 114 may compare device identifier information 140 preconfigured in the memory 116 of the user device 106 with the descriptor-specific identifier information present in each of the user subtitle descriptors 134 and each of the location subtitle descriptors 136. In an embodiment, the device identifier information 140 may include, but is not limited to, a user identifier, an operator identifier, a device identifier, an access identifier, and a device location identifier. The device location identifier may include at least one of a device country code, a device state code, and a device city code. Further, the descriptor-specific identifier information for each of the location subtitle descriptors includes
at least one of the country_code, the state_code, and the city_code as described above. Similarly, the descriptor-specific identifier information for each of the user subtitle descriptors includes at least one of the user_country_code, the user_state_code, the user_city_code, the operator code (MSO code) and the user_id as described above.
[0031] Based upon the comparison of the device identifier information 140 with the descriptor-specific identifier information, the processor 114 may identify at least one of a user subtitle descriptor and a location subtitle descriptor having the descriptor-specific identifier information matched with the device identifier information. In one example, the processor 114 may compare the user identifier or the access identifier or the device identifier with the user_id in each of the user subtitle descriptors 134 in order to identify the user subtitle descriptor having the user_id matched with the user identifier or the access identifier or the device identifier. In accordance with the aspects of the disclosure, either of the user identifier or the access identifier or the device identifier may be equivalent to that of the user_id. In another example, the processor 114 may compare the device identifier like the device country code or the device state code or the device city code with the country_code, the state_code and the city_code, respectively, in each of the location subtitle descriptors 136 in order to identify the location subtitle descriptor having the country_code or the state_code or the city_code matched with the device country code, the device state code and the device city code, respectively.
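A compact sketch of the comparison described in this paragraph is given below; the dictionary keys mirror the device identifier information and descriptor fields named above but are otherwise hypothetical, as are the sample values.

```python
def user_descriptor_matches(descriptor, device):
    """True when the descriptor's user_id equals the user, access or device
    identifier of the user device (the three are treated as equivalent here)."""
    user_id = descriptor.get("user_id")
    return user_id is not None and user_id in (device.get("user_identifier"),
                                               device.get("access_identifier"),
                                               device.get("device_identifier"))

def location_descriptor_matches(descriptor, device):
    """True when the descriptor's country, state and city codes equal the
    device location identifier preconfigured on the user device."""
    return (descriptor.get("country_code") == device.get("device_country_code")
            and descriptor.get("state_code") == device.get("device_state_code")
            and descriptor.get("city_code") == device.get("device_city_code"))

device_info = {"user_identifier": "CARD-0001", "device_country_code": "IND",
               "device_state_code": "MH", "device_city_code": "MUMB"}
print(user_descriptor_matches({"user_id": "CARD-0001"}, device_info))        # True
print(location_descriptor_matches({"country_code": "IND", "state_code": "MH",
                                    "city_code": "MUMB"}, device_info))      # True
```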
[0032] After the identification of the at least one of the user subtitle descriptor and the location subtitle descriptor, the processor 114 may identify a subtitle set 142, from the subtitles 126 in the multiplexed stream, associated to the at least one of the user subtitle descriptor and the location subtitle descriptor. More particularly, the subtitle set 142 comprises at least one of a user subtitle and a location subtitle corresponding to at least one of the user subtitle descriptor and the location subtitle descriptor. The subtitle set 142 may be stored in the memory 116 of the user device 106. Further, the processor 114 may display each of the subtitles in the subtitle set 142 to the user via the I/O interface 118. Figures 2a and 2b illustrate a flow diagram depicting steps implemented by the processor 114 for decoding and thereby displaying the subtitle set 142 to the user.
[0033] Referring to figure 2a, at step 202, the processor 114 parses the PMT table in order to identify the user subtitle descriptor and the location subtitle descriptor. At step 204, the processor 114 checks whether the user subtitle descriptor is present in the PMT table. If the user subtitle descriptor is detected to be present, at step 206, the processor 114 checks the
user_id in the user subtitle descriptor as well as the user identifier (User ID) of the user device 106. At step 208, the processor 114 verifies whether the user_id in the user subtitle descriptor matches with the User ID of the user device 106. If the user_id in the user subtitle descriptor matches with the User ID of the user device 106, then at step 236, the processor 114 identifies a subtitle corresponding to the user subtitle descriptor and saves the subtitle in the memory 116 of the user device 106. Thereafter, the processor 114 implements steps (238 to 248) mentioned in figure 2b as described later in the subsequent paragraphs.
[0034] Now, again referring to figure 2a, at step 204, if it is determined that the user subtitle descriptor is not present, or at step 208, if it is determined that the user_id in the user subtitle descriptor does not match with the User ID of the user device 106, then at step 210, the processor 114 checks whether a user subtitle descriptor without a user_id is defined. If the user subtitle descriptor without a user_id is defined, then at step 212, the processor 114 checks for the operator code (MSO Code) in the user subtitle descriptor and the operator identifier (MSOID) in the user device 106. At step 214, the processor 114 verifies whether the operator code (MSO Code) in the user subtitle descriptor matches with the operator identifier (MSOID) in the user device 106. If the MSO Code in the user subtitle descriptor matches with the MSOID of the user device 106, then at step 236, the processor 114 identifies a subtitle corresponding to the user subtitle descriptor and saves the subtitle in the memory 116 of the user device 106. Thereafter, the processor 114 implements steps (238 to 248) mentioned in figure 2b as described later in the subsequent paragraphs.
[0035] Now, again referring to figure 2a, at step 210, if it is determined that the user subtitle descriptor without user_id is not defined or at step 214 if it is determined that the MSO Code in the user subtitle descriptor does not match with the MSOID of the user device, then at step 216, the processor 114 checks whether a location subtitle descriptor with a country, state and city is defined. If the location subtitle descriptor with the country, the state and city is defined, then at step 218, the processor 114 checks for a device location configured in the user device 106. More particularly, at step 218, the processor checks the device country code, the device state code and the device city code. At step 220, the processor 114 verifies whether the country, the state and the city defined in the location subtitle descriptor matches with the device location configured in the user device 106. More particularly, at step 220, the processor 114 verifies whether the country_code, the state_code and the city_code, corresponding to the country, the state and the city in the location subtitle descriptor, matches with the device country code, the device state code and the device city
code configured in the user device 106. If the country, the state and the city defined in the location subtitle descriptor match the device location, then at step 236, the processor 114 identifies a subtitle corresponding to the location subtitle descriptor and saves the subtitle in the memory 116 of the user device 106. Thereafter, the processor 114 implements steps (238 to 248) mentioned in figure 2b as described later in the subsequent paragraphs.
[0036] Now, again referring to figure 2a, at step 216, if it is determined that the location subtitle descriptor with the country, the state and the city is not defined, or at step 220, if it is determined that the country, the state and the city defined in the location subtitle descriptor do not match the device location, then at step 222, the processor 114 checks whether a location subtitle descriptor for all cities in a state is defined. If the location subtitle descriptor for all cities in the state is defined, then at step 224, the processor 114 checks for a device state and a device country configured in the user device 106. At step 226, the processor 114 verifies whether the country_code and the state_code corresponding to the country and the state in the location subtitle descriptor match the device country code and the device state code configured in the user device 106. If the country_code and the state_code in the location subtitle descriptor match the device country code and the device state code configured in the user device 106, then at step 236, the processor 114 identifies a subtitle corresponding to the location subtitle descriptor and saves the subtitle in the memory 116 of the user device 106. Thereafter, the processor 114 implements steps (238 to 248) mentioned in figure 2b as described later in the subsequent paragraphs.
[0037] Now, again referring to figure 2a, at step 222, if it is determined that the location subtitle descriptor for all cities in the state is not defined, or at step 226, if it is determined that the country_code and the state_code in the location subtitle descriptor do not match the device country code and the device state code configured in the user device 106, then at step 228, the processor 114 checks whether a location subtitle descriptor for a particular country is defined. If the location subtitle descriptor for the particular country is defined, then at step 230, the processor 114 checks for a device country configured in the user device 106. At step 232, the processor 114 verifies whether the country_code corresponding to the particular country in the location subtitle descriptor matches with the device country code configured in the user device 106. If the country_code in the location subtitle descriptor matches with the device country code in the user device 106, then at step 236, the processor 114 identifies a subtitle corresponding to the location subtitle descriptor and saves the subtitle in the memory 116 of the user device 106. Thereafter, the processor 114
implements steps (238 to 248) mentioned in figure 2b as described later in the subsequent paragraphs. If, at step 232, it is determined that the country_code corresponding to the particular country in the location subtitle descriptor does not match with the device country code configured in the user device 106, then the processor 114 does not configure any subtitle to be displayed on the user device 106.
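The fall-through order of Figure 2a, as walked through in paragraphs [0033] to [0037], can be condensed into a single selection routine. The sketch below uses plain dictionaries for the parsed descriptors and the device configuration; the key names and the 'xx'/'xxxx' wildcard convention follow the descriptions above, but the function itself is only an illustration of the decision order under those assumptions, not the patented implementation.

```python
def select_subtitle_descriptor(user_descs, loc_descs, device):
    """Selection order of Figure 2a: a user subtitle descriptor addressed to this
    user wins, then a user descriptor scoped to the whole MSO, then location
    descriptors from most specific (city) to least specific (country)."""
    # Steps 204-208: user subtitle descriptor carrying this device's User ID.
    for d in user_descs:
        if d.get("user_id") and d["user_id"] == device["user_id"]:
            return d
    # Steps 210-214: user subtitle descriptor without a user_id, matched on the MSO code.
    for d in user_descs:
        if not d.get("user_id") and d.get("operator_code") == device["mso_id"]:
            return d
    # Steps 216-220: location descriptor with explicit country, state and city.
    for d in loc_descs:
        if (d["country_code"], d["state_code"], d["city_code"]) == (
                device["device_country_code"], device["device_state_code"],
                device["device_city_code"]):
            return d
    # Steps 222-226: location descriptor covering all cities ('xxxx') of a matching state.
    for d in loc_descs:
        if (d["city_code"] == "xxxx" and d["state_code"] == device["device_state_code"]
                and d["country_code"] == device["device_country_code"]):
            return d
    # Steps 228-232: location descriptor for a whole country ('xx' state, 'xxxx' city).
    for d in loc_descs:
        if (d["city_code"] == "xxxx" and d["state_code"] == "xx"
                and d["country_code"] == device["device_country_code"]):
            return d
    return None   # step 232 fails: no subtitle is configured for this device

device = {"user_id": "CARD-0001", "mso_id": "MSO42", "device_country_code": "IND",
          "device_state_code": "MH", "device_city_code": "MUMB"}
loc_descs = [{"country_code": "IND", "state_code": "MH", "city_code": "xxxx", "pid": 0x102}]
print(select_subtitle_descriptor([], loc_descs, device))   # falls through to the state-wide match
```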
[0038] Referring now to figure 2b, after the saving of the subtitle in the memory 116 of the user device 106, at step 238, the processor 114 checks for a subtitle view parameter in the memory 116 of the user device 106. In one example, the subtitle view parameter may be a location of the subtitle on the screen. For example, the user may want to view the subtitle at the top of the screen or at the bottom of the screen. In another example, the user may want to view a full subtitle or a partial subtitle (e.g. 10-20 characters only). In still another example, the subtitle view parameter may indicate whether or not the user wants to view the subtitle. If the subtitle view parameter is not configured, then the processor 114, at step 240, may prompt the user to configure enabling of the subtitle viewing. At step 242, the subtitle may be configured by the processor 114 for the user device 106 based upon the subtitle view parameter checked at step 238 and the enabling of the subtitle viewing by the user at step 240. If, at step 240, the user does not enable the subtitle, then at step 244, the processor 114 does not configure the subtitle for the user device 106.
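A minimal sketch of applying such subtitle view parameters is shown below, assuming the parameters are held as a simple mapping; the parameter names and sample text are illustrative only.

```python
def apply_view_parameters(subtitle_text, view_params):
    """Apply the subtitle view parameters described above: whether subtitles are
    enabled, where on screen they appear, and whether only a partial subtitle
    (e.g. the first 10-20 characters) is shown."""
    if view_params is None:
        # Step 240: the user would be prompted to enable subtitle viewing.
        return None
    if not view_params.get("enabled", False):
        return None                                  # step 244: subtitle not configured
    max_chars = view_params.get("max_characters")    # e.g. 20 for a partial subtitle
    text = subtitle_text if max_chars is None else subtitle_text[:max_chars]
    return {"text": text, "position": view_params.get("position", "bottom")}

print(apply_view_parameters("Offer: 20% off on electronics in Mumbai today",
                            {"enabled": True, "position": "top", "max_characters": 20}))
```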
[0039] Referring now to Figure 3, a method 300 for broadcasting subtitles in a stream is shown, in accordance with an embodiment of the present subject matter. The method 300 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 300 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0040] The order in which the method 300 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 300 or alternate methods. Additionally, individual blocks may be deleted from the method 300 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware,
software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 300 may be considered to be implemented in the above described broadcasting system 102.
[0041] At block 302, a plurality of user subtitles and a plurality of location subtitles may be generated. Further, the plurality of user subtitles may be generated based upon user preferences associated with a plurality of users and the plurality of location subtitles may be generated based upon the location of the plurality of users. The plurality of user subtitles and the plurality of location subtitles may be associated with a program to be broadcasted by the broadcasting system 102. The plurality of user subtitles and the plurality of location subtitles may be generated by the processing unit 110 of the broadcasting system 102.
[0042] At block 304, each user subtitle and each location subtitle may be mapped to the user preferences, associated with one or more users of the plurality of users, and the location, including language of the location, associated with one or more users, respectively. Each user subtitle and each location subtitle may be mapped by the processing unit 110 of the broadcasting system 102.
[0043] At block 306, a user subtitle descriptor and a location subtitle descriptor may be generated corresponding to each user subtitle and each location subtitle respectively. Each user subtitle descriptor may capture the mapping of a user subtitle with the user preferences associated with at least one user. Further, each location descriptor may capture the mapping of a location subtitle with the location and language associated to at least one user. The user subtitle descriptor and the location subtitle descriptor may be generated by the processing unit 110 of the broadcasting system 102.
[0044] At block 308, each user subtitle descriptor and each location subtitle descriptor may be embedded in a program specific information (PSI) table. Each user subtitle descriptor and each location subtitle descriptor may be embedded by the processing unit 110 of the broadcasting system 102.
[0045] At block 310, a multiplexed stream of a video, an audio, the plurality of user subtitles, the plurality of location subtitles, and a plurality of PSI tables may be broadcasted. The multiplexed stream of the video, the audio, the plurality of user subtitles, the plurality of location subtitles, and the plurality of PSI tables may be generated by the processing unit 110
via the multiplexer 120 present within the processing unit 110. The multiplexed stream may be broadcasted by the processing unit 110 of the broadcasting system 102.
[0046] Referring now to Figure 4, a method 400 for decoding subtitles in a stream is shown, in accordance with an embodiment of the present subject matter. The method 400 may be described in the general context of computer executable instructions. Generally, computer executable instructions can include routines, programs, objects, components, data structures, procedures, modules, functions, etc., that perform particular functions or implement particular abstract data types. The method 400 may also be practiced in a distributed computing environment where functions are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, computer executable instructions may be located in both local and remote computer storage media, including memory storage devices.
[0047] The order in which the method 400 is described is not intended to be construed as a limitation, and any number of the described method blocks can be combined in any order to implement the method 400 or alternate methods. Additionally, individual blocks may be deleted from the method 400 without departing from the spirit and scope of the subject matter described herein. Furthermore, the method can be implemented in any suitable hardware, software, firmware, or combination thereof. However, for ease of explanation, in the embodiments described below, the method 400 may be considered to be implemented in the above described user device 106.
[0048] At block 402, a stream comprising a plurality of PSI tables may be received. The plurality of PSI tables may be associated to a plurality of subtitles contained in the stream. The stream comprising the plurality of PSI tables may be received by the processor 114 of the user device 106.
[0049] At block 404, each PSI table may be parsed to identify a plurality of user subtitle descriptors and a plurality of location subtitle descriptors. Each user subtitle descriptor and each location subtitle descriptor may be associated to at least one of the plurality of subtitles. Each PSI table may be parsed by the processor 114 of the user device 106.
[0050] At block 406, device identifier information preconfigured in a memory of the user device may be compared with descriptor-specific identifier information present in each user subtitle descriptor and each location subtitle descriptor. The device identifier information
may be compared with the descriptor-specific identifier information by the processor 114 of the user device 106.
[0051] At block 408, at least one of a user subtitle descriptor and a location subtitle descriptor having the descriptor-specific identifier information matched with the device identifier information may be identified based upon the comparison performed at block 406. At least one of the user subtitle descriptor and the location subtitle descriptor may be identified by the processor 114 of the user device 106.
[0052] At block 410, at least one of a user subtitle and a location subtitle, from the plurality of subtitles, corresponding to at least one of the user subtitle descriptor and the location subtitle descriptor, respectively, may be displayed on a screen of the user device 106. At least one of the user subtitle and the location subtitle may be displayed by the processor 114 via the I/O interface 118 of the user device 106.
[0053] Although implementations for systems, device and methods for broadcasting subtitles in a service/program stream and decoding thereof have been described in language specific to structural features and/or methods, it is to be understood that the appended claims are not necessarily limited to the specific features or methods described. Rather, the specific features and methods are disclosed as examples of implementations for broadcasting subtitles in a service/program stream and decoding thereof.
WE CLAIM:
1. A broadcasting system 102 for broadcasting subtitles in a stream, comprising:
a processing unit 110;
a memory unit 112 coupled with the processing unit 110, wherein the processing unit 110 executes programmed instructions 122 stored in the memory unit 112 in order to:
generate a plurality of user subtitles and a plurality of location subtitles associated with a program to be broadcasted, wherein the plurality of user subtitles are generated based upon user preferences associated to a plurality of users, and wherein the plurality of location subtitles are generated based upon location of the plurality of users;
map each user subtitle to the user preferences associated with one or more users of the plurality of users and each location subtitle to the location, including language of the location, associated with one or more users of the plurality of users;
generate a user subtitle descriptor and a location subtitle descriptor corresponding to each user subtitle and each location subtitle respectively, wherein each user subtitle descriptor captures the mapping of a user subtitle with the user preferences associated with at least one user, and wherein each location descriptor captures the mapping of a location subtitle with the location and language associated to at least one user;
embed each user subtitle descriptor and each location subtitle descriptor in a program specific information (PSI) table; and
broadcast a multiplexed stream of a video, an audio, the plurality of user subtitles, the plurality of location subtitles, and a plurality of PSI tables.
2. The broadcasting system 102 of claim 1, wherein the user preferences comprise at least a user preferred language, user preferred programs, and user preferred advertisements.
3. The broadcasting system 102 of claim 1, wherein each location subtitle descriptor comprises a location_descriptor_tag, a location_descriptor_length, a country_code, a state_code, a city_code, a location_language_code, a location_subtitling_type, a composition_page_id and an ancillary_page_id.
4. The broadcasting system 102 of claim 3, wherein at least one of the country_code, the state_code and the city_code indicates the location of the at least one user, and wherein the location_language_code indicates the language associated with the location.
5. The broadcasting system 102 of claim 2, wherein each user subtitle descriptor comprises a user_descriptor_tag, a user_descriptor_length, a user_country_code, a user_state_code, a user_city_code, an operator code, a user_id, a user_id_length, a user_language_code, a user_subtitling_type, a user_composition_page_id and a user_ancillary_page_id.
6. The broadcasting system 102 of claim 5, wherein the user subtitle is mapped with the user preferences based upon at least one of the user_country_code, the user_state_code, the user_city_code, the operator code, the user_id, and the user_language_code.
7. The broadcasting system of claim 3, wherein the location subtitle is mapped with at least one of the country_code, the state_code, the city_code and the location_language_code.
8. A user device 106 for decoding subtitles in a stream, comprising:
a processor 114;
a memory 116 coupled with the processor 114, wherein the processor 114 executes programmed instructions 124 stored in the memory 116 in order to:
receive a stream comprising a plurality of PSI tables, wherein the plurality of PSI tables are associated to a plurality of subtitles contained in the stream;
parse each PSI table to identify a plurality of user subtitle descriptors and a plurality of location subtitle descriptors, wherein each user subtitle descriptor and each location subtitle descriptor is associated to at least one of the plurality of subtitles;
compare device identifier information preconfigured in a memory of the user device with descriptor-specific identifier information present in each user subtitle descriptor and each location subtitle descriptor;
identify at least one of a user subtitle descriptor and a location subtitle descriptor having the descriptor-specific identifier information matched with the device identifier information; and
display at least one of a user subtitle and a location subtitle, from the plurality of subtitles, corresponding to at least one of the user subtitle descriptor and a location subtitle descriptor respectively.
9. The user device 106 of claim 8, wherein the device identifier information comprises at least one of a user identifier, an operator identifier, a device identifier, an access identifier, and a device location identifier.
10. The user device 106 of claim 9, wherein the device location identifier comprises at least one of a device country code, a device state code, and a device city code.
11. The user device 106 of claim 8, wherein the descriptor-specific identifier information comprises at least one of a country_code, a state_code, a city_code, a user_country_code, a user_state_code, a user_city_code, an operator code and a user_id.
12. The user device 106 of claim 8, wherein the memory 116 stores at least one of the user subtitle and the location subtitle when the descriptor-specific identifier information is matched with the device identifier information.
13. A method for broadcasting subtitles in a stream, the method comprising:
generating, by a processing unit, a plurality of user subtitles and a plurality of location subtitles associated with a program to be broadcasted, wherein the plurality of user subtitles are generated based upon user preferences associated to a plurality of users, and wherein the plurality of location subtitles are generated based upon location of the plurality of users;
mapping, by the processing unit,
each user subtitle to the user preferences associated with one or more users of the plurality of users, and
each location subtitle to the location, including language of the location, associated with one or more users of the plurality of users;
generating, by the processing unit, a user subtitle descriptor and a location subtitle descriptor corresponding to each user subtitle and each location subtitle respectively, wherein each user subtitle descriptor captures the mapping of a user subtitle with the user preferences
associated with at least one user, and wherein each location descriptor captures the mapping of a location subtitle with the location and language associated to at least one user;
embedding, by the processing unit, each user subtitle descriptor and each location subtitle descriptor in a program specific information (PSI) table; and
broadcasting, by the processing unit, a multiplexed stream of a video, an audio, the plurality of user subtitles, the plurality of location subtitles, and a plurality of PSI tables.
14. A method for decoding subtitles in a stream, the method comprising:
receiving, by a processor, a stream comprising a plurality of PSI tables, wherein the plurality of PSI tables are associated to a plurality of subtitles contained in the stream;
parsing, by the processor, each PSI table to identify a plurality of user subtitle descriptors and a plurality of location subtitle descriptors, wherein each user subtitle descriptor and each location subtitle descriptor is associated to at least one of the plurality of subtitles;
comparing, by the processor, device identifier information preconfigured in a memory of the user device with descriptor-specific identifier information present in each user subtitle descriptor and each location subtitle descriptor;
identifying, by the processor, at least one of a user subtitle descriptor and a location subtitle descriptor having the descriptor-specific identifier information matched with the device identifier information; and
displaying, by the processor, at least one of a user subtitle and a location subtitle, from the plurality of subtitles, corresponding to at least one of the user subtitle descriptor and a location subtitle descriptor respectively.