Abstract: Emotional wellbeing, an important aspect of mental well-being, is the emotional aspect of everyday experience. The impact of music on the human mind has been studied for several centuries and has applications in therapy, meditation aids and so on. A method and system to generate music for improving the emotional wellbeing of a person are provided. The system uses transients in music for music-based intervention. The musical pitch curves are called pitch-transients, or henceforth, “transients”. The present disclosure operates on the tune rather than an entire audio. The technique offers a wide range of generated tunes. The system uses a two-dimensional technique based on ‘gradation in time’ through the use of tempo changes and ‘gradation in pitch’ through the use of transients.
DESC:FORM 2
THE PATENTS ACT, 1970
(39 of 1970)
&
THE PATENT RULES, 2003
COMPLETE SPECIFICATION
(See Section 10 and Rule 13)
Title of invention:
METHOD AND SYSTEM FOR GENERATING MUSIC FOR IMPROVING EMOTIONAL WELLBEING OF A PERSON
Applicant:
Tata Consultancy Services Limited
A company Incorporated in India under the Companies Act, 1956
Having address:
Nirmal Building, 9th Floor,
Nariman Point, Mumbai 400021,
Maharashtra, India
The following specification particularly describes the invention and the manner in which it is to be performed.
CROSS-REFERENCE TO RELATED APPLICATIONS AND PRIORITY
The present application claims priority from Indian provisional patent application no. 202121049558, filed on October 29, 2021. The entire contents of the aforementioned application are incorporated herein by reference.
TECHNICAL FIELD
The disclosure herein generally relates to the field of enhancing mental wellbeing of a person, and, more particularly, to a method and system for generating music for improving emotional wellbeing of a person.
BACKGROUND
According to the World Health Organization, “health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity”. Emotional wellbeing, an important aspect of mental well-being, is the emotional aspect of everyday experience. Several organizations and enterprises have increased attention on emotional well-being at the workplace. The impact of music on the human mind has been studied for several centuries and has applications in therapy, meditation aids and so on. An important application in the enterprise-context is the use of music as an ‘intervention’ for changing emotional states (e.g. destressing). Such interventions are usually well-known musical creations and often subject-specific.
The mental state is often assessed using Russel’s two-dimensional model of affect, where affect is constituted by two independent axes called valence and arousal. The affect “happy” is an example of positive valence and “sad” is an example of negative valence. Similarly, “angry” is an example of high arousal (negative valence) and “contented” is an example of low arousal (positive valence). If valence remains negative for, say, two days, an intervention can be triggered. One possible intervention is to play appropriate music to nudge the user towards positive valence. Tunes perceived as happy may help a user reach an emotional state with positive valence. However, a user with negative emotional valence may not be ready to listen to such a tune immediately.
The music used in interventions usually consists of well-known creations and is often subject-specific. The choice of the components of music has largely been ignored.
SUMMARY
Embodiments of the present disclosure present technological improvements as solutions to one or more of the above-mentioned technical problems recognized by the inventors in conventional systems. For example, in one embodiment, a system for generating music for improving emotional wellbeing of a person is provided, the system comprising a user interface, one or more hardware processors and a memory. The user interface receives a music corresponding to the person from a tune database, wherein the music is related to a target reference tune, wherein the target reference tune is chosen subjectively based on a present state of the person. The memory is in communication with the one or more hardware processors, wherein the one or more hardware processors are configured to execute programmed instructions stored in the memory to: convert the music into an initial reference tune, wherein the initial reference tune is perceived as subjectively related to the target reference tune; generate a plurality of time graded initial reference tunes, wherein the gradation in time is provided by varying a tempo in the initial reference tune; generate a plurality of pitch graded tunes from the plurality of time graded initial reference tunes, wherein the gradation in pitch is provided by varying a plurality of transients in the music; and provide one of a pre-selected sequence of tunes or a sequence of tunes selected according to a subjectively preferred order of tunes selected from the plurality of time graded initial tunes and the plurality of pitch graded tunes, followed by the target reference tune.
In another aspect, a method for generating music for improving emotional wellbeing of a person is provided. Initially, a music corresponding to the person is received from a tune database via a user interface, wherein the music is related to a target reference tune, wherein the target reference tune is chosen subjectively based on a present state of the person. Further, the music is converted into an initial reference tune, wherein the initial reference tune is perceived as subjectively related to the target reference tune. In the next step, a plurality of time graded initial reference tunes is generated, wherein the gradation in time is provided by varying a tempo in the initial reference tune. Further, a plurality of pitch graded tunes is generated from the plurality of time graded initial reference tunes, wherein the gradation in pitch is provided by varying a plurality of transients in the music. Finally, one of a pre-selected sequence of tunes, or a sequence of tunes selected according to a subjectively preferred order of tunes from the plurality of time graded initial tunes and the plurality of pitch graded tunes, is provided, followed by the target reference tune.
In yet another aspect, there are provided one or more non-transitory machine-readable information storage mediums comprising one or more instructions which, when executed by one or more hardware processors, cause generating music for improving emotional wellbeing of a person. Initially, a music corresponding to the person is received from a tune database via a user interface, wherein the music is related to a target reference tune, wherein the target reference tune is chosen subjectively based on a present state of the person. Further, the music is converted into an initial reference tune, wherein the initial reference tune is perceived as subjectively related to the target reference tune. In the next step, a plurality of time graded initial reference tunes is generated, wherein the gradation in time is provided by varying a tempo in the initial reference tune. Further, a plurality of pitch graded tunes is generated from the plurality of time graded initial reference tunes, wherein the gradation in pitch is provided by varying a plurality of transients in the music. Finally, one of a pre-selected sequence of tunes, or a sequence of tunes selected according to a subjectively preferred order of tunes from the plurality of time graded initial tunes and the plurality of pitch graded tunes, is provided, followed by the target reference tune.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate exemplary embodiments and, together with the description, serve to explain the disclosed principles:
FIG. 1 illustrates a block diagram of a system for generating music for improving emotional wellbeing of a person according to some embodiments of the present disclosure.
FIG. 2 is an intervention system architecture for improving emotional wellbeing of the person according to some embodiments of the present disclosure.
FIG. 3 is a graphical representation of note-vs-time curve for a time period in accordance with some embodiments of the present disclosure.
FIG. 4 is a graphical representation of combined time and pitch gradation according to some embodiments of the present disclosure.
FIG. 5 is a flow diagram illustrating a method for generating music for improving emotional wellbeing of the person in accordance with some embodiments of the present disclosure.
FIG. 6 shows a graphical representation of perceived change in emotional valence of transients relative to the reference in accordance with some embodiments of the present disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
Exemplary embodiments are described with reference to the accompanying drawings. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. Wherever convenient, the same reference numbers are used throughout the drawings to refer to the same or like parts. While examples and features of disclosed principles are described herein, modifications, adaptations, and other implementations are possible without departing from the scope of the disclosed embodiments.
Emotional wellbeing, an important aspect of mental well-being, is the emotional aspect of everyday experience. The impact of music on the human mind has been studied for several centuries and has applications in therapy, meditation aids and so on. An important application in the enterprise-context is the use of music as an ‘intervention’ for changing emotional states (e.g. destressing). Such interventions are usually well-known musical creations and often subject-specific.
The prior techniques use one possible intervention by playing appropriate music to nudge the user towards positive valence. However, a user with negative emotional valence may not be ready to listen to such music immediately. Another method uses morphing, but it provides a very generic method to morph between sounds. There is no guarantee that the generated tunes will please listeners.
The present disclosure provides a method and system to generate music for improving the emotional wellbeing of a person. The system uses transients in music for music-based intervention. The musical pitch curves are called pitch-transients, or henceforth, “transients”. The present disclosure operates on the tune rather than an entire audio and, since it is based on musical systems, it sounds musical. Because it sounds musical, the technique offers a wide range of generated tunes. The system uses a two-dimensional technique based on ‘gradation in time’ through the use of tempo changes and ‘gradation in pitch’ through the use of transients.
Consider a user whose mental state is Sad. A possible intervention is to play a tune that can change the mental state of the user to Happy. It is expected that a user who is currently Sad may switch off a tune that is clearly perceived as Happy. It is suggested to start with tunes that are perceived as neutral, change to tunes that are perceived as Happier, and finish with a target-reference tune that is clearly perceived as Happy. However, this involves changing tunes, which may also deter the user from continuing to listen to the intervention. Further, a real system will need as many subjective evaluation results as there are tunes.
Referring now to the drawings, and more particularly to FIG. 1 through FIG. 6, where similar reference characters denote corresponding features consistently throughout the figures, there are shown preferred embodiments and these embodiments are described in the context of the following exemplary system and/or method.
FIG. 1 illustrates a block diagram of a system 100 for generating music for improving emotional wellbeing of a person. Although the present disclosure is explained considering that the system 100 is implemented on a server, it may also be present elsewhere, such as on a local machine. It may be understood that the system 100 comprises one or more computing devices 102, such as a laptop computer, a desktop computer, a notebook, a workstation, a cloud-based computing environment and the like. It will be understood that the system 100 may be accessed through one or more input/output interfaces 104-1, 104-2... 104-N, collectively referred to as I/O interface 104. Examples of the I/O interface 104 may include, but are not limited to, a user interface, a portable computer, a personal digital assistant, a handheld device, a smartphone, a tablet computer, a workstation and the like. The I/O interfaces 104 are communicatively coupled to the system 100 through a network 106.
In an embodiment, the network 106 may be a wireless or a wired network, or a combination thereof. In an example, the network 106 can be implemented as a computer network, as one of the different types of networks, such as virtual private network (VPN), intranet, local area network (LAN), wide area network (WAN), the internet, and such. The network 106 may either be a dedicated network or a shared network, which represents an association of the different types of networks that use a variety of protocols, for example, Hypertext Transfer Protocol (HTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and Wireless Application Protocol (WAP), to communicate with each other. Further, the network 106 may include a variety of network devices, including routers, bridges, servers, computing devices, storage devices. The network devices within the network 106 may interact with the system 100 through communication links.
The system 100 may be implemented in a workstation, a mainframe computer, a server, and a network server. In an embodiment, the computing device 102 further comprises one or more hardware processors 108, one or more memory 110, hereinafter referred as a memory 110 and a data repository 112, for example, a repository 112. The memory 110 is in communication with the one or more hardware processors 108, wherein the one or more hardware processors 108 are configured to execute programmed instructions stored in the memory 110, to perform various functions as explained in the later part of the disclosure. The repository 112 may store data processed, received, and generated by the system 100.
The system 100 supports various connectivity options such as BLUETOOTH®, USB, ZigBee and other cellular services. The network environment enables connection of various components of the system 100 using any communication link including Internet, WAN, MAN, and so on. In an exemplary embodiment, the system 100 is implemented to operate as a stand-alone device. In another embodiment, the system 100 may be implemented to work as a loosely coupled device to a smart computing environment. The components and functionalities of the system 100 are described further in detail.
According to an embodiment of the disclosure, the input/output interface 104 is configured to receive a music corresponding to the person from a tune database 112, wherein the tune database 112 is the same as the data repository 112 described above. The music has a target reference tune perceived as happy. The music comprises a plurality of silence-segments, and the duration of each silence-segment in the initial-reference tune is twice the duration of the corresponding silence-segment in the target-reference tune. The target reference tune comprises a plurality of major scale notes, and the initial reference tune consists of a plurality of minor scale notes replacing the corresponding major scale notes that are not in the minor scale, wherein the plurality of major scale notes is typically perceived as happy and the plurality of minor-scale notes is perceived as less happy by the person.
A closed-loop intervention of the system 100 is shown as an intervention system architecture in FIG. 2. In this type of system, the user’s mental state is sensed using a plurality of physiological sensors 114. It should be appreciated that the system 100 is always in communication with the plurality of physiological sensors 114. The plurality of physiological sensors 114 is configured to capture various physiological signals such as voice, image, video, photoplethysmogram, temperature, etc. in a mobile/wearable based system. The plurality of physiological signals may also comprise electroencephalogram (EEG), electrocardiogram (ECG), etc. in laboratory conditions. It should be appreciated that the measurement of any other physiological signal is well within the scope of this disclosure.
According to an embodiment of the disclosure, the system 100 uses a technique where the user feels that each tune has changed little compared to the previous one. As the tunes change, the sequence reaches the target-reference tune. If each of these progressive tunes is generated from the target-reference tune, subjective information is needed about only this tune and the impact of the generation mechanism.
For the sake of clarity in the present disclosure, the major scale notes are perceived as Happy and minor-scale notes are perceived as Sad. Since such associations are subjective, a personalized intervention system may be necessary in practice to determine the target-reference tune. Further, the gradation technique below can be extended to any pair of tunes, identified subjectively or otherwise.
A monophonic tune is considered that is known to be perceived as Happy. If necessary, such a tune may be extracted from polyphonic music. For example, the main melody of composed pieces may be extracted as a monophonic tune. Since this tune is associated with Happy, it is treated as the target-reference tune of the audio intervention. It is hypothesized that the same tune at a much slower tempo, and with the major-scale notes replaced by minor-scale notes, is closer to Sad than Happy. This tune is called the initial-reference tune, which may be generated in this manner, or by any other means which may take subjective feedback into account. Starting from the initial-reference tune (typically perceived as Sad), gradation is introduced along the two axes of time and pitch to generate a series of successive tunes that leads to the target-reference tune (typically perceived as Happy).
According to an embodiment of the disclosure, the system 100 is configured to introduce the gradation in time to generate a series of tunes. Let the tempo (i.e. speed) of a given tune, T, be R beats per minute; N intermediate tunes are generated, which increase in tempo. In the present disclosure, a linear, one-dimensional gradation in time has been used, which is realized by varying the tempo in N linear steps from R/2 to R. The time-gradation factor t of a tune is the ratio of its tempo to R. For the initial-reference tune, T1, t = 0.5 and for the target reference tune, T, t = 1.0. Thus, the duration of each note/silence-segment in the initial-reference tune is twice the duration of the corresponding note/silence-segment in the target-reference tune. For the intermediate tunes, the durations scale inversely with t.
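For illustration only, the linear time gradation described above may be sketched as follows; the function names are illustrative and not part of the disclosure:

```python
def time_gradation_factors(n_steps):
    """Return n_steps time-gradation factors t, linearly spaced from 0.5 to 1.0.

    t = 0.5 corresponds to the initial-reference tune (tempo R/2) and
    t = 1.0 to the target-reference tune (tempo R).
    """
    if n_steps < 2:
        return [1.0]
    return [0.5 + 0.5 * i / (n_steps - 1) for i in range(n_steps)]


def scale_durations(note_durations_ms, t):
    """Note/silence durations scale inversely with the gradation factor t."""
    return [d / t for d in note_durations_ms]
```

For example, with five steps the tempo factors are 0.5, 0.625, 0.75, 0.875 and 1.0, and a 300 ms note in the target-reference tune lasts 600 ms in the initial-reference tune.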
According to an embodiment of the disclosure, the system 100 is also configured to introduce gradation in pitch using transients. The transients were used to introduce a gradation in pitch between the initial- and target-reference tunes. In an example, consider the 12 notes in an octave starting from a reference note, called the key in western music. Note 0, which is the key itself, and Notes 1 to 11 constitute the octave. The major-scale consists of Notes 0, 2, 4, 5, 7, 9, 11. The minor-scale consists of Notes 0, 2, 3, 5, 7, 8, 10. To obtain the initial-reference tune, all occurrences of Notes 4, 9 and 11 are found with respect to the key specified for the target-reference tune. Then each of these notes is replaced with the corresponding minor-scale notes, Notes 3, 8 and 10, respectively. Next, let the duration of the original note be LN. Then, the duration of the replaced note is also LN. To introduce pitch gradation, each replaced note is converted to a pitch curve of duration LN. This pitch curve consists of an anchor note (of fixed pitch) and a pitch transient that follows the anchor.
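The note-replacement step above (Notes 4, 9 and 11 replaced by Notes 3, 8 and 10, with durations preserved) may be sketched, for illustration, as follows; the representation of a tune as (pitch, duration) pairs is an assumption for the sketch:

```python
# Notes are numbered 0-11 relative to the key. Major-scale Notes 4, 9 and 11
# (absent from the minor scale) map to the minor-scale Notes 3, 8 and 10.
MAJOR_TO_MINOR = {4: 3, 9: 8, 11: 10}


def to_initial_reference(notes):
    """Replace major-only notes with minor-scale notes, keeping each duration L_N.

    notes: list of (pitch_class, duration_ms) tuples relative to the key.
    """
    return [(MAJOR_TO_MINOR.get(pitch, pitch), dur) for pitch, dur in notes]
```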
Thus, Notes 2, 7 and 9 serve as the anchors for Notes 3, 8 and 10, and the nominal duration of a transient is set as LT = 300 ms. This value is typically observed in professional Carnatic music concerts, and is used in the described embodiment, but other values are also possible and may further be based on other genres of music. In an example, the transients were introduced from the anchor pitch a, to q notes above a. Large transients were used by setting q = 3, to cater to listeners who are not familiar with Carnatic music. However, there are other possible choices: e.g. the transient may be one note lower than the anchor, where q = -1. The duration of the transient is fixed as follows. If LN ≤ LT, the pitch curve p(t) consists of only a transient of duration LN. If LN > LT, an anchor of duration la = LN − LT precedes the transient. Thus:
p(t) = a, for 0 ≤ t < la …………………………………… (1)
p(t) = a + q cos(π(t − t0)/LT), for la ≤ t < LN ………… (2)
In equation (2), cosine curves are used to model the transients, but other models such as linear, quadratic and cubic have also been found to work. Just as t quantifies the time-gradation, ρ is used to quantify gradation in pitch. The quantity ρ is defined as the duration of major-scale notes in the pitch curve, relative to the duration of the replaced notes. For the initial-reference tune, ρ = 0, and for the target-reference tune, ρ = 1.0. For tunes with transients, ρ depends on the tempo. For an example target-reference tune at a given tempo, the target-reference tune, the generated initial-reference tune, and the generated transient tune are shown in the graphical representation of FIG. 3.
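Equations (1) and (2) may be sketched, for illustration, as follows; taking the transient start t0 = la is an assumption of the sketch, and the function name is illustrative:

```python
import math

L_T = 300.0  # nominal transient duration in ms, as in the described embodiment


def pitch_curve(a, q, l_n, t, l_t=L_T):
    """Pitch at time t (ms) within a replaced note of total duration l_n.

    An anchor of fixed pitch a occupies l_a = l_n - l_t (when positive),
    followed by a cosine-modelled transient spanning q notes relative to a,
    per equations (1) and (2). Assumes t0 = l_a.
    """
    l_a = max(l_n - l_t, 0.0)
    if t < l_a:
        return a  # equation (1): anchor segment of fixed pitch
    # equation (2): cosine transient over the remaining duration
    return a + q * math.cos(math.pi * (t - l_a) / l_t)
```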
According to an embodiment of the disclosure, the combined time and pitch gradation is shown in the graphical representation of FIG. 4. Tunes without transients are marked by cross marks and the tunes with transients are marked by circles. For each time gradation, the pitch gradation value that can be realized is plotted as a dotted line. Exactly one pair of time and pitch gradation is possible due to the constraint imposed by LT = 300 ms. If this constraint is relaxed fully, the entire region in the pitch/time gradation area can be realized. However, not all tunes generated thus are perceived as musical. In practice, for musical tunes, a smaller region can be covered. An example is shown by the thick arrow whose width is governed by the range of LT. This arrow also shows an intervention path made possible by the gradation approach disclosed herein. Without pitch gradation, the only paths possible are horizontal lines at ρ = 0 and ρ = 1.
FIG. 5 illustrates an example flow chart of a method 500 for generating music for improving emotional wellbeing of a person, in accordance with an example embodiment of the present disclosure. The method 500 depicted in the flow chart may be executed by a system, for example, the system 100 of FIG. 1. In an example embodiment, the system 100 may be embodied in a computing device.
Operations of the flowchart, and combinations of operations in the flowchart, may be implemented by various means, such as hardware, firmware, processor, circuitry and/or other device associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described in various embodiments may be embodied by computer program instructions. In an example embodiment, the computer program instructions, which embody the procedures, described in various embodiments may be stored by at least one memory device of a system and executed by at least one processor in the system. Any such computer program instructions may be loaded onto a computer or other programmable system (for example, hardware) to produce a machine, such that the resulting computer or other programmable system embody means for implementing the operations specified in the flowchart. It will be noted herein that the operations of the method 500 are described with help of system 100. However, the operations of the method 500 can be described and/or practiced by using any other system.
Initially at step 502 of the method 500, a music is received via the user interface 104. The received music is corresponding to the person and the music is taken from a tune database. The music is related to a target reference tune, the target reference tune is chosen subjectively based on a present state of the person.
In the next step 504 of the method 500, the music is converted into an initial reference tune. The initial reference tune is perceived as subjectively related to the target reference tune. For the initial reference tune, the gradation in pitch is quantified as zero. The initial reference tune is decided by the user.
At step 506 of the method 500, a plurality of time graded initial reference tunes is generated, wherein the gradation in time is provided by varying a tempo in the initial reference tune. Further at step 508, a plurality of pitch graded tunes is generated from the plurality of time graded initial reference tunes, wherein the gradation in pitch is provided by varying a plurality of transients in the music.
Finally, at step 510 of the method 500, one of a pre-selected sequence of tunes, or a sequence of tunes selected according to a subjectively preferred order from the plurality of time graded initial tunes and the plurality of pitch graded tunes, is provided to the user, followed by the target reference tune. The subjectively preferred order starts with the initial reference tune and is followed by the plurality of time graded initial reference tunes if the initial reference tune is perceived as happier than the corresponding pitch graded tune; otherwise, the sequence of tunes is the plurality of corresponding pitch graded reference tunes. The target reference tune is chosen subjectively such that it is perceived as happy by the person, while the initial reference tune is perceived as less happy compared to the target reference tune.
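The assembly of the playback sequence in step 510 may be sketched, for illustration, as follows; treating the subjective comparison as a boolean input is an assumption of the sketch:

```python
def intervention_sequence(initial, time_graded, pitch_graded, target,
                          prefers_time_graded):
    """Assemble the playback order of step 510.

    prefers_time_graded: True if the initial reference tune is perceived as
    happier than its pitch-graded counterpart (a subjective judgment supplied
    externally, e.g. from user feedback).
    """
    middle = time_graded if prefers_time_graded else pitch_graded
    # Start with the initial reference tune, play the graded tunes,
    # and finish with the target reference tune.
    return [initial] + list(middle) + [target]
```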
The gradation in pitch is introduced by replacing each minor scale note that is not in the major scale with a pitch-transient relative to an anchor, wherein the anchor is the next note lower in pitch than the minor scale note. The duration of the transient is up to 150 ms, and the anchor occupies any remaining time in the duration of the note. The pitch graded reference tune is close to the tune without pitch grading at the same tempo. Thus, on listening to the time and pitch graded reference tunes, the person may be expected to perceive only a small change in a tune at a graded step compared to the tune in the previous step of the gradation.
Experimental evaluations and results
Pitch-bend commands were used through a Musical Instrument Digital Interface (MIDI) [20] to synthesize the pitch curves as described earlier. To understand the effect of gradation, a listening experiment was conducted. In such experiments, participants tend to remember the tunes they heard earlier. Thus, only a specific subset of the nearly 50 possible tunes was chosen.
Two happy tunes were chosen for the experiment from a prior study. For each tune, its monophonic melody track is referred to as the ‘Fast Major’ flavor. This flavor is at the original tempo but is called ‘Fast’ because happy tunes are typically fast. The ‘Fast Minor’ flavor is constructed by replacing the major-scale notes with the corresponding minor-scale notes. The ‘Slow Minor’ flavor is constructed by setting the tempo of the ‘Fast Minor’ flavor to t = 0.5 times the original tempo. For each minor flavor, the corresponding transient flavor is constructed as described earlier. The pitch-gradation quantity ρ varies between 0.5 and 0.7. In FIG. 4, the Slow Minor, Fast Minor and Fast Major flavors are marked by crosses and the transient flavors are marked by circles.
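The construction of the three flavors described above may be sketched, for illustration, using the note mapping and time-gradation factor introduced earlier; the (pitch, duration) tune representation and function name are assumptions of the sketch:

```python
# Major-scale Notes 4, 9 and 11 map to minor-scale Notes 3, 8 and 10.
MAJOR_TO_MINOR = {4: 3, 9: 8, 11: 10}


def make_flavors(fast_major):
    """Build the Fast Minor and Slow Minor flavors from the Fast Major melody.

    fast_major: list of (pitch_class, duration_ms) at the original tempo.
    """
    fast_minor = [(MAJOR_TO_MINOR.get(p, p), d) for p, d in fast_major]
    # Slow Minor: time-gradation factor t = 0.5, so every duration doubles.
    slow_minor = [(p, d / 0.5) for p, d in fast_minor]
    return {"Fast Major": fast_major,
            "Fast Minor": fast_minor,
            "Slow Minor": slow_minor}
```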
The listening experiment consisted of listening to and rating the tunes. First, the Slow Minor flavor of a tune was played, and the participant had to choose the emotion that they associated with it. The list of emotions provided comprised the eight basic emotions in Plutchik’s model. However, the result is interpreted in terms of positive and negative valence rather than the exact emotions. Let a participant’s response be E1. That is, they associated the emotion E1 with the Slow Minor flavor. Next, the participant listened to the Slow transient flavor and had to rate whether the association with E1 increased, decreased, or remained the same (i.e. no difference). This was repeated for the other tune. The order of the two tunes was randomized across participants.
Next, the participants were presented with the Fast Minor flavor and had to choose the emotion E2 they associated best with it. Then, the Fast transient flavor was provided to them and they opined whether its association with E2 had increased, decreased, or showed no difference. Thirdly, each participant chose the emotion E3 that they associated most with the Fast Major flavor. Finally, the Fast transient flavor was provided and the participant opined whether its association with E3 had increased, decreased, or showed no difference.
The survey described in the previous paragraph was completed by 24 participants who gave informed consent. The number of responses for the emotions of the Slow Minor, Fast Minor and Fast Major flavors are categorized as ‘Positive’ and ‘Negative’, and given in Table 1. It is encouraging that the Fast Major flavor, i.e. the melody tracks of the original tunes, is perceived as happy in 46 out of 48 ratings. This result matches the hypothesis in a completely different demography. Six out of 48 ratings associate a negative emotion when minor-scale notes are introduced, and 22 ratings out of 48 do so when the tempo is halved. The effect of transients is given in FIG. 6. In this case, ‘Positive’ means that either (a) the emotion associated with the reference flavor is positive and the transient is perceived to have increased it or (b) the emotion associated with the reference flavor is negative and the transient is perceived to have decreased it. If the participant chose ‘no difference’, it is counted under the ‘Same’ category. The observations are:
The emotional impact of transients is felt more at slow tempi than at fast tempi.
When this impact is felt, the tunes with transients are approximately twice as likely to be perceived as more positive than as more negative.
Table 1: Perceived emotional valence of the tune flavors
             |       Positive        |       Negative
Flavour      | Tune 1 | Tune 2 |  %  | Tune 1 | Tune 2 |  %
Slow Minor   |   13   |   13   |  54 |   11   |   11   |  46
Fast Minor   |   22   |   20   |  87 |    2   |    4   |  13
Fast Major   |   24   |   22   |  96 |    0   |    2   |   4
The written description describes the subject matter herein to enable any person skilled in the art to make and use the embodiments. The scope of the subject matter embodiments is defined by the claims and may include other modifications that occur to those skilled in the art. Such other modifications are intended to be within the scope of the claims if they have similar elements that do not differ from the literal language of the claims or if they include equivalent elements with insubstantial differences from the literal language of the claims.
The disclosure herein addresses the unresolved problem of improving the emotional wellbeing of human beings using music. The selection of music for improving emotional wellbeing is often not satisfactory to the listener. The embodiments thus provide a method and system for generating music for improving emotional wellbeing of a person.
It is to be understood that the scope of the protection is extended to such a program and in addition to a computer-readable means having a message therein; such computer-readable storage means contain program-code means for implementation of one or more steps of the method, when the program runs on a server or mobile device or any suitable programmable device. The hardware device can be any kind of device which can be programmed including e.g. any kind of computer like a server or a personal computer, or the like, or any combination thereof. The device may also include means which could be e.g. hardware means like e.g. an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a combination of hardware and software means, e.g. an ASIC and an FPGA, or at least one microprocessor and at least one memory with software processing components located therein. Thus, the means can include both hardware means and software means. The method embodiments described herein could be implemented in hardware and software. The device may also include software means. Alternatively, the embodiments may be implemented on different hardware devices, e.g. using a plurality of CPUs, GPUs etc.
The embodiments herein can comprise hardware and software elements. The embodiments that are implemented in software include but are not limited to, firmware, resident software, microcode, etc. The functions performed by various components described herein may be implemented in other components or combinations of other components. For the purposes of this description, a computer-usable or computer readable medium can be any apparatus that can comprise, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
The illustrated steps are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope of the disclosed embodiments. Also, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items. It must also be noted that as used herein and in the appended claims, the singular forms “a,” “an,” and “the” include plural references unless the context clearly dictates otherwise.
Furthermore, one or more computer-readable storage media may be utilized in implementing embodiments consistent with the present disclosure. A computer-readable storage medium refers to any type of physical memory on which information or data readable by a processor may be stored. Thus, a computer-readable storage medium may store instructions for execution by one or more processors, including instructions for causing the processor(s) to perform steps or stages consistent with the embodiments described herein. The term “computer-readable medium” should be understood to include tangible items and exclude carrier waves and transient signals, i.e., be non-transitory. Examples include random access memory (RAM), read-only memory (ROM), volatile memory, nonvolatile memory, hard drives, CD ROMs, DVDs, flash drives, disks, and any other known physical storage media.
It is intended that the disclosure and examples be considered as exemplary only, with a true scope of disclosed embodiments being indicated by the following claims.
CLAIMS:
1. A processor implemented method (500) for generating music for improving emotional wellbeing of a person, the method comprising:
receiving, via a user interface, music corresponding to the person from a tune database, wherein the music is related to a target reference tune, wherein the target reference tune is chosen subjectively based on a present state of the person (502);
converting, via one or more hardware processors, the music into an initial reference tune, wherein the initial reference tune is perceived as subjectively related to the target reference tune (504);
generating, via the one or more hardware processors, a plurality of time graded initial reference tunes, wherein the gradation in time is provided by varying a tempo in the initial reference tune (506);
generating, via the one or more hardware processors, a plurality of pitch graded tunes from the plurality of time graded initial reference tunes, wherein the gradation in pitch is provided by varying a plurality of transients in the music (508); and
providing, via the one or more hardware processors, one of a pre-selected sequence of tunes or a sequence of tunes selected according to a subjectively preferred order of tunes selected from the plurality of time graded initial tunes and the plurality of pitch graded tunes, followed by the target reference tune (510).
2. The processor implemented method of claim 1, wherein the subjectively preferred order of the sequence of tunes starts with the initial reference tune and is followed by the plurality of time graded initial reference tunes if the initial reference tune is perceived as happier than the corresponding pitch graded tune or the sequence of tunes is the plurality of corresponding pitch graded reference tunes if the initial reference tune is perceived as less happy than the corresponding pitch graded tune.
3. The processor implemented method of claim 1, wherein the target reference tune is chosen subjectively based on the target reference tune being perceived as happy by the person and the initial reference tune as being perceived less happy as compared to the target reference tune.
4. The processor implemented method of claim 1 further comprising sensing the person’s mental state using a plurality of physiological sensors to capture various physiological signals comprising voice, image, video, photoplethysmogram, temperature and electroencephalogram (EEG).
5. The processor implemented method of claim 1 wherein the music comprises a plurality of silence-segments, and the duration of each silence-segment in the initial-reference tune is twice the duration of the corresponding silence-segment in the target-reference tune.
6. The processor implemented method of claim 1, wherein the target reference tune comprises a plurality of major scale notes and the initial reference tune consists of a plurality of minor scale notes replacing the corresponding major scale notes that are not in the minor scale.
7. The processor implemented method of claim 1, wherein for the initial reference tune, the gradation in pitch is quantified as zero.
8. The processor implemented method of claim 1, wherein the gradation in pitch is introduced by replacing each minor scale note that is not in the major scale, with a pitch-transient relative to an anchor, wherein the anchor is the next note lower in pitch than the minor scale note.
9. The processor implemented method of claim 1, wherein the duration of the transient is up to 150 milliseconds (ms), and the anchor occupies any remaining time in the duration of the note.
10. A system (100) for generating music for improving emotional wellbeing of a person, the system comprising:
a user interface (104) for receiving music corresponding to the person from a tune database, wherein the music is related to a target reference tune, wherein the target reference tune is chosen subjectively based on a present state of the person;
one or more hardware processors (108); and
a memory (110) in communication with the one or more hardware processors, wherein the one or more hardware processors are configured to execute programmed instructions stored in the memory to:
convert the music into an initial reference tune, wherein the initial reference tune is perceived as subjectively related to the target reference tune;
generate a plurality of time graded initial reference tunes, wherein the gradation in time is provided by varying a tempo in the initial reference tune;
generate a plurality of pitch graded tunes from the plurality of time graded initial reference tunes, wherein the gradation in pitch is provided by varying a plurality of transients in the music; and
provide one of a pre-selected sequence of tunes or a sequence of tunes selected according to a subjectively preferred order of tunes selected from the plurality of time graded initial tunes and the plurality of pitch graded tunes, followed by the target reference tune.
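For illustration only, and not as part of the claims, the graded-tune generation recited in claims 1 and 5 to 9 can be outlined in code. The note names, scale tables, anchor mapping and helper functions below are hypothetical assumptions (C major/C natural minor is chosen as an example key, and placing the transient before its anchor within the note duration is likewise an assumption); the sketch only shows the shape of the computation.

```python
# Hypothetical sketch of the claimed method, not the claimed implementation.
# A tune is a list of (note_name, duration_ms) pairs.

C_MAJOR_ONLY = {"E": "Eb", "A": "Ab", "B": "Bb"}   # major notes not in C minor (claim 6)
ANCHOR_OF = {"Eb": "D", "Ab": "G", "Bb": "A"}      # assumed next-lower note (claim 8)
TRANSIENT_MS = 150                                  # upper bound on transient length (claim 9)

def to_initial_reference(target_tune):
    """Claim 6: replace major-scale notes not in the minor scale with their
    minor counterparts; claim 5 (by analogy): double every duration,
    i.e. halve the tempo relative to the target reference tune."""
    return [(C_MAJOR_ONLY.get(note, note), 2 * dur_ms)
            for note, dur_ms in target_tune]

def time_graded(tune, tempo_factor):
    """Gradation in time (claim 1): scale all durations by 1/tempo_factor."""
    return [(note, dur_ms / tempo_factor) for note, dur_ms in tune]

def pitch_graded(tune):
    """Gradation in pitch (claims 8, 9): replace each minor-only note with a
    pitch-transient from its anchor; the anchor fills the remaining time."""
    out = []
    for note, dur_ms in tune:
        if note in ANCHOR_OF:
            t = min(TRANSIENT_MS, dur_ms)  # transient lasts up to 150 ms
            out.append(("transient:%s->%s" % (ANCHOR_OF[note], note), t))
            if dur_ms > t:                 # anchor occupies remaining time
                out.append((ANCHOR_OF[note], dur_ms - t))
        else:
            out.append((note, dur_ms))
    return out
```

A sequence of tunes, as recited in the final step of claim 1, would then be assembled from `time_graded` and `pitch_graded` variants of the initial reference tune, followed by the target reference tune.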