Abstract: Methods and systems of the present disclosure are provided for automatically synthesizing music. Methods and systems of the present architecture also relate to digital formulation of musical sequences by means of an automaton using information-theoretic principles in order to generate musical compositions that are long random sequences of music. Apart from music creation, the proposed methods can also be used for objectives, including but not limited to, music content quantification, identifying plagiarism, music analysis, and musical transcription (music-to-text conversion).
CLAIMS:
We Claim:
1. A system for automatically synthesizing music comprising:
a database of a plurality of musical notes of one or more musical instruments;
a layered architecture formation module configured to form one or more layers, each layer having at least one musical note based on duration of said note in a given musical scale, wherein said one or more layers enable scale-constrained inter-layer and intra-layer transition between notes, and further wherein said transitions are updated in a state transition table based on probability of each transition; and
a triad pattern processing module configured to form a series of state transitions based on said probability of each transition to form a Markov chain, wherein said triad pattern processing module is further configured to identify a triad pattern from said Markov chain having one or more special triad states and to use said triad pattern to automate said music.
2. The system of claim 1, wherein said plurality of musical notes are distributed over one or more octaves.
3. The system of claim 1, wherein said one or more layers are formed based on p-adic time duration.
4. The system of claim 1, wherein said one or more layers are formed based on dyadic time duration.
5. The system of claim 1, wherein said system comprises multiple layered architectures for one or more musical scales, and wherein said system allows transition between notes of said multiple layered architectures.
6. The system of claim 1, wherein said system comprises multiple layered architectures for one or more musical scales, and wherein said multiple layered architectures are mixed to form a mixed layered architecture.
7. The system of claim 1, wherein said triad pattern emits an arbitrary length of symbols per pattern, and wherein said triad pattern emits progressive and/or regressive musical patterns over a succession.
8. The system of claim 1, wherein length of said triad pattern is computed by means of a random number, wherein once said triad pattern is emitted, a second triad pattern is obtained based on a shift in musical scale of said pattern by a defined value, and future triad patterns are generated based on said defined value and previous triad pattern.
9. The system of claim 1, wherein silence is incorporated between said state transitions.
10. The system of claim 1, wherein said system further comprises a rhythm synchronisation module configured to process said state transitions to enable a desired pattern to be formed between rhythm and melody.
11. A method for automatically synthesizing music, said method comprising the steps of:
generating a database of musical notes of one or more musical instruments;
creating a layered architecture having one or more layers, each layer having at least one musical note based on duration of said note in a given musical scale, wherein said one or more layers enable scale-constrained inter-layer and intra-layer transition between notes, and further wherein said transitions are updated in a state transition table based on probability of each transition;
forming a series of state transitions based on said probability of each transition to form a Markov chain; and
identifying a triad pattern from said Markov chain having one or more special triad states and using said triad pattern to automate said music.
12. The method of claim 11, wherein said one or more layers are formed based on p-adic time duration.
13. The method of claim 11, wherein said triad pattern emits an arbitrary length of symbols per pattern, and wherein said triad pattern emits progressive and/or regressive musical patterns over a succession.
14. The method of claim 11, wherein length of said triad pattern is computed by means of a random number, wherein once said triad pattern is emitted, a second triad pattern is obtained based on a shift in musical scale of said pattern by a defined value, and future triad patterns are generated based on said defined value and previous triad patterns.
FIELD OF THE INVENTION
[0001] Embodiments of the present invention generally relate to music synthesis. In particular, various embodiments relate to systems and methods for automated music synthesis by a machine.
BACKGROUND
[0002] The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art.
[0003] Music is a form of art with pleasant sound and silence for communication. Core parameters involved in any genre of music include melody, harmony, rhythm, dynamics, timbre and texture. Melody is the successive arrangement of musical notes that is perceived by the listener as a primary entity in music. Harmony refers to two complementary notes played simultaneously. Rhythm consists of cyclically repeating patterns in music with certain tempo and articulation. Dynamics refers to the control of the velocity/loudness of the musical note, a volume modulation technique in music. Timbre refers to the tone quality of a musical note from a psychoacoustic point of view and is unique for each musical instrument. For instance, the musical tone produced by a violin is different from that produced by a veena. Texture refers to the overall quality of music perceived in presence of melody, rhythm and harmonic elements; music comprising vocals along with several instruments can be regarded as having a thick music texture.
[0004] Classical Indian music has been classified into two broad genres, namely Carnatic and Hindustani, wherein Carnatic music is widely followed by musicians in south India and Hindustani music in the northern part of India. Carnatic music has a fundamental set of 72 melakarta ragas and numerous janya ragas derived from the fundamental set. Musical notes are considered with respect to constraints in a musical scale for the formation of melody sequences. Interval between a musical pitch and another with half or double of its frequency forms an octave.
[0005] In Western music, each octave comprises a series of 12 notes known as semitones, which constitute a chromatic scale. Contrary to this, Arabic and Persian music comprise quarter-tones. Indian music consists of 7 notes and 5 variant notes that correspond to the 12 notes of the European chromatic scale. These 7 notes are non-linearly placed in the ratio 4:3:2:4:4:3:2 based on consonance. Based on the cardinality of notes in the ascending and descending octave, there are various melodies such as seven-toned major, six-toned major, pentatonic minor, etc. The seven tones in Indian music corresponding to the heptatonic major scale are labeled as ‘s’, ‘r’, ‘g’, ‘m’, ‘p’, ‘d’, ‘n’. In the Western sol-fa equivalent, they correspond to ‘do’, ‘re’, ‘mi’, ‘fa’, ‘so’, ‘la’, ‘ti’.
[0006] In Indian music, all the musical notes are chosen relative to the base note frequency of ‘s’. The ‘s’ note can be chosen at any frequency over a musical scale, which determines the sruti of the Carnatic music considered.
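By way of a non-limiting illustration (not part of the claimed subject matter), the relative choice of notes against a freely chosen base ‘s’ can be sketched in Python as follows, assuming an equal-tempered approximation; Indian classical intonation is not strictly equal-tempered, and the semitone offsets and the 240 Hz base are illustrative assumptions only:

```python
# Illustrative equal-tempered semitone offsets for the seven notes of a
# heptatonic major scale, relative to the freely chosen base note 's'.
SEMITONE_OFFSETS = {'s': 0, 'r': 2, 'g': 4, 'm': 5, 'p': 7, 'd': 9, 'n': 11}

def note_frequency(note, base_hz):
    """Frequency of a note relative to the base ('sruti') frequency."""
    return base_hz * 2 ** (SEMITONE_OFFSETS[note] / 12)

# With 's' chosen at 240 Hz, 'p' (the fifth) lands close to 3/2 * 240 Hz:
fifth = note_frequency('p', 240.0)
```

Choosing a different base frequency for ‘s’ shifts every note of the scale proportionally, which is why the same melody can be rendered at any sruti.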
[0007] Music can be digitally encoded and embedded within audio or video streams. Digital audio has become a viable alternative to analog audio. In general, in digital audio, sound waves are represented as a series of number values which can be stored as data in a variety of media including hard disks, compact disks, digital audio tape, and computer RAM and ROM. Digital audio uses such data to provide unique and beneficial editing and signal processing capabilities. In digital audio, quantization and sampling processes are used to generate the data representing the amplitude (level) element of sound and the frequency (events over time) element of sound. An analog-to-digital converter (ADC) measures the amplitude of a sound signal in the form of an analog voltage signal at particular instances or samples. The rate at which the ADC takes these measurements is referred to as the sampling rate. Quantization is a process in which the ADC generates a series of binary or digital numbers representing the amplitude measurements. A digital-to-analog converter (DAC) transforms digital data representing sound into analog voltage signals. These analog voltage signals may then be applied to an audio amplifier and speakers for playing sound.
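As a non-limiting illustration of the sampling and quantization described above, the following Python sketch models an idealized ADC measuring a sine tone at discrete instants and quantizing each amplitude to a signed integer; the 440 Hz tone, CD-quality sampling rate, and 16-bit depth are illustrative assumptions:

```python
import math

def sample_and_quantize(freq_hz, duration_s, sample_rate=44100, bits=16):
    """Sample a sine tone at the given rate and quantize each amplitude
    measurement to a signed integer, as an idealized ADC would."""
    max_level = 2 ** (bits - 1) - 1          # e.g. 32767 for 16-bit audio
    n_samples = int(duration_s * sample_rate)
    samples = []
    for n in range(n_samples):
        t = n / sample_rate                   # sampling instant
        amplitude = math.sin(2 * math.pi * freq_hz * t)
        samples.append(round(amplitude * max_level))  # quantization step
    return samples

# One millisecond of a 440 Hz tone sampled at CD quality:
pcm = sample_and_quantize(440.0, 0.001)
```

A DAC performs the inverse mapping, scaling each stored integer back to an analog voltage level before amplification.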
[0008] Throughout history, people have been interested in methods and devices, be they mechanical, electronic or other, to automate the composing and/or playing of music. Especially since the so-called MIDI standard (Musical Instrument Digital Interface) was established in 1983 (MIDI 1.0 specification, Document No. MIDI-1.0, August 1983, International MIDI Association), which defines a standard interface through which synthesizers, rhythm machines, computers, etc. can be linked together, substantial research has been devoted to the automated composing and/or playing of music.
[0009] Most of the resulting methods and devices were meant to automatically generate accompaniments to a solo instrument, to compose background music for films, plays and presentations, or to produce music to entertain customers and to create the desired atmosphere in, for instance, restaurants or shops, commonly referred to as 'elevator music' or 'quiet music'.
[00010] One way of producing music automatically is to use an electronic system that produces so-called synthesized music. These systems generally comprise one or more electronic musical instruments or synthesizers and an automatic device producing control signals for them, which signals consist of digital code sequences.
[00011] In the prior art, methods and devices have been disclosed that use statistical approaches and employ, for instance, Markov processes, in which each musical note, fraction of a musical note, or group of musical notes is treated as an element of a state space. Music is generated by probability functions stored in memory, starting from an initial musical code sequence (state) to which is added a successor code sequence having the highest probability according to the probability function. The Markov process is then in a new state, and (part of) the increased sequence is used as a new initial code sequence so that the process endlessly generates control codes for one or more electronic musical devices that produce the resulting music accordingly. Additional rules are however necessary to produce typical musical structures and generate agreeable music. This method requires large amounts of empirical data as well as constraints imposed by the genre of music to generate synthesized music based on a stochastic process. If suitable constraints are not imposed, the outcome is typically quite monotonous.
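The prior-art Markov approach described above can be sketched as follows; the transition table below is a hypothetical example constructed for illustration, not taken from any cited disclosure:

```python
import random

# Hypothetical first-order transition table over a small note set;
# the successor probabilities for each current note sum to 1.
TRANSITIONS = {
    's': {'r': 0.5, 'g': 0.3, 'p': 0.2},
    'r': {'g': 0.6, 's': 0.4},
    'g': {'p': 0.7, 'r': 0.3},
    'p': {'s': 0.5, 'g': 0.5},
}

def generate(start, length, rng=random.random):
    """Walk the Markov chain: each successor note is drawn according to
    the stored probability function for the current state."""
    sequence = [start]
    for _ in range(length - 1):
        successors = TRANSITIONS[sequence[-1]]
        r, cumulative = rng(), 0.0
        for note, p in successors.items():
            cumulative += p
            if r < cumulative:
                sequence.append(note)
                break
        else:
            # Guard against floating-point rounding at the tail.
            sequence.append(next(iter(successors)))
    return sequence

melody = generate('s', 16)
```

Without further genre constraints, such an unconstrained walk tends to produce the monotonous output the paragraph above describes.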
[00012] Other prior art methods use heuristic rules based on musical expertise to produce digital code sequences to control musical instruments. This technique is frequently employed in the field of artificial intelligence and is, for instance, disclosed in US Pat. No. 4,926,737 issued to Minamitaka. US Pat. No. 5,418,323, issued to Kohonen, discloses a method for controlling an electronic musical device that does not use heuristic rules but forms a rule base automatically on the basis of digital training sequences. The algorithm disclosed by Kohonen is based on finding a set of 'grammatical' rules in a sequence of codes representing musical information. Kohonen uses so-called DEC (Dynamically Expanding Context) grammars, which were originally developed for on-line speech recognition. Human speech, however, consists of sentences that form grammatically well-formed pieces of information of limited length, while musical 'sentences' can be structures of undetermined length (e.g. long improvisations in Jazz). Because of this difference, the rule base has to be updated very frequently during the training phase.
[00013] Construction of the rule base from training material can either be static, i.e. based on the input and (batch) processing of existing (previously recorded) code sequences representing musical information, or dynamic, i.e. based on real-time input of codes by an electronic musical device, for instance an electronic musical instrument. Kohonen’s algorithm is reasonably efficient in the case of input and batch processing of existing training sequences but, given the current performance of PC hardware and software, is inefficient for real-time improvisation because of the necessary frequent updates of the rule base.
[00014] Other prior art methods include building a system that recognizes individual tabla strokes played by the musician and transfers them as symbols over a network. To deal with transmission delays, an algorithm predicts the next event by analyzing previous patterns before receiving the original events, and synthesizes an audio output estimate with the appropriate timing. This was first disclosed in M. Sarkar and B. Vercoe, “Recognition and Prediction in a Network Music Performance System for Indian Percussion”. W. Chai and B. Vercoe, “Music Thumbnailing via Structural Analysis,” ACM Multimedia Conference, November 2003, discloses implementation of algorithms for automated analysis of musical structure, including tonality analysis, recurrent structure analysis and salience analysis, to identify the singer from the available audio content. Parag Chordia and Alex Rae, “Tabla Gyan: A System for Realtime Tabla Recognition and Resynthesis,” in Proc. of the 2008 International Computer Music Conference (ICMC), developed algorithms related to predictive music modeling, raag recognition, and tabla and mridangam recognition by a machine that generates responses in real time.
[00015] It has to be appreciated that music signal processing is an emerging area of research, and all existing technologies relating to music synthesis and analysis, music information retrieval systems, music acoustics, music transcription, and music source separation are still in initial stages of research. All approaches in music signal processing to date have been implemented from a statistical signal processing point of view, using tools such as short-time Fourier transforms, time domain autocorrelation functions, and harmonic source separation techniques, among others.
[00016] It has been seen that, while synthesizing music, existing methods do not allow music with thick textures while incorporating overtones. Similarly, music content retrieval systems that enable search engines to detect music sequences with thick textures are also not available in the present art.
[00017] In view of the foregoing, there exists a need for a new system, architecture, and method of music signal processing using information theory in order to analyze information embedded in a musical scale, and to correlate concepts of information theory to user listening experience, and synthesize music with thick textures by incorporating overtones.
[00018] These and all other extrinsic materials discussed herein are incorporated by reference in their entirety. Where a definition or use of a term in an incorporated reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein applies and the definition of that term in the reference does not apply.
[00019] Unless the context dictates the contrary, all ranges set forth herein should be interpreted as being inclusive of their end points, and open-ended ranges should be interpreted to include commercially practical values. Similarly, all lists of values should be considered as inclusive of intermediate values unless the context indicates the contrary.
[00020] All publications herein are incorporated by reference to the same extent as if each individual publication or patent application were specifically and individually indicated to be incorporated by reference. Where a definition or use of a term in an incorporated reference is inconsistent or contrary to the definition of that term provided herein, the definition of that term provided herein applies and the definition of that term in the reference does not apply.
[00021] In some embodiments, the numbers expressing quantities of ingredients, properties such as concentration, reaction conditions, and so forth, used to describe and claim certain embodiments of the invention are to be understood as being modified in some instances by the term “about.” Accordingly, in some embodiments, the numerical parameters set forth in the written description and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by a particular embodiment. In some embodiments, the numerical parameters should be construed in light of the number of reported significant digits and by applying ordinary rounding techniques. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the invention are approximations, the numerical values set forth in the specific examples are reported as precisely as practicable. The numerical values presented in some embodiments of the invention may contain certain errors necessarily resulting from the standard deviation found in their respective testing measurements.
[00022] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
[00023] The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g. “such as”) provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
[00024] Groupings of alternative elements or embodiments of the invention disclosed herein are not to be construed as limitations. Each group member can be referred to and claimed individually or in any combination with other members of the group or other elements found herein. One or more members of a group can be included in, or deleted from, a group for reasons of convenience and/or patentability. When any such inclusion or deletion occurs, the specification is herein deemed to contain the group as modified thus fulfilling the written description of all Markush groups used in the appended claims.
OBJECTS OF THE INVENTION:
[00025] It is an object of the present disclosure to provide a system and method for automatically synthesizing music.
[00026] It is another object of the present disclosure to provide a system and method for detecting plagiarism in musical scripts.
[00027] It is another object of the present disclosure to provide a system and method for generating long random un-repeating sequences of music.
[00028] It is another object of the present disclosure to provide a system and method for mixed layered architectures from one or more scales.
[00029] It is another object of the present disclosure to provide a system and method for latching mixed layered architecture onto triad patterns.
[00030] It is another object of the present disclosure to provide a system and methods for providing rhythm synchronization.
[00031] It is another object of the present disclosure to provide a system and method for calculating the entropy of musical notes.
[00032] It is another object of the present disclosure to provide a system and method for providing p-adic time duration.
[00033] Various objects, features, aspects and advantages of the present invention will become more apparent from the detailed description of the invention herein below along with the accompanying drawing figures in which like numerals represent like components.
SUMMARY
[00034] Methods and systems of the present disclosure are provided for automatically synthesizing music. Methods and systems of the present architecture also relate to digital formulation of musical sequences by means of an automaton and information theory principles in order to generate musical compositions that are long random sequences of music. Apart from music creation, the proposed methods can also be used for objectives, including but not limited to, music content quantification, identifying plagiarism, music analysis, and musical transcription (music-to-text conversion).
[00035] In one aspect, the system of the present disclosure comprises a database of a plurality of musical notes of one or more musical instruments, wherein the one or more notes are distributed over one or more octaves. The system of the disclosure can further include a layered architecture formation module configured to form at least one layer having one or more notes of the database. As a given musical scale has multiple notes that may be sung across different note durations, each layer of the system can be divided based on a defined duration for which the respective note in the layer would be heard during a particular musical scale. In an exemplary embodiment, layers of the system can be formed based on p-adic time duration, such as dyadic duration, in which the first (bottom-most) layer would be played for 1 symbol duration, the second layer for ½ symbol duration, the third layer for ¼ symbol duration, and so on. Any other time duration, such as 1/3 or 1/6, among others, can also be incorporated.
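As a non-limiting sketch of the p-adic layer durations described above, the following Python fragment computes the duration assigned to each layer; the function name and interface are illustrative assumptions:

```python
from fractions import Fraction

def layer_durations(num_layers, p=2):
    """Note durations for a p-adic layered architecture: layer k is
    played for 1/p**k of a symbol duration (dyadic when p == 2)."""
    return [Fraction(1, p ** k) for k in range(num_layers)]

dyadic = layer_durations(4)         # 1, 1/2, 1/4, 1/8 symbol durations
triadic = layer_durations(3, p=3)   # 1, 1/3, 1/9 symbol durations
```

Using exact fractions rather than floating-point values keeps the rhythmic subdivisions free of rounding drift when many short notes are concatenated.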
[00036] In an exemplary implementation, each layered architecture can correspond to a given musical scale in which the notes are connected with other notes across the same or different layers based on transitions that can possibly take place from one state of musical tone (note) to another state without disturbing the aesthetics of the respective scale to which the layered architecture belongs. Transitions of notes within or across layers can therefore be constrained by the scale to which the respective layered architecture pertains so as to form/follow a Markov random process. Different state transitions can follow different orders of Markov process, wherein a set of state transitions can form a Markov chain having, say, N musical notes. Markov processes involve each musical note, fraction of a musical note, or group of musical notes being treated as a single stochastic state in a sequence of states. Music can be generated using Markov chains constrained by the one or more scales in context, by means of probability functions stored in memory, starting from an initial musical code sequence (state) to which is added a successor code sequence having the highest probability according to the probability function. A newly generated state can then be used as a new initial code sequence so that the process endlessly generates control codes for one or more electronic musical devices that produce the resulting music accordingly. Additional rules are however necessary to produce typical musical structures and generate agreeable music.
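One possible reading of the scale-constrained state transition table described above can be sketched in Python as follows; the class, the (layer, note) state representation, and the count-based probability update are illustrative assumptions rather than the disclosed implementation:

```python
from collections import defaultdict

class LayeredScaleChain:
    """Sketch: states are (layer, note) pairs; a transition is admitted
    only when both notes belong to the given scale, and the state
    transition table is updated with the count of each transition, from
    which its probability is derived."""

    def __init__(self, scale):
        self.scale = set(scale)
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, src, dst):
        _, note_s = src
        _, note_d = dst
        if note_s in self.scale and note_d in self.scale:
            self.counts[src][dst] += 1   # inter- or intra-layer transition

    def probability(self, src, dst):
        total = sum(self.counts[src].values())
        return self.counts[src][dst] / total if total else 0.0

chain = LayeredScaleChain(scale={'s', 'r', 'g', 'p', 'd'})
chain.observe((0, 's'), (0, 'r'))    # intra-layer transition, admitted
chain.observe((0, 's'), (1, 'g'))    # inter-layer transition, admitted
chain.observe((0, 's'), (1, 'm'))    # rejected: 'm' not in this scale
```

Because out-of-scale transitions are never counted, every walk over the resulting table stays within the aesthetics of the scale to which the layered architecture pertains.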
[00037] In another exemplary implementation, different layered architectures of one or more musical scales can be mixed to form a mixed layered architecture, allowing the system to transit from one layered architecture to another and get back, thereby adding more variety to the musical content.
[00038] In another aspect, the system of the present disclosure can include a triad pattern processing module configured to identify a triad pattern having one or more special triad states in a Markov chain process, wherein a given triad state can emit an arbitrary length of symbols per pattern. When a Markov random process enters into a triad state, its characteristic feature is to emit progressive or regressive musical patterns over a succession. The triad state can create a feel of a live musician performing in a concert. In an exemplary implementation, a triad can be defined as a chord comprising 3 different notes, wherein different chord constructions are formed by combinations of different notes of the same scale. When one or more notes enter the triad state, a random number is generated that determines the length of the triad pattern, based on which a sequence is emitted; once an arbitrary length of symbols per pattern is chosen, a first pattern is emitted based on the Markov model. A second pattern is obtained by a shift in a musical scale between note rankings as in the first pattern. Furthermore, the third pattern can be determined by the first and second patterns.
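One possible sketch of the triad-pattern emission described above follows; the particular constructions (an ascending run for the first pattern, a fixed scale-index shift for the second, and index arithmetic combining both for the third) are illustrative assumptions rather than the disclosed method:

```python
import random

SCALE = ['s', 'r', 'g', 'm', 'p', 'd', 'n']

def emit_triad_patterns(seed_note, shift=2, max_len=8, rng=None):
    """Sketch of triad-state emission: a random number fixes the pattern
    length; the first pattern is an ascending run from the seed note,
    the second is the first shifted within the scale by a defined value,
    and the third is derived from the previous two patterns."""
    rng = rng or random.Random()
    length = rng.randint(3, max_len)          # random pattern length
    start = SCALE.index(seed_note)
    first = [SCALE[(start + i) % len(SCALE)] for i in range(length)]
    second = [SCALE[(SCALE.index(n) + shift) % len(SCALE)] for n in first]
    third = [SCALE[(SCALE.index(a) + SCALE.index(b)) % len(SCALE)]
             for a, b in zip(first, second)]
    return first, second, third

first, second, third = emit_triad_patterns('s', rng=random.Random(7))
```

Successive calls with the emitted pattern as the new seed would yield the progressive or regressive succession of patterns described above.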
[00039] According to one embodiment, systems and methods of the present disclosure also employ synchronization of rhythm, wherein when the sequences are emitted, associated time durations are noted and a sequence is recorded in memory. In another aspect of the embodiment, silence is incorporated at regular intervals of time to highlight transitions of melody and rhythm in music.
[00040] Additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The aspects of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed.
BRIEF DESCRIPTION OF THE DRAWINGS
[00041] In the Figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.
[00042] FIG. 1 illustrates an exemplary system diagram of machine automated synthesis architecture in accordance with an embodiment of the present invention.
[00043] FIG. 2 illustrates an exemplary layered architecture in accordance with an embodiment of the present invention.
[00044] FIG. 3 illustrates an exemplary display of major triads in accordance with an embodiment of the present invention.
[00045] FIG. 4 illustrates an exemplary display of minor triads in accordance with an embodiment of the present invention.
[00046] FIG. 5 illustrates an exemplary display of augmented and diminished triads in accordance with an embodiment of the present invention.
[00047] FIG. 6 illustrates an exemplary flow diagram for generation of layered architecture in accordance with an embodiment of the present invention.
[00048] FIG. 7 illustrates an exemplary flow diagram for identification of triad pattern based on random Markov chains in accordance with an embodiment of the present invention.
DETAILED DESCRIPTION:
[00049] Embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying figures and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
[00050] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
[00051] Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, firmware and/or by human operators.
[00052] Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The machine-readable medium may include, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, semiconductor memories such as ROMs, random access memories (RAMs), programmable read-only memories (PROMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other types of media/machine-readable medium suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware).
[00053] Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
[00054] If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
[00055] Various modifications of these embodiments will be readily apparent to those skilled in the art in view of the present disclosure, and the generic methods defined herein may be applied to other embodiments.
[00056] All structural and functional equivalents to the elements of the various embodiments of the invention described throughout the disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and intended to be encompassed by the invention.
[00057] The above description and drawings are only illustrative of preferred embodiments which achieve the objects, features and advantages of the present invention, and it is not intended that the present invention be limited thereto. Any modification of the present invention which comes within the spirit and scope of the following claims is considered part of the present invention. Furthermore, to the extent that the terms “include”, “have” or the like are used in the description or the claims, such terms are intended to be inclusive in a manner similar to how the term “comprise” is interpreted when employed as a transitional word in a claim.
Definitions
[00058] Musical scale: can be defined as a sequence of atomic notes that are braided in an aesthetically constrained manner.
[00059] State: can be defined as an atomic note of a given musical scale, wherein the music scale can be mapped onto a state space.
[00060] Pitch: refers to a fundamental auditory attribute of sound by which musical tones can be ordered from low to high over a musical scale.
[00061] Tone: corresponds to a frequency subject to an instrument/voice. Tones produced by various instruments bear distinct characteristics even when they are tuned to the same pitch.
[00062] Markov chain: is a mathematical model involving transitions from one state to another, within a finite possible number of states.
Generation of Layered and Mixed Layered Architectures
[00063] Methods and systems of the present disclosure are provided for automatically synthesizing music. Methods and systems of the present architecture also relate to digital formulation of musical sequences by means of an automaton and information theory principles in order to generate musical compositions that are long random sequences of music. Apart from music creation, the proposed methods can also be used for objectives, including but not limited to, music content quantification, identifying plagiarism, music analysis, and musical transcription (music-to-text conversion).
[00064] FIG. 1 illustrates an exemplary system diagram 100 of a machine automated synthesis architecture in accordance with an embodiment of the present invention. In one aspect, system 100 of the present disclosure comprises a database 102 of a plurality of musical notes of one or more musical instruments, wherein the plurality of notes are distributed over one or more octaves. Database 102 can be further configured to store one or more musical scales, sequence of notes including value of each note relative to a standard musical scale, and also the octave and the time duration of each note, wherein octave can be referred to as the interval between a musical pitch and another with half or double of its frequency. Scale and octave portions of the stored information can be used to generate an audio signal of desired frequency.
[00065] In Western music, each octave comprises a series of 12 notes, known as semitones, which constitute a chromatic scale. In contrast, Arabic and Persian music include quarter-tones. Indian music consists of 7 notes and 5 variant notes that correspond to the 12 notes of the European chromatic scale. The 7 notes are non-linearly placed in the ratio 4:3:2:4:4:3:2 based on consonance. Based on the cardinality of notes in the ascending and descending octave, there are various melodies such as the seven-toned major, six-toned major, pentatonic minor, etc. The seven tones in Indian music corresponding to the heptatonic major scale are labeled ‘s’, ‘r’, ‘g’, ‘m’, ‘p’, ‘d’, ‘n’. In the Western sol-fa equivalent, they correspond to ‘do’, ‘re’, ‘mi’, ‘fa’, ‘so’, ‘la’, ‘ti’.
[00066] In Indian music, all the music notes are relatively chosen with respect to the base note frequency of ‘s’. The ‘s’ note can be chosen at any frequency over a musical scale, which depicts the sruti of the Carnatic music considered. Once the base frequency of ‘s’ is fixed, the tones ‘s’ and ‘p’ form the invariant perfect fifth, and there are variations around the other notes as shown in Table I. The frequencies and corresponding ranks of the notes above and below the base octave can be determined by a relative linear translation of the numbers from Table I.
TABLE I
TONE NAMES, NOTATION, RELATIVE FREQUENCY AND RANKINGS OVER AN OCTAVE AS PER SOUTH-INDIAN MUSICOLOGY

| Indian Music Tone Name | Notation | Western Equivalent | Normalized Relative Frequency | Indian Musicology Ranking |
|---|---|---|---|---|
| Shadja | s | C | 1 | 1 |
| shuddha rishabha | r1 | D flat | 16/15 | 2 |
| chatushruti rishabha | r2 | D | 9/8 | 3 |
| shuddha gandhara | g1 | E double flat | 32/27 | 3 |
| shatshruti rishabha | r3 | D sharp | 6/5 | 4 |
| sadharana gandhara | g2 | E flat | 6/5 | 4 |
| antara gandhara | g3 | E | 5/4 | 5 |
| shuddha madhyama | m1 | F | 4/3 | 6 |
| prati madhyama | m2 | F sharp | 45/32 | 7 |
| Panchama | p | G | 3/2 | 8 |
| shuddha daivata | d1 | A flat | 8/5 | 9 |
| chatushruti daivata | d2 | A | 27/16 | 10 |
| shuddha nishada | n1 | B double flat | 16/9 | 10 |
| shatshruti daivata | d3 | A sharp | 9/5 | 11 |
| kaishiki nishada | n2 | B flat | 9/5 | 11 |
| kakali nishada | n3 | B | 15/8 | 12 |
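As an illustration of the relative-frequency scheme in Table I, the sketch below (hypothetical code, not part of the specification) computes absolute tone frequencies once a base frequency for ‘s’ is chosen; the ratios are those listed in the table, and the function name is an assumption.

```python
# Just-intonation ratios copied from Table I (relative to base tone 's').
RATIOS = {
    "s": 1, "r1": 16/15, "r2": 9/8, "g1": 32/27, "r3": 6/5,
    "g2": 6/5, "g3": 5/4, "m1": 4/3, "m2": 45/32, "p": 3/2,
    "d1": 8/5, "d2": 27/16, "n1": 16/9, "d3": 9/5, "n2": 9/5, "n3": 15/8,
}

def tone_frequency(tone: str, base_hz: float, octave: int = 0) -> float:
    """Frequency of a tone relative to base 's'; each octave shift
    doubles (or halves) the frequency."""
    return base_hz * RATIOS[tone] * (2 ** octave)
```

For example, with ‘s’ fixed at 240 Hz, ‘p’ (the perfect fifth, ratio 3/2) falls at 360 Hz, and ‘s’ of the first upper octave at 480 Hz.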
[00067] Assuming that the set S denotes all music notes over an octave, with reference to Table I, south-Indian music comprises the set S = {s, r1, …, n3}. At the same time, let F denote a forbidden set that disallows certain tones from S from being paired together, wherein such a forbidden set can include tones that are accorded the same ranking, since they are perceptually close to the human ear, and can also include tones that are highly dissonant when paired together. In such a case, a music scale RF can be defined as a constrained sequence of notes arranged over ascending and descending octaves such that no two-element subset of musical sequences from RF belongs to F. It is also to be appreciated that musical scales can be asymmetric, i.e., the tones in the ascending and descending sequences of a musical scale need not be the same. At the same time, the scales also need not be ordered, i.e., the tones within a scale need not be partially ordered according to the rankings in Table I, and therefore the scales can have higher-ranked tones preceding lower-ranked tones.
[00068] According to one embodiment, each musical scale can incorporate multiple lexical constraints that mandate the possible combinations/arrangements of notes only in defined modes. For instance, the pentatonic scale with ascending sequence defined as R(a) = {s,r2,g3,p,d2,s(1)} and descending sequence defined as R(d) = {s(1),d2,p,g3,r2,s} is known as Mohana. In this scale, the descending sequence exhibits symmetry with the ascending sequence. Superscripts ‘a’ and ‘d’ in R(a) and R(d) denote the ascending and descending portions respectively. s(1) indicates the first upper octave, signifying that the scale continues to span multiple octaves. Over a 2-octave span during ascent, this scale comprises notes taking values from the set R = {p(-1), d2(-1), s, r2, g3, p, d2, s(1), r2(1), g3(1), p(1)}. The set of tones constituting the scale is arranged in a strictly increasing order of the tone rankings. Similarly, a south-Indian music scale that does not span more than an octave is known as Chittaranjani, wherein the ascending and descending sequences are defined as R(a) = {s,r1,g3,m2,p,d2,p,s(1)} and R(d) = {s(1),n3,d2,m2,g3,r2,s}.
[00069] According to one embodiment, system 100 of the disclosure can include a layered architecture formation module 104 configured to form at least one layer having one or more notes from the database 102. As a given musical scale has multiple notes that may be sung across different note durations, each layer of the system can be defined by the duration for which the respective notes in that layer are heard within a particular musical scale. In an exemplary embodiment, layers of the system can be formed based on p-adic time durations such as dyadic durations, in which the first (bottom-most) layer would be played for 1 symbol duration, the second layer for ½ symbol duration, the third layer for ¼ symbol duration, and so on. Any other time duration such as 1/3, 1/6, among others, can also be incorporated.
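The dyadic layering described above can be sketched as follows; `form_layers` and its signature are illustrative assumptions, not the specification's implementation.

```python
from fractions import Fraction

# Illustrative sketch: layer i carries the notes of a scale at emission
# duration p^-(i-1); with p = 2 this gives the dyadic durations 1, 1/2,
# 1/4, ... described in the text.
def form_layers(scale_notes, num_layers=3, p=2):
    """Return {layer_index: (duration, notes)} for a p-adic layering."""
    return {
        i: (Fraction(1, p ** (i - 1)), list(scale_notes))
        for i in range(1, num_layers + 1)
    }

layers = form_layers(["s", "r2", "g3", "p", "d2"])
```

Here every layer repeats the same scale notes and differs only in its emission duration, consistent with paragraph [00073]'s remark that the per-layer graphs can be identical apart from duration.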
[00070] According to one embodiment, the layered architecture, also hereinafter interchangeably referred to as a graph, can be constructed based on the emission of music tones of a given musical scale that are of variable duration and therefore take values from a finite set. Also, the notes themselves are constrained according to a musical scale spanning two or any other number of octaves. In an exemplary implementation, the two-octave restriction is to ensure good acoustic quality.
[00071] FIG. 2 illustrates an exemplary layered architecture 200 in accordance with an embodiment of the present invention. As can be seen, the graph 200 shows three layers, wherein the bottom layer has notes of 1 symbol duration, the middle layer has notes of 1/2 symbol duration, and the top layer has notes of 1/4 symbol duration. Therefore, for a graph G, which can correspond to one or more musical scales, the notes/musical tones are divided across layers based on the constraints defined by the musical scales that G represents and further based on a time duration of 2^-(i-1) units for layer i. For example, considering a unit duration of time, emission of one symbol from layer 1 is equivalent to emission of two symbols from layer 2 or four symbols from layer 3.
[00072] According to one embodiment, V(Gi) can denote the vertices of graph Gi, which may pertain to a single musical scale. Each vertex/state corresponds to a music tone belonging to the scale. The representation E(Gi) can denote the directed edges of graph Gi, indicating valid paths satisfying the scale constraints. Assuming Gi ∪ Gj denotes the merger of the graphs Gi and Gj such that |V(Gi ∪ Gj)| = |V(Gi)| + |V(Gj)|, then for any two vertices v1 ∈ V(Gi) and v2 ∈ V(Gj), a directed edge connects v1 and v2 if the scale constraints are satisfied.
[00073] According to one embodiment, the overall constrained graph over n layers can be represented by G = G1 ∪ G2 ∪ … ∪ Gn. Since there is no constraint on the scales in each layer, the graphs Gi and Gj can be identical except for the emission durations i and j.
[00074] Representation 200 shows multiple notes in each layer, wherein each note may be connected with one or more notes in the same layer or in another layer, as determined by the constraints of the scale to which the respective layered architecture belongs. For instance, a transition can take place from d2 in layer 2 to g3 in layer 3, but a transition cannot take place from s in layer 2 to p(-1) of layer 2. Similarly, transitions can also take place within the same layer, from p(-1) of layer 1 to d2(-1) of layer 1 to s of layer 1. According to one embodiment, each transition from one state of a tone to another state can be associated with a probability, which can define the sequence of notes chosen during the random walk of state transitions undertaken during Markov process implementation. For instance, having arrived at s from the state of tone d2(-1) of layer 1, the transition can either take place to r2 of layer 1 or to r2 of layer 2, wherein, in an instance, the probability of going to r2 of layer 1 may be 80%, and of going to r2 of layer 2 may be 20%. In such a case, in an exemplary implementation, while creating multiple musical patterns, out of 5 times that the process reaches s in this manner, 4 times the next state transition may take place to r2 of layer 1, and one time the next state transition may take place to r2 of layer 2.
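The 80/20 branching just described can be sketched as a weighted draw over the state transition table; the table contents and function name below are hypothetical illustrations, not the specification's code.

```python
import random

# Hypothetical sketch: states are (note, layer) pairs. From s in layer 1,
# the table assigns 80% probability to r2 of layer 1 and 20% to r2 of
# layer 2, matching the example in the text.
TRANSITIONS = {
    ("s", 1): [(("r2", 1), 0.8), (("r2", 2), 0.2)],
}

def next_state(state, rng=random):
    """Draw the next state according to the stored transition probabilities."""
    choices, weights = zip(*TRANSITIONS[state])
    return rng.choices(choices, weights=weights, k=1)[0]
```

Over many draws, roughly four out of five visits to s continue to r2 of layer 1, as the paragraph describes.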
[00075] In another exemplary implementation, different layered architectures of one or more musical scales can be mixed to form a mixed layered architecture, allowing the system to transit from one layered architecture to another and back, thereby adding more variety to the musical content, similar to what is done in popular music such as light or semi-classical music as well as in Western music. Each layered architecture can also be indicative of more than one musical scale, which can then be inter-mixed with other layered architectures of allied or distinct musical scales in order to enable transitions that are not only intra-layer or inter-layer, but also inter-layered-architecture.
Markov Random Processing over Layered Architecture
[00076] Typical Markov chains describe how to get from one event to another. For instance, in the case of three events A, B, and C which can come in any order, a Markov chain can describe the probability of what the next note will be, based on a transition table of, say, a first-order or multi-order Markov chain. In the instant example, as there are three possible states, the current note is either A, B, or C. For each possible current state, there are three possible next notes. Each row in the transition table can indicate the relative probability of going to each next note. For example, if one is currently at note A, there is a 20% chance of repeating note A, a 50% chance of going to note B, and a 30% chance of going to note C, such that the probabilities in each row sum to 100% (20 + 50 + 30 = 100). A first-order Markov chain means that only the current state affects the choice of the next event. A second-order Markov chain means that the current state and the last state affect the choice of the next event. A third-order Markov chain would indicate that the current state and the last two states in the sequence affect the choice of the next state.
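The row of the transition table quoted above (20/50/30 from note A) can be applied deterministically to a uniform random number via cumulative probabilities; the helper below is an illustrative sketch, not the specification's implementation.

```python
# Illustrative sketch: map a uniform random number r in [0, 1) to the
# next note using the cumulative probabilities of a transition-table row.
ROW_A = {"A": 0.2, "B": 0.5, "C": 0.3}  # row for current note A; sums to 1

def pick_next(row, r):
    """Return the next note for uniform r in [0, 1)."""
    cum = 0.0
    for note, prob in row.items():
        cum += prob
        if r < cum:
            return note
    return note  # guard against floating-point rounding near r = 1
```

With this row, r = 0.10 yields A, r = 0.60 yields B, and r = 0.95 yields C, realizing the 20/50/30 split.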
[00077] In an exemplary implementation, each layered architecture can correspond to a given musical scale in which the notes are connected with other notes across the same or different layers based on transitions that can possibly take place from one state of a musical tone (note) to another without disturbing the aesthetics of the scale to which the layered architecture belongs. Transitions of notes within or across layers can therefore be constrained by the scale to which the respective layered architecture pertains so as to form/follow a Markov random process. For instance, with respect to Figure 2, transitions can take place intra-layer, such as from p(-1) to d2(-1) within layer 1, and inter-layer, from say s in layer 1 to r2 of layer 2. Different state transitions can follow different orders of Markov process, wherein a set of state transitions can form a Markov chain having, say, N musical notes. Markov processes treat each musical note, fraction of a musical note, or group of musical notes as a single stochastic state in a sequence of states. Music can be generated using Markov chains constrained by one or more scales in context via probability functions stored in memory, starting from an initial musical code sequence (state) to which is added the successor code sequence having the highest probability according to the probability function. The newly generated state can then be used as a new initial code sequence, so that the process endlessly generates control codes for one or more electronic musical devices that produce the resulting music accordingly. Additional rules are, however, necessary to produce typical musical structures and generate agreeable music.
[00078] According to one embodiment, therefore, the order of the Markov random process for building the automaton (random musical sequences of notes) depends on the constraints of the music scale. In an aspect, each tone a(i) appearing in layer i can be represented by a state s(a(i)) within that layer. State transitions from s(a(i)) to s(b(j)) can therefore be modeled, wherein emission of notes at time ‘t’ is governed by the transition probability p(a(i),b(j)) = Pr(st = a(i) | st-1 = b(j)) of the layered directed graph.
[00079] According to another embodiment, the layered graph for a 1st-order process can be generalized to scales represented by higher-order Markov processes. For instance, for a scale characterized by a Markov process of order D, in case blocks of music tones in layers i and j are denoted by a(i) = a1(i) a2(i)…aD(i) and b(j) = b1(j) b2(j)…bD(j), then, since the scale has memory D, a merger of blocks a(i) and b(j) is valid if the concatenated block a(i) ◦ b(j) satisfies the scale constraint. Accordingly, states within a graph can be represented by blocks of musical tones consistent with the memory of the Markov process. The states are connected by directed edges if the merger of these blocks, denoted by the ◦ operator, forms a valid sequence across all the layers. A random walk over the layered graph produces all musical tones that satisfy this constraint. Transition probabilities can therefore be modeled from real data to accurately represent those transitions that are musically pleasant to hear.
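Under the order-D scheme above, the validity of merging two block states can be sketched as an adjacent-pair check. The forbidden-pair set below stands in for the actual scale constraint, which the disclosure leaves general; the pairs chosen are tones that share a ranking in Table I.

```python
# Hypothetical sketch: blocks a and b are tuples of D tones; a directed
# edge joins them only if the concatenated block violates no constraint.
# Here the scale constraint is assumed to be a set of forbidden adjacent
# pairs, e.g. tones accorded the same ranking in Table I.
FORBIDDEN = {("r3", "g2"), ("g1", "r2")}  # illustrative same-rank pairs

def valid_merger(a, b, forbidden=FORBIDDEN):
    """True if concatenating blocks a and b violates no adjacent-pair rule."""
    seq = list(a) + list(b)
    return all((x, y) not in forbidden for x, y in zip(seq, seq[1:]))
```

For example, merging ("s", "r3") with ("g2", "p") fails because the pair (r3, g2) straddling the block boundary is forbidden, while ("s", "r2") followed by ("g3", "p") is a valid edge.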
[00080] According to one embodiment, a state of the instant disclosure can also be configured as a super state, wherein the super state, apart from a tone, can be extended to include an overtone with a certain sound pressure level. Emission of tones/overtones is stochastic in nature and can be governed by the musical patterns generated at the previous instant.
Identification of Triad Pattern
[00081] In another aspect, the system of the present disclosure can include a triad pattern processing module 106 configured to identify a triad pattern having one or more special triad states in a Markov chain process, wherein a given triad state can emit an arbitrary length of symbols per pattern. A characteristic feature of a triad state is to emit progressive or regressive musical patterns in succession. The automated Markov random process of the present disclosure can automatically enter into a triad state at certain random intervals of time and emit successive patterns with an arbitrary combination of symbols per pattern. The triad state can create the feel of a live musician performing in a concert. In an exemplary implementation, a triad can be defined as a chord comprising 3 different notes, wherein different chord constructions are formed by combinations of different notes of the same scale. When the process enters the triad state, a random number is generated to determine the length of the triad pattern, based on which a sequence is emitted; once an arbitrary length of symbols per pattern is chosen, a first pattern is emitted based on the Markov model. A second pattern is obtained by a shift over the musical scale preserving the differences between note rankings of the first pattern. Furthermore, the third pattern can be determined by the first and second patterns.
[00082] In an exemplary implementation, when the Markov random process latches onto a triad state at a certain random interval of time, a 1st pattern with an arbitrary combination of symbols per pattern can be emitted, following the constraints of the respective musical scale. The 2nd pattern is obtained by shifting the 1st pattern over the music scale by a factor of d, where d can have a value of ±1 or ±2; ‘+’ signifies a right shift and ‘−’ a left shift, while ‘1’ and ‘2’ indicate the number of shifts to be done over the musical scale. Once ‘d’ is fixed and the 2nd pattern is emitted with a progressive or regressive pattern, the third pattern can be generated with the same value of ‘d’ shift with respect to the 2nd pattern, progressively/regressively as was done before. One of the criteria for incorporating triad patterns into the Markov random process is to create artistically charming patterns and a feeling that the machine-synthesized music is similar to a live musician performance. Creation of successive patterns in a triad state is explained with an example below:
The ‘Mohana’ scale over 2 octaves is as shown:
S = {p(-1),d2(-1),s,r2,g3,p,d2,s(1),r2(1),g3(1),p(1)}
When the Markov random process enters into a triad state, an arbitrary number for the length of a base sequence is chosen. Suppose this length ‘b’ is 6; the first pattern could be generated as,
T1 = {s,r2,g3,r2,s,d2(-1)}
Then, to create a second pattern, the value of d is chosen as ±1 or ±2. In this case, suppose d = +1; then the second pattern T2 is obtained by right-shifting pattern T1 over the music scale S with the same first-order finite differences between note rankings,
T2 = {r2,g3,p,g3,r2,s}
Since d for the first note was selected as +1 previously, the third pattern T3 is now fixed with the value of ‘d’ equal to +1 for the first note and progressively shifted over the music scale with respect to T2.
T3 = {g3,p,d2,p,g3,r2}
Patterns T1, T2 and T3 together form one triad set emitted by the triad state.
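The T1/T2/T3 example above can be reproduced mechanically by shifting indices along the scale; `shift_pattern` is an assumed helper for illustration, not the specification's code.

```python
# Sketch reproducing the triad example: the 'Mohana' scale over 2 octaves,
# with T2 and T3 each obtained from the previous pattern by a d = +1 shift
# that preserves the rank differences between notes.
S = ["p(-1)", "d2(-1)", "s", "r2", "g3", "p", "d2",
     "s(1)", "r2(1)", "g3(1)", "p(1)"]

def shift_pattern(pattern, d, scale=S):
    """Shift every note of the pattern d positions along the scale."""
    return [scale[scale.index(note) + d] for note in pattern]

T1 = ["s", "r2", "g3", "r2", "s", "d2(-1)"]
T2 = shift_pattern(T1, +1)  # {r2, g3, p, g3, r2, s}
T3 = shift_pattern(T2, +1)  # {g3, p, d2, p, g3, r2}
```

The computed T2 and T3 match the patterns listed in the worked example above.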
[00083] FIG. 3 to FIG. 5 illustrate exemplary triads in accordance with an embodiment of the present invention. According to one embodiment, basic three-note chord types (triads) can be represented by 300 of FIG. 3, wherein major triads 54 are depicted in 300, minor triads 56 in 400 of FIG. 4, and augmented triads 58 and diminished triads 60 in 500 of FIG. 5, respectively. Those of ordinary skill can readily discern the patterns for a large number of other chords. Arranging the notes generated by different buttons according to the fundamental harmonic relationship, as opposed to a chromatic or diatonic relationship, allows these triads to be played or displayed as a simple shape. Three tones form a triangle in which all of the tones are adjacent and contiguous to each other (in the cases of major and minor triads), or a straight line of three tones (in the cases of augmented or diminished triads), without any other intervening tones, and are thereby contiguous. Large numbers of more complicated chords will not be contiguous, but will still be defined by a spatial pattern that transposes musically by merely shifting along the grid. An example of the educational power of these patterns is that they greatly facilitate learning to play chord progressions.
[00084] Representation 300 shows a major triad 54 forming a triangle of three adjacent notes pointing upwards with the name of the triad taken from the note in the leftmost member. Thus, 302 shows the notes played or displayed for F Major, a subdominant (IV) triad, made of the notes F (34) - A (18) – C (24), 304 shows the notes played or displayed for C Major, a tonic (I) triad, made of the notes C (24) - E (32) – G (38), and 306 shows the notes played or displayed for G Major, a dominant (V) triad, made of the notes G (24) – B (22) – D (28).
[00085] Representation 400 shows a minor triad 56 forming a triangle of three adjacent notes pointing downward with the name of the triad taken from the note in the leftmost member. Thus, 402 shows the notes played or displayed for F Minor, a subdominant (iv) triad, made of the notes F (34) – Ab (24) – C (16), 404 shows the notes played or displayed for C Minor, a tonic (i) triad, made of the notes C (24) – Eb (30) - G (38), and 406 shows the notes played or displayed for G Minor, made of the notes G (24) – Bb (38) – D (28).
[00086] Representation 500 shows an augmented triad 58 that is a straight line of three contiguous notes pointing upwards to the right and the diminished triad 60, which is a straight line of three adjacent notes pointing down and to the right. Thus, 502 shows the notes played or displayed for an augmented triad of Ab (16) – C (24) – E (32), and 504 shows the notes played or displayed for a diminished triad of A (30) – C (24) – Eb (18).
[00087] According to another embodiment, a large variety of input means can be used. Tactile input devices such as touch pads, buttons monitored by opto-electrical switches, lever-like keys, and the like can be used. The input devices can also be activation zones on a computer screen that can be activated by a mouse. The input devices can be sound generating devices or tool activated input devices including idiophones such as gongs, chimes or pipes. The present invention contemplates all such input devices and input means. While the present embodiments are implemented as MIDI devices, the invention is not limited to MIDI implementations and also encompasses alternatives such as hard-wired implementations that do not use MIDI interfaces, or even direct physical playing of a note by mechanical means.
[00088] According to another embodiment, an instrument display has the unique feature of visually representing the harmonic structure of the music being played in an intuitive and immediately appreciable way. The display consists of the same pattern of pitches as used for playing the instrument. As each note is played, the corresponding display element is lit. This creates a direct visual feedback component which was previously not known. There can be many subtleties, nuances and variations in the way notes are displayed. As an example, one of the more interesting nuances controls the color or brightness of a display element based upon the frequency and/or duration of the pitch's occurrence in the music. This will visually identify tonal centers as the music is being played in a manner which will be clearer to music students than most of them can discern aurally. In any event, even for skilled musicians, the visual cues are generally easier to decipher than aural ones.
Rhythm Synchronization
[00089] According to one embodiment, system 100 can further include a rhythm synchronization module 108 configured to, for an emitted sequence, note down the associated time duration and record a melody sequence in memory, which can also include the database 102 of the instant disclosure. In another aspect of the embodiment, silence can be incorporated at regular intervals of time to highlight transitions of melody and rhythm in music.
[00090] According to another embodiment, the rhythm synchronization module 108 can further be configured to enable a rhythmic pattern from an electronic drum to resonate with the exact time durations corresponding to the melody emission by forcing alignment. The final pattern, comprising melody and rhythm that are time-aligned, can be fed to a speaker that produces the melody in the foreground and the rhythm in the background.
Exemplary Methods
[00091] FIG. 6 illustrates an exemplary flow diagram 600 for generation of a layered architecture in accordance with an embodiment of the present invention. At step 602, a database of a plurality of musical notes is created, wherein the musical notes can be spread across different octaves. At 604, one or more layered graphs (collectively referred to as a layered architecture) can be created for at least one musical scale comprising a plurality of notes such that each layered graph, also referred to as a layer hereinafter, comprises a set of notes from the plurality of notes based on the p-adic durations of the respective notes.
[00092] At step 606, a Markov model can be trained/implemented so as to form a transition table between multiple states, wherein each state represents a musical note/tone, such that transitions from one state to another form a sequence of tones/notes in order to generate a musical pattern. In an instance, such transitions across states can take place within a layer or across different layers of a given layered architecture. In another instance, transitions can also take place across different layered architectures of respective musical scales or combinations thereof. Such transitions are determined and configured based on the constraints defined by the musical scale to which the respective layered architecture pertains. According to one embodiment, each state transition can be associated with a probability that governs how musical patterns are formed or have chances of being formed.
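Step 606's training of the transition table could, for example, proceed by counting observed note-to-note transitions in example sequences and normalizing each row into probabilities. This is one assumed approach, since the disclosure does not fix a training method.

```python
from collections import Counter, defaultdict

# Assumed training sketch for step 606: estimate first-order transition
# probabilities from example note sequences by normalizing the per-state
# transition counts into a row-stochastic table.
def train_table(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    table = {}
    for state, row in counts.items():
        total = sum(row.values())
        table[state] = {b: c / total for b, c in row.items()}
    return table
```

For instance, training on the single sequence s → r2 → g3 → r2 → s yields a table in which r2 moves to g3 or s with probability 0.5 each.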
[00093] At 608, random numbers are generated in order to select state transitions between multiple states/notes/tones based on probability of such transitions and length of desired music pattern. For instance, in case the desired length is of 5 states, transition, with reference to Figure 2, can take place from p(-1) of layer 1 -> d2(-1) of layer 1 -> s of layer 1 -> r2 of layer 2 -> g3 of layer 2. Such transitions are referred to based on the state transition tables created in step 606. At step 610, a random walk is made over the layered architecture(s) using random numbers generated in step 608 so as to generate/automate musical patterns based on the state transition tables. At 612, once a first state is chosen in the layered architecture based on the random walk, the next state can be chosen based on correlation between the random number and the state transition table.
[00094] FIG. 7 illustrates an exemplary flow diagram 700 for identification of triad pattern based on random Markov chains in accordance with an embodiment of the present invention. At step 702, during presentation of multiple musical patterns based on Markov chains, a triad state is entered into at random intervals of time such that one or more patterns are emitted, wherein each pattern can have an arbitrary length of symbols. At 704, once an arbitrary length of symbols per pattern is chosen, a first pattern is chosen based on the constrained rules of Markov model.
[00095] At 706, a second pattern can be obtained by a shift over the musical scale by a factor of ‘d’, considering the same finite-order difference between note rankings as in the first pattern. According to one embodiment, ‘d’ can be ±1 or ±2, where ‘+’ signifies a right shift and ‘−’ signifies a left shift, and wherein ‘1’ and ‘2’ indicate the number of shifts to be done over the musical scale. It should be appreciated that any other mode of pattern-based shifting can be incorporated. At 708, third and subsequent patterns are emitted with the same shift value of ‘d’ with respect to the second pattern in order to form a triad pattern.
[00096] It should be appreciated that any other step can be incorporated before, after, or during the above-mentioned methods. For instance, silence can be incorporated at random intervals of time, following the constraints of the music scales considered over a p-adic scale. In another instance, the steady-state distribution of each state and the max-entropic rate of a music scale can also be calculated in order to depict the information content hidden in the music. Any other step disclosed above or known commonly in the art can also be incorporated in the instant disclosure.
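The steady-state distribution and entropy-rate computation mentioned above can be sketched as follows; the transition matrix is illustrative, and power iteration is an assumed method rather than one prescribed by the disclosure.

```python
import math

# Illustrative row-stochastic transition matrix over three states.
P = [[0.2, 0.5, 0.3],
     [0.4, 0.1, 0.5],
     [0.3, 0.3, 0.4]]

def stationary(P, iters=500):
    """Approximate the steady-state distribution by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """Entropy rate in bits: the stationary-weighted average of the
    per-row transition entropies."""
    pi = stationary(P)
    return -sum(pi[i] * p * math.log2(p)
                for i, row in enumerate(P) for p in row if p > 0)
```

The resulting entropy rate quantifies the information content per emitted note, which is the quantity the paragraph suggests computing for a music scale.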
[00097] As used herein, and unless the context dictates otherwise, the term "coupled to" is intended to include both direct coupling (in which two elements that are coupled to each other contact each other) and indirect coupling (in which at least one additional element is located between the two elements). Therefore, the terms "coupled to" and "coupled with" are used synonymously. Within the context of this document terms "coupled to" and "coupled with" are also used euphemistically to mean “communicatively coupled with” over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary device.
[00098] It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms “comprises” and “comprising” should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C … and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.
ADVANTAGES OF THE INVENTION
[00099] The present disclosure provides a system and method for synthesizing music digitally by means of an automaton.
[000100] The present disclosure provides a system and method for detecting plagiarism in musical scripts.
[000101] The present disclosure provides a system and method for listening to long random un-repeating sequences of music.
[000102] The present disclosure provides a system and method for latching onto triad patterns.
[000103] The present disclosure provides a system and method for forming mixed layered architectures from one or more scales.
[000104] The present disclosure provides a system and method for providing rhythm synchronization.
[000105] The present disclosure provides a system and method for providing calculation of entropy of musical notes.
[000106] The present disclosure provides a system and method for providing dyadic and p-adic time duration.
| # | Name | Date |
|---|---|---|
| 1 | Form 5.pdf | 2014-01-31 |
| 2 | Form 3.pdf | 2014-01-31 |
| 3 | Drawings.pdf | 2014-01-31 |
| 4 | Complete Specification.pdf | 2014-01-31 |
| 5 | 336-CHE-2014 POWER OF ATTORNEY 01-04-2014.pdf | 2014-04-01 |
| 6 | 336-CHE-2014 FORM-1 01-04-2014.pdf | 2014-04-01 |
| 7 | 336-CHE-2014 CORRESPONDENCE OTHERS 01-04-2014.pdf | 2014-04-01 |
| 8 | 336-CHE-2014-FER.pdf | 2019-10-30 |
| 9 | 336-CHE-2014-FORM-26 [10-12-2019(online)].pdf | 2019-12-10 |
| 10 | 336-CHE-2014-CLAIMS [10-12-2019(online)].pdf | 2019-12-10 |
| 11 | 336-CHE-2014-COMPLETE SPECIFICATION [10-12-2019(online)].pdf | 2019-12-10 |
| 12 | 336-CHE-2014-CORRESPONDENCE [10-12-2019(online)].pdf | 2019-12-10 |
| 13 | 336-CHE-2014-DRAWING [10-12-2019(online)].pdf | 2019-12-10 |
| 14 | 336-CHE-2014-FER_SER_REPLY [10-12-2019(online)].pdf | 2019-12-10 |
| 15 | 336-CHE-2014-ABSTRACT [10-12-2019(online)].pdf | 2019-12-10 |
| 16 | 336-CHE-2014-Correspondence to notify the Controller [07-09-2021(online)].pdf | 2021-09-07 |
| 17 | 336-CHE-2014-Written submissions and relevant documents [21-09-2021(online)].pdf | 2021-09-21 |
| 18 | 336-CHE-2014-Annexure [21-09-2021(online)].pdf | 2021-09-21 |
| 19 | 336-CHE-2014-US(14)-HearingNotice-(HearingDate-09-09-2021).pdf | 2021-10-17 |
| 20 | 336-CHE-2014-PatentCertificate31-10-2022.pdf | 2022-10-31 |
| 21 | 336-CHE-2014-IntimationOfGrant31-10-2022.pdf | 2022-10-31 |
| 22 | 336-CHE-2014-OTHERS [23-12-2022(online)].pdf | 2022-12-23 |
| 23 | 336-CHE-2014-EDUCATIONAL INSTITUTION(S) [23-12-2022(online)].pdf | 2022-12-23 |

| # | Name |
|---|---|
| 1 | latestsearchstrategy_03-10-2019.pdf |