Abstract: The present invention discloses a system and method for optimising a portfolio of stocks using price distribution curves predicted by an ensemble of LSTM neural networks. The ensemble evaluates a dataset comprising past stock prices and technical indicators of the last N periods to generate a probability distribution curve for future stock prices, that is, for the next period. The past dataset is a set of frequency histograms of the high prices and low prices. Thus, N multi-modal price distribution curves are generated for N past periods (e.g. 12 histograms, one for each month of the year), each curve representing the number of times the highest or lowest price of a day fell within a given price interval. Each LSTM network of the ensemble performs a multi-class classification on a given stock to predict the future probability distribution curve of the highest or lowest prices of the next period. Finally, a processor converts the probability distribution curve into a probability of profit and a probability of loss in the next period, given the last traded price (LTP). This processor is computationally cheap and can be invoked multiple times in a single second to generate the probability of profit and loss. A further processor computes the weights of the portfolio stocks, keeping each weight directly proportional to the net profit probability, that is, the difference between the probability of profit and the probability of loss. These weights and the probability distribution curves of the portfolio stocks may be displayed on a client device, for end users to take trading decisions.
Claims: We Claim:
1. A system and method for optimising weights of a stock portfolio; the system comprising:
a processor for generating future price distributions of a given stock, of high prices and low prices for a predetermined future window, the processor comprising
a pre-processing component that can generate the historical price distributions of each stock by analysing the raw historical prices of each stock;
an ensemble of LSTM Neural Networks, each network of the ensemble trained to predict the probability distribution curve of the high-prices or the low-prices of the future time-period under analysis of a specific stock;
a storage system that keeps a record of a portfolio of stocks for which the price distributions are predicted;
a data processor that consumes the generated price distributions and produces a series of weights for each stock in the portfolio.
2. The method of claim 1 wherein the training dataset is generated by computing the price distributions of high-prices and low-prices of N periods for a given stock, each distribution containing the number of times the period’s extremal price (highest or lowest) is within a certain price window.
3. The method of claim 1, wherein the future price distributions are generated using an ensemble of artificial neural networks, with each member of the ensemble consisting of at least one layer of LSTM (Long Short-Term Memory) neurons.
4. The system of claim 3, wherein each member of the LSTM ensemble computes the price distribution of one category of price for one specific stock, and wherein the plurality of categories includes the high prices or low prices of each sub-period under analysis.
5. The data processor of claim 1, which consumes price distributions generated by the method of claim 3 and produces a series of weights for the portfolio.
6. The method of claim 5, wherein the processor generates weights for each stock by comparing the last traded price of the stock with the price distribution curves, attempting to maximise the probability of profit while minimising the probability of loss for the portfolio as a whole.
Description: System and Method for Stock Portfolio Optimisation Based on LSTM Ensemble
Technical Field
The present disclosure belongs to the field of portfolio management and optimisation; and specifically to utilisation of deep neural networks for portfolio optimisation.
Background
Determining weights for a portfolio of stocks is a critical function of all asset managers and investment advisors. The computational method of Modern Portfolio Theory (H. Markowitz, “Portfolio Selection,” The Journal of Finance, vol. 7, no. 1, 1952, pp. 77–91) is a standard mean-variance optimisation (MVO) method used to determine the weights of the portfolio stocks. The theory works under the assumptions that the market is efficient and that prices follow a normal distribution curve, neither of which is an accurate representation of the market (Eugene F. Fama, “The Behavior of Stock-Market Prices”, The Journal of Business, vol. 38, no. 1, Jan. 1965, pp. 34–105).
In the recent past, LSTM-based neural networks have been used to generate better expected returns to feed into the mean-variance analysis (Obeidat et al., “Adaptive Portfolio Asset Allocation Optimization with Deep Learning”, International Journal on Advances in Intelligent Systems, vol. 11, no. 1 & 2, 2018). However, these approaches still rest on modern portfolio theory's assumption that stock prices are normally distributed. Additionally, these methods require the neural network to be run repeatedly as traded prices change, to generate new expected returns, and there is no conceivable method to re-use existing predictions. The predictions made by these methods therefore grow stale very quickly in real-world settings.
Generation of probability distributions with long-term dependencies using LSTM has been shown to perform much better than single-step prediction using traditional RNNs for complex sequence generation, for example in music composition via LSTM (Eck, D., Schmidhuber, J., “A First Look at Music Composition using LSTM Recurrent Neural Networks”. Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, 2002). In that work, the probability distribution of chords is predicted from a plurality of available chords, depending upon the “form” and progression of the previous chords.
In realistic environments, the price of any asset keeps changing as new information flows through the market. Traditional methods of weight forecasting would require solving the multivariate problem each time new information that can change future prices or volatility is made public. In neural-network-based implementations, there is no alternative to running the network repeatedly to generate new weights every time new information becomes available. This is an expensive process in terms of resources and, more importantly, in terms of time.
Most trading systems and portfolio optimisation techniques are based only on the closing prices of a time period. Consequently, the intra-period variance of prices tends to be ignored in the analysis. Over the very long investment horizons usually followed by institutional investors, this has no detrimental effect. But over the short horizons relevant to individual retail investors, the intra-period variance can sometimes paint a very different picture of risk. Traditional forecasting systems are typically aimed at institutional investors and therefore tend to ignore this. For short investment horizons, there is accordingly a need to build models that account for the intra-period price range, for example the daily high prices and the daily low prices, instead of just the daily closing prices.
Most implementations of neural-network-based stock price forecasting or portfolio optimisation aim to beat other methods in terms of returns. The common retail investor's need, however, is to invest safely, which requires a system whose output can be comprehended by a human advisor or a machine, so that steps to re-train the system, or to modify its parameters, can be taken if the output is not in line with common sense. For example, if the training data represented a bull market and the current regime is that of a bear market, there is no way to detect this when the system's output is limited to “buy”, “sell” or “hold” calls.
Summary
To avoid the limitations and drawbacks of the above-mentioned systems, the present invention discloses a system and method of portfolio optimisation that is based on the price distribution curves of historical prices and the prediction of future price distribution curves, and that can compute weights multiple times per second, given the last traded price (LTP) and the predicted price distribution curve. The output price distribution curve can be visualised and evaluated by a human advisor or an alternative algorithm, so that system flaws can be easily detected. For example, if the training data represented a bull market and the current regime is that of a bear market, the predicted price distribution curves will clearly conflict with common knowledge.
In one embodiment, an artificial neural network ensemble, with each net consisting of at least one LSTM layer, consumes a training data set. The training data set spans the past 20 years of data and is divided into periods. A period could be a day, week, month, or year. Each period is further subdivided into sub-periods. For each sub-period, the highest price and the lowest price of the sub-period are known for each stock of the portfolio. For each period, a high-price distribution curve is generated, which is a frequency histogram of the number of times the highest price of a sub-period fell within a certain price interval, in the form of a multi-dimensional vector. Similarly, for each period, a low-price distribution curve is generated, which is a frequency histogram of the number of times the lowest price of a sub-period fell within a certain price interval. The histograms, in the form of multi-dimensional data vectors, are normalised so that the sum of the frequencies of any distribution curve is unity, thereby representing a probability. A given LSTM net of the ensemble consumes the probability data series of all price intervals for a specific stock. The number of networks in the ensemble equals 2×N, where N is the number of stocks in the portfolio. The multiplier of 2 represents the two types of curves: those representing the highest price of each sub-period and those representing the lowest price of each sub-period. An LSTM net L(T,S) learns to predict the probability distribution of the price of type T (highest prices or lowest prices) of stock S. Finally, L(T,S) is fed the data vectors of the past 36 periods to generate the probability distribution curve of the next period for the stock S and the price type T. The outputs of all the LSTM nets assigned to a given stock S together constitute the future price distribution curve C for that stock S.
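The normalisation step described above can be sketched minimally as follows (the function name and sample counts are illustrative, not part of the disclosure): a period's frequency histogram is scaled so its entries sum to unity, turning it into a probability distribution.

```python
def normalise_histogram(counts):
    """Scale interval frequency counts so they sum to unity,
    turning a frequency histogram into a probability distribution."""
    total = sum(counts)
    if total == 0:
        raise ValueError("cannot normalise an empty histogram")
    return [c / total for c in counts]

# A month in which the daily highs fell 2, 3 and 20 times
# into three price intervals (illustrative numbers):
probs = normalise_histogram([2, 3, 20])
```

The resulting vector sums to one, so each entry can be read directly as the probability of the price falling in that interval.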
In another embodiment, the distribution curve C is compared to the last traded price (LTP) of the given stock. The data processor computes the probability of profit as well as the probability of loss for the given stock, given the LTP. Thereafter, the processor computes the weights of the stocks in the portfolio so as to keep each weight directly proportional to the net probability of profit of the stock. In case the net probability turns out to be negative, that is, if the probability of loss is greater than the probability of profit, the processor sets the weight to 0, which corresponds to dropping the stock completely from the portfolio.
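A minimal sketch of this weighting rule (function and variable names are assumptions, not from the disclosure): each weight is made proportional to the net probability of profit, negative net probabilities are floored at zero, and the result is normalised so the weights sum to one.

```python
def portfolio_weights(p_profit, p_loss):
    """Weights proportional to the net profit probability
    (p_profit - p_loss); a negative net probability is floored
    at 0, i.e. the stock is dropped from the portfolio."""
    net = [max(p - q, 0.0) for p, q in zip(p_profit, p_loss)]
    total = sum(net)
    if total == 0:
        return [0.0] * len(net)  # no stock offers a positive edge
    return [n / total for n in net]

# Three stocks: only the first has probability of profit > loss.
weights = portfolio_weights([0.6, 0.3, 0.5], [0.2, 0.5, 0.5])
```

Normalising the floored net probabilities is one design choice consistent with the text; the disclosure itself only states that the weight is proportional to the net probability and is 0 when that probability is negative.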
The present disclosure incorporates the probability distributions of future expected returns while determining weights of the different assets in the portfolio and thereby, tries to avoid the limitations of portfolio optimisation techniques based solely on modern portfolio theory.
This computation of the probability of profit, and the computation of weights, is independent of the past historical data once the distribution curve C is generated, and is cheap to calculate given any LTP. It can therefore be performed multiple times in a single second, without having to run the LSTM networks or a computationally intensive optimisation algorithm.
The distribution curve C is consumed by a system which generates a visual representation of the price resistances and displays it on a screen, where it can be viewed and comprehended by a human trader or asset manager. This information can be used to override the system recommendation, to take a trading decision, or in general to understand the output logic of the overall system.
Brief Description of Drawings:
The accompanying drawings are included to provide a further understanding of the present disclosure, and are incorporated in and constitute a part of this specification. The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure. The diagrams are for illustration only, which thus is not a limitation of the present disclosure.
FIG. 1 is a functional block diagram of a system for implementing a portfolio weights optimisation system, incorporating an LSTM ensemble in the learning unit 108, in accordance with one embodiment of the present disclosure.
FIG. 2 is an illustration of the data pre-processing method leading to generation of the price distribution curves 202 and the subsequent data vector 203.
FIG. 3 is an expanded block diagram of the LSTM ensemble 301, composed of multiple LSTM networks [311 through 318], which consumes the data vector 300 and predicts the future price distribution curves 302. Specifically, each network consumes a particular data vector. For example, L(S1, HIGH) 311 consumes the input vector 303, which represents the distribution curve of the daily high prices of stock S1.
Detailed Description:
An LSTM-ensemble-based portfolio optimisation system is described which takes optimisation decisions on the basis of predicted price distribution curves. This extends the trend-prediction power of LSTM nets to predict a complete price distribution, instead of a single future price, and therefore creates a stochastic model of portfolio optimisation. A pair of curves is predicted for each stock, representing the price distributions of the high prices and of the low prices for any time interval. The system is intended to be applied in stock exchanges to take trading decisions in an automated manner. The decisions may then be converted into machine-readable instructions to be executed in the form of buy/sell calls placed with the exchanges.
The disclosed method uses the probability distributions of the high and low prices of each stock in the portfolio and does not make any assumptions about the way stock prices are distributed in general. Instead, the present invention uses the complete information set associated with the price variation, whatever the distribution of the stock prices. Consequently, the current system feeds the real price distributions of the stock prices into the LSTM network and then predicts the future price distributions.
Because the probability distribution curve is a very small dataset for any given stock, the current system can generate weights multiple times in a single second, given the last traded price (LTP) and the predicted dual distribution curves. This allows the system to be used repeatedly within the time window of the predicted probability distribution curve, for example, to validate trading decisions against a sudden price movement.
The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the scope of the present disclosure as defined by the appended claims.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.
Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, and firmware and/or by human operators.
Various methods described herein may be practiced by combining one or more machine-readable storage media containing the code according to the present invention with appropriate standard computer hardware to execute the code contained therein. An apparatus for practicing various embodiments of the present invention may involve one or more computers (or one or more processors within a single computer) and storage systems containing or having network access to computer program(s) coded in accordance with various methods described herein, and the method steps of the invention could be accomplished by modules, routines, subroutines, or subparts of a computer program product.
If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.
As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. These exemplary embodiments are provided only for illustrative purposes and so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. The invention disclosed may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Various modifications will be readily apparent to persons skilled in the art. The general principles defined herein may be applied to other embodiments and applications without departing from the scope of the invention. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure). Also, the terminology and phraseology used is for the purpose of describing exemplary embodiments and should not be considered limiting. Thus, the present invention is to be accorded the widest scope encompassing numerous alternatives, modifications and equivalents consistent with the principles and features disclosed. For purpose of clarity, details relating to technical material that is known in the technical fields related to the invention have not been described in detail so as not to unnecessarily obscure the present invention.
Thus, for example, it will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the entity implementing this invention. Those of ordinary skill in the art further understand that the exemplary hardware, software, processes, methods, and/or operating systems described herein are for illustrative purposes and, thus, are not intended to be limited to any particular named element.
Embodiments of the present invention may be provided as a computer program product, which may include a machine-readable storage medium tangibly embodying thereon instructions, which may be used to program a computer (or other electronic devices) to perform a process. The term “machine-readable storage medium” or “computer-readable storage medium” includes, but is not limited to, fixed (hard) drives, magnetic tape, floppy diskettes, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, and semiconductor memories, such as read-only memories (ROMs), programmable read-only memories (PROMs), random access memories (RAMs), erasable PROMs (EPROMs), electrically erasable PROMs (EEPROMs), flash memory, magnetic or optical cards, or other types of media/machine-readable media suitable for storing electronic instructions (e.g., computer programming code, such as software or firmware). A machine-readable medium may include a non-transitory medium in which data may be stored and that does not include carrier waves and/or transitory electronic signals propagating wirelessly or over wired connections. Examples of a non-transitory medium may include, but are not limited to, a magnetic disk or tape, optical storage media such as compact disk (CD) or digital versatile disk (DVD), flash memory, or memory devices. A computer-program product may include code and/or machine-executable instructions that may represent a procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
Furthermore, embodiments may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks (e.g., a computer-program product) may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks.
Systems depicted in some of the figures may be provided in various configurations. In some embodiments, the systems may be configured as a distributed system where one or more components of the system are distributed across one or more networks in a cloud computing system.
Each of the appended claims defines a separate invention, which for infringement purposes is recognized as including equivalents to the various elements or limitations specified in the claims. Depending on the context, all references below to the "invention" may in some cases refer to certain specific embodiments only. In other cases it will be recognized that references to the "invention" will refer to subject matter recited in one or more, but not necessarily all, of the claims.
All methods described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided with respect to certain embodiments herein is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.
Various terms as used herein are shown below. To the extent a term used in a claim is not defined below, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.
With reference to FIG. 1, a computer implemented portfolio optimisation system 100 employs a learning component 108 in the form of a Long Short Term Memory (LSTM) network for predicting portfolio weights 117 based on a historical price input 113 of a list of stocks. The system includes memory 104, which stores the instructions 106 in the form of software for performing the exemplary method. A processor 101, in communication with the memory 104, executes the instructions. Computer system 100 also includes one or more input/output (I/O) interface(s) 102, 120 for communicating with external devices, such as computer device(s) 119, which output the predicted weights 117 and/or future price distribution curves 118, e.g., via wired or wireless links 103 such as a local area network, a telephone line, or a wide area network such as the Internet. The various hardware components 101, 104, 102, 120 of the computer system 100 may be connected by a data/control bus 103. The system may be hosted by one or more computing devices, such as the illustrated server computer 105. The remote computing device 119 may serve as a user interface, and/or a user interface may be linked directly to the server computer 105.
The computer system 100 may include one or more of a PC, such as a desktop, a laptop, palmtop computer, portable digital assistant (PDA), server computer, cellular telephone, tablet computer, pager, combination thereof, or other computing device capable of executing instructions for performing the exemplary method.
The memory 104 may represent any type of non-transitory computer readable medium such as random access memory (RAM), read only memory (ROM), magnetic disk or tape, optical disk, flash memory, or holographic memory. In one embodiment, the memory 104 comprises a combination of random access memory and read only memory. The network interface(s) 102, 120 may each comprise a modulator/demodulator (MODEM), a router, a cable, and/or an Ethernet port. Memory 104 stores processed data, such as proposed portfolio weights 117, in addition to the instructions 106, and may be distributed over one, two or more computing devices. The digital processor 101 can be variously embodied, such as by a single-core processor, a dual-core processor (or more generally by a multiple-core processor), a digital processor and cooperating math coprocessor, a digital controller, or the like.
The term “software,” as used herein, is intended to encompass any collection or set of instructions executable by a computer or other digital system so as to configure the computer or other digital system to perform the task that is the intent of the software. The term “software” as used herein is intended to encompass such instructions stored in storage medium such as RAM, a hard disk, optical disk, or so forth, and is also intended to encompass so-called “firmware” that is software stored on a ROM or so forth. Such software may be organized in various ways, and may include software components organized as libraries, Internet-based programs stored on a remote server or so forth, source code, interpretive code, object code, directly executable code, and so forth. It is contemplated that the software may invoke system-level code or calls to other software residing on a server or other location to perform certain functions.
The illustrated instructions 106 include a preprocessing component 107, a learning component 108, a probability generator 109, and a weight predictor 110. The learning component 108 is an ensemble of LSTM networks. Briefly, the preprocessing component 107 receives the raw prices 113 and stores them in a form which can be input to the learning component 108, i.e., as multidimensional data vectors 121. The learning component 108 learns parameters of the model 122 using the data vectors 121 produced by the preprocessing component 107. The goal of model learning is to adapt the parameters of the model 122 to minimise the loss 123 over a sequence of observations taken over a period of time. The trained model generates a probability distribution curve, i.e., a multi-dimensional vector containing the probability of the future price being in a known price interval, given an input set of data vectors for a list of stocks 114. The probability generator 109 outputs the total probability of profit or loss for the given list of stocks 114 and their known last traded price (LTP). The weight predictor 110 outputs a series of weights 117 for the given list of stocks 114.
The data vectors 121 are generated by the preprocessing component 107 in the following manner. First, the price range for a given stock price chart 200 is defined on the basis of the highest and lowest prices of the period under evaluation. Thereafter, the price range is broken into a number, say Ni, of equal-length price intervals 201. For example, if the window under evaluation is a year, the highest price of the stock in the year is Rs 1000 and the lowest price is Rs 800, the price intervals could be defined as [800-820, 821-840 … 981-1000]. Four additional intervals are added, two above the upper limit of the generated intervals and two below the lower limit, to produce the final price intervals 201. The final set of price intervals would thus become [760-780, 781-800, 801-820 … 981-1000, 1001-1020, 1021-1040]. If the periods are months and the sub-periods are days, then the highest prices of each day of the month are iterated over. Every time a day's high price falls within a certain price interval, the frequency count of that interval is increased by one. For example, if the highest prices encountered over the first 5 days are [839, 838, 852, 856, 844], the counts of the price intervals would be [821-840: 2, 841-860: 3], while all other price intervals have a count of 0. The iteration stops when the preprocessing component 107 has encountered all the prices of the period in the raw prices 113, in this case when the highest prices of all trading days in a month have been processed. Thereafter, all the counts are divided by the total number of counts, so that the sum of the frequencies is unity. In the preferred embodiment, the counts are always divided by 25. This ensures continuity of moving averages, since the denominator is the same across periods. The same process is followed to generate the low-price distribution curve.
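A hedged sketch of the interval construction and frequency counting described above (helper names are illustrative, not from the disclosure); it reproduces the Rs 800-1000 example, including the four padding intervals and the fixed denominator of 25:

```python
def price_intervals(low, high, n_intervals=10, extra=2):
    """Split [low, high] into equal-width intervals, then pad with
    `extra` intervals below and above, as in the 760-1040 example."""
    width = (high - low) / n_intervals
    start = low - extra * width
    return [(start + i * width, start + (i + 1) * width)
            for i in range(n_intervals + 2 * extra)]

def frequency_histogram(prices, intervals, denominator=25):
    """Count how many prices fall in each interval, then divide by a
    fixed denominator (25 in the preferred embodiment) so the scale
    is comparable across periods."""
    counts = [0] * len(intervals)
    for p in prices:
        for i, (lo, hi) in enumerate(intervals):
            if lo < p <= hi:
                counts[i] += 1
                break
    return [c / denominator for c in counts]

iv = price_intervals(800, 1000)   # 14 intervals spanning 760 to 1040
hist = frequency_histogram([839, 838, 852, 856, 844], iv)
```

With the five sample highs, the interval covering 821-840 receives a count of 2 and the interval covering 841-860 a count of 3, matching the example in the text.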
To be consumed by the learning component 108, a stack of the price distributions for the past 36 months is taken in the form of a multidimensional data vector 203. For example, the series of probability distributions for a stock S, computed for each of 36 months, and therefore of length 36 and width 14 (10 intervals plus 4 additional intervals), could be fed to the LSTM net L(T,S), which would then learn to predict the next month's probability distribution for the same stock. The training is done over 18 years of data and the method of the rolling window 204 is used for the purpose. As an example, the price distribution curve, or price frequency histogram, is illustrated using the daily chart 200 of SENSEX from Jan 1, 2017 to Dec 31, 2017. The y-axis shows the price range, broken into the price intervals 201. From this chart, the price distribution curve 202 is generated, which is the frequency, or the number of days the high price falls within a given price interval, divided by 25. In the current system, historical probability distributions P for a given stock S are fed into an LSTM network L(S). The data vector 203 constitutes the complete dataset fed to each of the LSTM networks of the ensemble.
The label of each dataset is the probability of the price of type T for stock S being in the interval I in the next period. Since all the frequencies are scaled using the same denominator (25 in the preferred embodiment), the scale is consistent across all periods.
The data vector 300 is consumed by the LSTM ensemble 301. The number of networks in the ensemble 301 equals 2xN, where N is the number of stocks in the portfolio; the multiplier of 2 accounts for the two types of curves, the high curves and the low curves. An LSTM net L(T,S) learns to predict the probability distribution of the price of type T (highest prices or lowest prices) of stock S. For training, an LSTM network 302 receives a stack of datasets 300 spanning the past 18 years of data. Using the rolling-window method 204, a large stack of datasets 300 is generated. In the preferred embodiment, the training is done over 18 years and the number of datasets 300 is 18*12 - 1 = 215.
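The rolling-window method 204 can be sketched as below: a (months × intervals) stack of monthly distributions is sliced into (input, label) pairs, where each input is a block of consecutive months and the label is the following month's distribution. The function name is illustrative, and the exact number of samples depends on how the window is applied to the 18-year history.

```python
import numpy as np

def rolling_windows(monthly_hists, window=36):
    """Slice a (T, I) stack of monthly price distributions into
    (input, label) pairs: each input is `window` consecutive months,
    the label is the distribution of the month that follows."""
    X, y = [], []
    T = len(monthly_hists)
    for start in range(T - window):
        X.append(monthly_hists[start:start + window])   # 36-month input
        y.append(monthly_hists[start + window])          # next month = label
    return np.array(X), np.array(y)
```

For instance, 40 months of 14-interval histograms yield 4 training pairs, each with an input of shape (36, 14) and a 14-wide label, matching the data-vector dimensions described above.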
In an embodiment, training and testing split of the data vector 203 is 80:20 and the cross-validation ratio during training is 0.20.
The LSTM ensemble 301 receives the data vector 203 and produces a vector output 302 representing the probabilities of the price to be in the price intervals 201 in the future period.
In the preferred embodiment, each network of the ensemble 301 has 3 LSTM layers, one dropout layer to prevent overfitting, and one output layer using a softmax activation function.
In the preferred embodiment, a cross-entropy loss function is used to train the ensemble 301.
In an embodiment, the output vector 302 is consumed by the probability generator 109, which calculates the total probability of profit and the total probability of loss for each stock in the list 114, given the last traded price (LTP) of each stock.
In an embodiment, the probability generator 109 may use the following formula to generate the probability of profit, given the last traded price (LTP):
P(Profit) = sum[P(S,X)*M(S,X)] (for all intervals X with upper bound > LTP) / sum[P(S,X)*M(S,X)] (for all intervals X)
where M(S,X) is the mean of price interval X of the high-price distribution curve and P(S,X) is the probability of the price of stock S being in interval X
In an embodiment, the probability generator 109 may use the following formula to generate the probability of loss of a particular stock, given the LTP:
P(Loss) = sum[P(S,X)*M(S,X)] (for all intervals X with lower bound < LTP) / sum[P(S,X)*M(S,X)] (for all intervals X)
where M(S,X) is the mean of price interval X of the low-price distribution curve and P(S,X) is the probability of the price of stock S being in interval X
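The two formulas above can be sketched in one function. This is an illustrative reading, not the claimed implementation: the function name is invented, the interval mean M(S,X) is taken to be the interval midpoint, and a single distribution is used for both sums for brevity.

```python
def profit_loss_probabilities(probs, interval_edges, ltp):
    """Sketch of the probability generator 109: each interval X
    contributes P(S,X) * M(S,X), where M(S,X) is taken here as the
    interval midpoint. Intervals whose upper bound exceeds the LTP
    count toward profit; intervals whose lower bound is below the
    LTP count toward loss. Both sums are normalised by the total."""
    means = [(lo + hi) / 2 for lo, hi in interval_edges]
    total = sum(p * m for p, m in zip(probs, means))
    profit = sum(p * m for p, m, (lo, hi)
                 in zip(probs, means, interval_edges) if hi > ltp)
    loss = sum(p * m for p, m, (lo, hi)
               in zip(probs, means, interval_edges) if lo < ltp)
    return profit / total, loss / total
```

Note that because both numerators are probability-weighted price sums divided by the same total, a middle interval straddling the LTP contributes to both the profit and the loss probability.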
In an embodiment, the probability generator 109 computes the weights of the different stocks in the list 114 in the following manner -
W(stock S) = [P(Profit for S) - P(Loss for S)] / Sum[P(Profit for X) - P(Loss for X)] (for all stocks X in the portfolio)
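The weight computation can be sketched as follows, with an invented function name; each stock's weight is its net profit probability normalised by the sum of net profit probabilities across the portfolio, so the weights sum to unity.

```python
def portfolio_weights(p_profit, p_loss):
    """Weight of each stock = (its profit probability minus its loss
    probability), divided by the sum of those differences over all
    stocks in the portfolio, per the formula above."""
    net = [pp - pl for pp, pl in zip(p_profit, p_loss)]
    total = sum(net)
    return [n / total for n in net]
```

For two stocks with profit probabilities [0.6, 0.5] and loss probabilities [0.2, 0.3], the net profit probabilities are [0.4, 0.2], giving weights of 2/3 and 1/3.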
The method illustrated in FIG. 1 may be implemented in a computer program product that may be executed on a computer. The computer program product may comprise a non-transitory computer-readable recording medium on which a control program is recorded (stored), such as a disk, hard drive, or the like. Common forms of non-transitory computer-readable media include, for example, floppy disks, flexible disks, hard disks, magnetic tape, or any other magnetic storage medium, CD-ROM, DVD, or any other optical medium, a RAM, a PROM, an EPROM, a FLASH-EPROM, or other memory chip or cartridge, or any other non-transitory medium from which a computer can read and use.
Alternatively, the method may be implemented in transitory media, such as a transmittable carrier wave in which the control program is embodied as a data signal using transmission media, such as acoustic or light waves, such as those generated during radio wave and infrared data communications, and the like.
The exemplary method may be implemented on one or more general purpose computers, special purpose computer(s), a programmed microprocessor or microcontroller and peripheral integrated circuit elements, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmable logic device such as a PLD, PLA, FPGA, or PAL, a graphics processing unit (GPU), or the like. In general, any device, capable of implementing a finite state machine that is in turn capable of implementing the flowchart shown in FIG. 1, can be used to implement the method. As will be appreciated, while the steps of the method may all be computer implemented, in some embodiments, one or more of the steps may be at least partially performed manually.
As will be appreciated, the steps of the method need not all proceed in the order illustrated and fewer, more, or different steps may be performed.
| # | Name | Date |
|---|---|---|
| 1 | 201921013836-STATEMENT OF UNDERTAKING (FORM 3) [05-04-2019(online)].pdf | 2019-04-05 |
| 2 | 201921013836-COMPLETE SPECIFICATION [05-04-2019(online)].pdf | 2019-04-05 |
| 3 | 201921013836-DECLARATION OF INVENTORSHIP (FORM 5) [05-04-2019(online)].pdf | 2019-04-05 |
| 4 | 201921013836-FORM-9 [05-04-2019(online)].pdf | 2019-04-05 |
| 5 | 201921013836-FORM FOR STARTUP [05-04-2019(online)].pdf | 2019-04-05 |
| 6 | 201921013836-FORM FOR SMALL ENTITY(FORM-28) [05-04-2019(online)].pdf | 2019-04-05 |
| 7 | 201921013836-EVIDENCE FOR REGISTRATION UNDER SSI [05-04-2019(online)].pdf | 2019-04-05 |
| 8 | 201921013836-EVIDENCE FOR REGISTRATION UNDER SSI(FORM-28) [05-04-2019(online)].pdf | 2019-04-05 |
| 9 | 201921013836-FORM 1 [05-04-2019(online)].pdf | 2019-04-05 |
| 10 | 201921013836-DRAWINGS [05-04-2019(online)].pdf | 2019-04-05 |
| 11 | Abstract1.jpg | 2019-04-08 |