Adaptivity: Measures of Dominant Cycles and Price Trend [Loxx]
Adaptivity: Measures of Dominant Cycles and Price Trend is an indicator that outputs adaptive lengths using various methods for dominant cycle and price trend timeframe adaptivity. While the information this indicator outputs might be useful to the average trader in one-off circumstances, it is really meant for those who need a quick comparison of dynamic length outputs and who wish to fine-tune algorithms and/or create adaptive indicators.
This indicator compares adaptive output lengths of all publicly known adaptive measures. Additional adaptive measures will be added as they are discovered and made public.
The first release of this indicator includes six measures. An additional three measures will be added in future updates. Please check back regularly for new measures.
Ehlers:
Autocorrelation Periodogram
Band-pass
Instantaneous Cycle
Hilbert Transformer
Dual Differentiator
Phase Accumulation (future release)
Homodyne (future release)
Jurik:
Composite Fractal Behavior (CFB)
Adam White:
Vertical Horizontal Filter (VHF) (future release)
What is an adaptive cycle, and what is Ehlers' Autocorrelation Periodogram Algorithm?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 135:
"Adaptive filters can have several different meanings. For example, Perry Kaufman's adaptive moving average (KAMA) and Tushar Chande's variable index dynamic average (VIDYA) adapt to changes in volatility . By definition, these filters are reactive to price changes, and therefore they close the barn door after the horse is gone.The adaptive filters discussed in this chapter are the familiar Stochastic , relative strength index (RSI), commodity channel index (CCI), and band-pass filter.The key parameter in each case is the look-back period used to calculate the indicator. This look-back period is commonly a fixed value. However, since the measured cycle period is changing, it makes sense to adapt these indicators to the measured cycle period. When tradable market cycles are observed, they tend to persist for a short while.Therefore, by tuning the indicators to the measure cycle period they are optimized for current conditions and can even have predictive characteristics.
The dominant cycle period is measured using the Autocorrelation Periodogram Algorithm. That dominant cycle dynamically sets the look-back period for the indicators. I employ my own streamlined computation for the indicators that provide smoother and easier to interpret outputs than traditional methods. Further, the indicator codes have been modified to remove the effects of spectral dilation. This basically creates a whole new set of indicators for your trading arsenal."
What is the Hilbert Transformer?
An analytic signal allows for time-variable parameters and is a generalization of the phasor concept, which is restricted to time-invariant amplitude, phase, and frequency. The analytic representation of a real-valued function or signal facilitates many mathematical manipulations of the signal. For example, computing the phase of a signal or the power in the wave is much simpler using analytic signals.
The Hilbert transformer is the technique to create an analytic signal from a real one. The conventional Hilbert transformer is theoretically an infinite-length FIR filter. Even when the filter length is truncated to a useful but finite length, the induced lag is far too large to make the transformer useful for trading.
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), pages 186-187:
"I want to emphasize that the only reason for including this section is for completeness. Unless you are interested in research, I suggest you skip this section entirely. To further emphasize my point, do not use the code for trading. A vastly superior approach to compute the dominant cycle in the price data is the autocorrelation periodogram. The code is included because the reader may be able to capitalize on the algorithms in a way that I do not see. All the algorithms encapsulated in the code operate reasonably well on theoretical waveforms that have no noise component. My conjecture at this time is that the sample-to-sample noise simply swamps the computation of the rate change of phase, and therefore the resulting calculations to find the dominant cycle are basically worthless.The imaginary component of the Hilbert transformer cannot be smoothed as was done in the Hilbert transformer indicator because the smoothing destroys the orthogonality of the imaginary component."
What is the Dual Differentiator, a subset of Hilbert Transformer?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 187:
"The first algorithm to compute the dominant cycle is called the dual differentiator. In this case, the phase angle is computed from the analytic signal as the arctangent of the ratio of the imaginary component to the real component. Further, the angular frequency is defined as the rate change of phase. We can use these facts to derive the cycle period."
What is the Phase Accumulation, a subset of Hilbert Transformer?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 189:
"The next algorithm to compute the dominant cycle is the phase accumulation method. The phase accumulation method of computing the dominant cycle is perhaps the easiest to comprehend. In this technique, we measure the phase at each sample by taking the arctangent of the ratio of the quadrature component to the in-phase component. A delta phase is generated by taking the difference of the phase between successive samples. At each sample we can then look backwards, adding up the delta phases.When the sum of the delta phases reaches 360 degrees, we must have passed through one full cycle, on average.The process is repeated for each new sample.
The phase accumulation method of cycle measurement always uses one full cycle's worth of historical data. This is both an advantage and a disadvantage. The advantage is that the lag in obtaining the answer scales directly with the cycle period. That is, the measurement of a short cycle period has less lag than the measurement of a longer cycle period. However, the number of samples used in making the measurement means the averaging period is variable with cycle period. Longer averaging reduces the noise level compared to the signal. Therefore, shorter cycle periods necessarily have a higher output signal-to-noise ratio."
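For readers who want to experiment, here is a minimal NumPy sketch of the phase-accumulation idea described above. It assumes the in-phase and quadrature series have already been produced by a Hilbert transformer (the transformer itself is not implemented here), and the clamp on the per-bar phase change is an illustrative assumption rather than Ehlers' exact bounds.

```python
import numpy as np

def phase_accumulation_period(inphase, quadrature, max_period=48):
    """Sum successive phase changes backwards until they total 360 degrees;
    the number of bars needed is the dominant cycle estimate."""
    phase = np.degrees(np.arctan2(quadrature, inphase))
    # Per-bar phase change, wrapped into [0, 360) degrees
    dphase = np.mod(np.diff(phase, prepend=phase[0]), 360.0)
    dphase = np.clip(dphase, 1.0, 60.0)  # assumed clamp against noise spikes
    periods = np.full(len(phase), np.nan)
    for i in range(len(phase)):
        total, count = 0.0, 0
        while total < 360.0 and count <= i and count < max_period:
            total += dphase[i - count]
            count += 1
        periods[i] = count  # bars needed to accumulate one full cycle
    return periods
```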
What is the Homodyne, a subset of Hilbert Transformer?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 192:
"The third algorithm for computing the dominant cycle is the homodyne approach. Homodyne means the signal is multiplied by itself. More precisely, we want to multiply the signal of the current bar with the complex value of the signal one bar ago. The complex conjugate is, by definition, a complex number whose sign of the imaginary component has been reversed."
What is the Instantaneous Cycle?
The Instantaneous Cycle Period Measurement was authored by John Ehlers; it is built upon his Hilbert Transform Indicator.
From John F. Ehlers' book Cybernetic Analysis for Stocks and Futures: Cutting-Edge DSP Technology to Improve Your Trading (2004), page 107:
"It is obvious that cycles exist in the market. They can be found on any chart by the most casual observer. What is not so clear is how to identify those cycles in real time and how to take advantage of their existence. When Welles Wilder first introduced the relative strength index (rsi), I was curious as to why he selected 14 bars as the basis of his calculations. I reasoned that if i knew the correct market conditions, then i could make indicators such as the rsi adaptive to those conditions. Cycles were the answer. I knew cycles could be measured. Once i had the cyclic measurement, a host of automatically adaptive indicators could follow.
Measurement of market cycles is not easy. The signal-to-noise ratio is often very low, making measurement difficult even using a good measurement technique. Additionally, the measurements theoretically involve simultaneously solving a triple infinity of parameter values. The parameters required for the general solutions were frequency, amplitude, and phase. Some standard engineering tools, like fast Fourier transforms (FFTs), are simply not appropriate for measuring market cycles because FFTs cannot simultaneously meet the stationarity constraints and produce results with reasonable resolution. Therefore I introduced maximum entropy spectral analysis (MESA) for the measurement of market cycles. This approach, originally developed to interpret seismographic information for oil exploration, produces high-resolution outputs with an exceptionally short amount of information. A short data length improves the probability of having nearly stationary data. Stationary data means that frequency and amplitude are constant over the length of the data. I noticed over the years that the cycles were ephemeral. Their periods would be continuously increasing and decreasing. Their amplitudes also were changing, giving variable signal-to-noise ratio conditions. Although all this is going on with the cyclic components, the enduring characteristic is that generally only one tradable cycle at a time is present for the data set being used. I prefer the term dominant cycle to denote that one component. The assumption that there is only one cycle in the data collapses the difficulty of the measurement process dramatically."
What is the Band-pass Cycle?
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 47:
"Perhaps the least appreciated and most underutilized filter in technical analysis is the band-pass filter. The band-pass filter simultaneously diminishes the amplitude at low frequencies, qualifying it as a detrender, and diminishes the amplitude at high frequencies, qualifying it as a data smoother. It passes only those frequency components from input to output in which the trader is interested. The filtering produced by a band-pass filter is superior because the rejection in the stop bands is related to its bandwidth. The degree of rejection of undesired frequency components is called selectivity. The band-stop filter is the dual of the band-pass filter. It rejects a band of frequency components as a notch at the output and passes all other frequency components virtually unattenuated. Since the bandwidth of the deep rejection in the notch is relatively narrow and since the spectrum of market cycles is relatively broad due to systemic noise, the band-stop filter has little application in trading."
From John F. Ehlers' book Cycle Analytics for Traders: Advanced Technical Trading Concepts (2013), page 59:
"The band-pass filter can be used as a relatively simple measurement of the dominant cycle. A cycle is complete when the waveform crosses zero two times from the last zero crossing. Therefore, each successive zero crossing of the indicator marks a half cycle period. We can establish the dominant cycle period as twice the spacing between successive zero crossings."
What is Composite Fractal Behavior (CFB)?
All around you, mechanisms adjust themselves to their environment, from simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m., torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle (DC), the cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes the patterns by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is VHF Adaptive Cycle?
Vertical Horizontal Filter (VHF) was created by Adam White to identify trending and ranging markets. VHF measures the level of trend activity, similar to ADX DI. Vertical Horizontal Filter does not, itself, generate trading signals, but determines whether signals are taken from trend or momentum indicators. Using this trend information, one is then able to derive an average cycle length.
Script search results for "algo"
Machine Learning: LVQ-based Strategy
LVQ-based Strategy (FX and Crypto)
Description:
Learning Vector Quantization (LVQ) can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all learning approach. It is a prototype-based supervised classification method that trains its weights through a competitive learning algorithm.
Algorithm:
Initialize weights
Train for 1 to N number of epochs
- Select a training example
- Compute the winning vector
- Update the winning vector
Classify test sample
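To make the training loop above concrete, here is a minimal LVQ1 sketch in Python (the published script is in Pine Script; this NumPy version only mirrors the listed steps, and the choice of one prototype per class is a simplifying assumption).

```python
import numpy as np

def train_lvq(X, y, n_epochs=20, lrate=0.1, seed=0):
    """Minimal LVQ1: one prototype per class, winner-take-all updates.
    X: (n_samples, n_features) indicator values; y: class labels."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialize each prototype from a random training example of its class
    protos = np.array([X[rng.choice(np.flatnonzero(y == c))] for c in classes])
    for epoch in range(n_epochs):
        lr = lrate * (1.0 - epoch / n_epochs)          # decaying learning rate
        for xi, yi in zip(X, y):
            winner = np.argmin(np.linalg.norm(protos - xi, axis=1))
            step = lr * (xi - protos[winner])
            # Pull the winner toward correct examples, push it away otherwise
            protos[winner] += step if classes[winner] == yi else -step
    return protos, classes

def classify(protos, classes, xi):
    """Assign the class of the nearest prototype."""
    return classes[np.argmin(np.linalg.norm(protos - xi, axis=1))]
```

Feature rows could be built from RSI, CCI or any other oscillator values, which is exactly the "plug in indicators" idea mentioned below.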
The LVQ algorithm offers a framework to easily test various indicators to see if they have any *predictive value*. One can easily add cog, wpr and others.
Note: TradingView's playback feature helps to see this strategy in action. The algo is tested with BTCUSD/1Hour.
Warning: This is a preliminary version! Signals ARE repainting.
***Warning***: Signals LARGELY depend on hyperparams (lrate and epochs).
Style tags: Trend Following, Trend Analysis
Asset class: Equities, Futures, ETFs, Currencies and Commodities
Dataset: FX Minutes/Hours+++/Days
Endpointed SSA of Price [Loxx]
The Endpointed SSA of Price: A Comprehensive Tool for Market Analysis and Decision-Making
The financial markets present sophisticated challenges for traders and investors as they navigate the complexities of market behavior. To effectively interpret and capitalize on these complexities, it is crucial to employ powerful analytical tools that can reveal hidden patterns and trends. One such tool is the Endpointed SSA of Price, which combines the strengths of Caterpillar Singular Spectrum Analysis, a sophisticated time series decomposition method, with insights from the fields of economics, artificial intelligence, and machine learning.
The Endpointed SSA of Price has its roots in the interdisciplinary fusion of mathematical techniques, economic understanding, and advancements in artificial intelligence. This unique combination allows for a versatile and reliable tool that can aid traders and investors in making informed decisions based on comprehensive market analysis.
The Endpointed SSA of Price is not only valuable for experienced traders but also serves as a useful resource for those new to the financial markets. By providing a deeper understanding of market forces, this innovative indicator equips users with the knowledge and confidence to better assess risks and opportunities in their financial pursuits.
█ Exploring Caterpillar SSA: Applications in AI, Machine Learning, and Finance
Caterpillar SSA (Singular Spectrum Analysis) is a non-parametric method for time series analysis and signal processing. It is based on a combination of principles from classical time series analysis, multivariate statistics, and the theory of random processes. The method was initially developed in the early 1990s by a group of Russian mathematicians, including Golyandina, Nekrutkin, and Zhigljavsky.
Background Information:
SSA is an advanced technique for decomposing time series data into a sum of interpretable components, such as trend, seasonality, and noise. This decomposition allows for a better understanding of the underlying structure of the data and facilitates forecasting, smoothing, and anomaly detection. Caterpillar SSA is a particular implementation of SSA that has proven to be computationally efficient and effective for handling large datasets.
Uses in AI and Machine Learning:
In recent years, Caterpillar SSA has found applications in various fields of artificial intelligence (AI) and machine learning. Some of these applications include:
1. Feature extraction: Caterpillar SSA can be used to extract meaningful features from time series data, which can then serve as inputs for machine learning models. These features can help improve the performance of various models, such as regression, classification, and clustering algorithms.
2. Dimensionality reduction: Caterpillar SSA can be employed as a dimensionality reduction technique, similar to Principal Component Analysis (PCA). It helps identify the most significant components of a high-dimensional dataset, reducing the computational complexity and mitigating the "curse of dimensionality" in machine learning tasks.
3. Anomaly detection: The decomposition of a time series into interpretable components through Caterpillar SSA can help in identifying unusual patterns or outliers in the data. Machine learning models trained on these decomposed components can detect anomalies more effectively, as the noise component is separated from the signal.
4. Forecasting: Caterpillar SSA has been used in combination with machine learning techniques, such as neural networks, to improve forecasting accuracy. By decomposing a time series into its underlying components, machine learning models can better capture the trends and seasonality in the data, resulting in more accurate predictions.
Application in Financial Markets and Economics:
Caterpillar SSA has been employed in various domains within financial markets and economics. Some notable applications include:
1. Stock price analysis: Caterpillar SSA can be used to analyze and forecast stock prices by decomposing them into trend, seasonal, and noise components. This decomposition can help traders and investors better understand market dynamics, detect potential turning points, and make more informed decisions.
2. Economic indicators: Caterpillar SSA has been used to analyze and forecast economic indicators, such as GDP, inflation, and unemployment rates. By decomposing these time series, researchers can better understand the underlying factors driving economic fluctuations and develop more accurate forecasting models.
3. Portfolio optimization: By applying Caterpillar SSA to financial time series data, portfolio managers can better understand the relationships between different assets and make more informed decisions regarding asset allocation and risk management.
Application in the Indicator:
In the given indicator, Caterpillar SSA is applied to a financial time series (price data) to smooth the series and detect significant trends or turning points. The method is used to decompose the price data into a set number of components, which are then combined to generate a smoothed signal. This signal can help traders and investors identify potential entry and exit points for their trades.
The indicator applies the Caterpillar SSA method by first constructing the trajectory matrix using the price data, then computing the singular value decomposition (SVD) of the matrix, and finally reconstructing the time series using a selected number of components. The reconstructed series serves as a smoothed version of the original price data, highlighting significant trends and turning points. The indicator can be customized by adjusting the lag, number of computations, and number of components used in the reconstruction process. By fine-tuning these parameters, traders and investors can optimize the indicator to better match their specific trading style and risk tolerance.
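As an illustration of that pipeline (trajectory matrix, SVD, reconstruction), here is a compact NumPy sketch of basic SSA smoothing. It is a batch version written for clarity; the published indicator is endpointed, i.e. it would recompute this on a rolling window using only data up to each bar, and its exact parameterization may differ.

```python
import numpy as np

def ssa_smooth(price, lag=30, n_components=2):
    """Basic (Caterpillar) SSA: embed the series into a trajectory matrix,
    take the SVD, keep the leading components, and average anti-diagonals
    (Hankelization) to reconstruct a smoothed series."""
    x = np.asarray(price, dtype=float)
    n = len(x)
    k = n - lag + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series
    X = np.column_stack([x[i:i + lag] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rank-n_components reconstruction of the trajectory matrix
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Diagonal averaging back to a 1-D series
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(lag):
        for j in range(k):
            out[i + j] += Xr[i, j]
            counts[i + j] += 1
    return out / counts
```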
Caterpillar SSA is versatile and can be applied to various types of financial instruments, such as stocks, bonds, commodities, and currencies. It can also be combined with other technical analysis tools or indicators to create a comprehensive trading system. For example, a trader might use Caterpillar SSA to identify the primary trend in a market and then employ additional indicators, such as moving averages or RSI, to confirm the trend and generate trading signals.
In summary, Caterpillar SSA is a powerful time series analysis technique that has found applications in AI and machine learning, as well as financial markets and economics. By decomposing a time series into interpretable components, Caterpillar SSA enables better understanding of the underlying structure of the data, facilitating forecasting, smoothing, and anomaly detection. In the context of financial trading, the technique is used to analyze price data, detect significant trends or turning points, and inform trading decisions.
█ Input Parameters
This indicator takes several inputs that affect its signal output. These inputs can be classified into three categories: Basic Settings, UI Options, and Computation Parameters.
Source: This input represents the source of price data, which is typically the closing price of an asset. The user can select other price data, such as opening price, high price, or low price. The selected price data is then utilized in the Caterpillar SSA calculation process.
Lag: The lag input determines the window size used for the time series decomposition. A higher lag value implies that the SSA algorithm will consider a longer range of historical data when extracting the underlying trend and components. This parameter is crucial, as it directly impacts the resulting smoothed series and the quality of extracted components.
Number of Computations: This input, denoted as 'ncomp,' specifies the number of eigencomponents to be considered in the reconstruction of the time series. A smaller value results in a smoother output signal, while a higher value retains more details in the series, potentially capturing short-term fluctuations.
SSA Period Normalization: This input is used to normalize the SSA period, which adjusts the significance of each eigencomponent to the overall signal. It helps in making the algorithm adaptive to different timeframes and market conditions.
Number of Bars: This input specifies the number of bars to be processed by the algorithm. It controls the range of data used for calculations and directly affects the computation time and the output signal.
Number of Bars to Render: This input sets the number of bars to be plotted on the chart. A higher value slows down the computation but provides a more comprehensive view of the indicator's performance over a longer period. This value controls how far back the indicator is rendered.
Color bars: This boolean input determines whether the bars should be colored according to the signal's direction. If set to true, the bars are colored using the defined colors, which visually indicate the trend direction.
Show signals: This boolean input controls the display of buy and sell signals on the chart. If set to true, the indicator plots shapes (triangles) to represent long and short trade signals.
Static Computation Parameters:
The indicator also includes several internal parameters that affect the Caterpillar SSA algorithm, such as Maxncomp, MaxLag, and MaxArrayLength. These parameters set the maximum allowed values for the number of computations, the lag, and the array length, ensuring that the calculations remain within reasonable limits and do not consume excessive computational resources.
█ A Note on Endpointed, Non-repainting Indicators
An endpointed indicator is one that does not recalculate or repaint its past values based on new incoming data. In other words, the indicator's previous signals remain the same even as new price data is added. This is an important feature because it ensures that the signals generated by the indicator are reliable and accurate, even after the fact.
When an indicator is non-repainting or endpointed, it means that the trader can have confidence in the signals being generated, knowing that they will not change as new data comes in. This allows traders to make informed decisions based on historical signals, without the fear of the signals being invalidated in the future.
In the case of the Endpointed SSA of Price, this non-repainting property is particularly valuable because it allows traders to identify trend changes and reversals with a high degree of accuracy, which can be used to inform trading decisions. This can be especially important in volatile markets where quick decisions need to be made.
Bogdan Ciocoiu - Code runner
Description
The Code Runner is a hybrid indicator that leverages other pre-configured, integrated open-source algorithms to help traders spot regular and continuation divergences.
The Code Runner specialises in integrating some of the most popular oscillators well known for their accuracy when scalping using divergence strategies.
Uniqueness
The Code Runner stands out as a one-stop-shop pack of oscillator algorithms that traders can further customise to spot divergences.
The indicator's uniqueness stems from its capability to recast each algorithm onto the same scale. This is achieved by manually adjusting the outputs of each algorithm to fit on a scale between +100 and -100.
Another benefit of the Code Runner comes from its standardisation of outputs, which consist mainly of lines. Showing lines enables traders to quickly draw potential regular and continuation divergences.
The indicator has been pre-configured to support scalping at 1-5 minutes.
Open-source
The Code Runner uses the following open-source scripts and algorithms:
(links to the eight source scripts on www.tradingview.com)
These algorithms are available in the public domain either in TradingView space or outside (given their popularity in the financial markets industry).
Adaptive Average Vortex Index [lastguru]
As a longtime fan of ADX, when looking at the Vortex Indicator I often wondered where the third line was. I have rarely seen anybody calculate it. So, here it is: Average Vortex Index, an ADX calculated from the Vortex Indicator. I interpret it similarly to the ADX indicator: higher values show a stronger trend. If you discover other interpretations or have suggestions, comments are welcome.
Both VI+ and VI- lines are also drawn. As I use adaptive length calculation in my other scripts (based on the libraries I've developed and published), I have also included the possibility to have an adaptive length here, so if you hate the idea of calculating ADX from VI, you can disable that line and just look at the adaptive Vortex Indicator.
Note that, as with all my oscillators, all the lines here are renormalized to the -1..1 range, unlike the original Vortex Indicator computation. To do that for the VI+ and VI- lines, I subtract 1 from their values. This does not change the shape or the amplitude of the lines.
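For reference, the sketch below shows one plausible reading of an "Average Vortex Index" in pandas: the standard Vortex Indicator lines plus an ADX-style smoothed spread as the third line. The RMA smoothing of that third line is an assumption; the script's exact formula may differ.

```python
import pandas as pd

def average_vortex_index(high, low, close, length=14):
    """Standard Vortex Indicator lines plus an ADX-style third line."""
    tr = pd.concat([high - low,
                    (high - close.shift()).abs(),
                    (low - close.shift()).abs()], axis=1).max(axis=1)
    vm_plus = (high - low.shift()).abs()
    vm_minus = (low - high.shift()).abs()
    vip = vm_plus.rolling(length).sum() / tr.rolling(length).sum()
    vim = vm_minus.rolling(length).sum() / tr.rolling(length).sum()
    # ADX-style: normalized spread of the two lines, then RMA-smoothed
    dx = (vip - vim).abs() / (vip + vim)
    avx = dx.ewm(alpha=1.0 / length, adjust=False).mean()
    # The script renormalizes VI+/VI- by subtracting 1 (see note above)
    return vip - 1.0, vim - 1.0, avx
```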
Adaptation algorithms are roughly subdivided into two categories: classic Length Adaptations and Cycle Estimators (they are also implemented in separate libraries); all are selected in the Adaptation dropdown. Length Adaptations, used in the Adaptive Moving Averages and the Adaptive Oscillators, try to follow price movements and accelerate/decelerate accordingly (usually quite rapidly, with a huge range). Cycle Estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly).
VIDYA - based on VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
VIDYA-RS - based on Vitali Apirine's modification of VIDYA algorithm (he calls it Relative Strength Moving Average). The period oscillates from the Upper Bound down (fast)
Kaufman Efficiency Scaling - based on Efficiency Ratio calculation originally used in KAMA
Fractal Adaptation - based on FRAMA by John F. Ehlers
MESA MAMA Cycle - based on MESA Adaptive Moving Average by John F. Ehlers
Pearson Autocorrelation* - based on Pearson Autocorrelation Periodogram by John F. Ehlers
DFT Cycle* - based on Discrete Fourier Transform Spectrum estimator by John F. Ehlers
Phase Accumulation* - based on Dominant Cycle from Phase Accumulation by John F. Ehlers
Length Adaptations usually take two parameters: Bound From (lower bound) and To (upper bound). These are the limits for Adaptation values. Note that the Cycle Estimators marked with asterisks (*) are very computationally intensive, so the bounds should not be set much higher than 50; otherwise you may receive a timeout error (also, it does not seem to be a useful thing to do, but you may correct me if I'm wrong).
The Cycle Estimators marked with asterisks (*) also have 3 checkboxes: HP (Highpass Filter), SS (Super Smoother) and HW (Hann Window). These enable or disable their internal prefilters, which are recommended by their author, John F. Ehlers. I do not know which combination works best, so you can experiment.
If no Adaptation is selected ( None option), you can set Length directly. If an Adaptation is selected, then Cycle multiplier can be set.
The oscillator also has the option to configure the internal smoothing function with the Window setting. By default, RMA is used (like in the ADX calculation). The Fast Default option uses half the length for smoothing. The Triangle, Hamming and Hann Window algorithms are some better smoothers suggested by John F. Ehlers.
After the oscillator, a Moving Average can be applied. The following Moving Averages are included: SMA, RMA, EMA, HMA, VWMA, 2-pole Super Smoother, 3-pole Super Smoother, Filt11, Triangle Window, Hamming Window, Hann Window, Lowpass, DSSS.
Postfilter options are applied last:
Stochastic - Stochastic
Super Smooth Stochastic - Super Smooth Stochastic (part of MESA Stochastic ) by John F. Ehlers
Inverse Fisher Transform - Inverse Fisher Transform
Noise Elimination Technology - a simplified Kendall correlation algorithm "Noise Elimination Technology" by John F. Ehlers
Momentum - momentum (derivative)
Except for Inverse Fisher Transform , all Postfilter algorithms can have Length parameter. If it is not specified (set to 0), then the calculated Slow MA Length is used. If Filter/MA Length is less than 2 or Postfilter Length is less than 1, they are calculated as a multiplier of the calculated oscillator length.
More information on the algorithms is given in the code for the libraries used. I am also very grateful to other TradingView community members (they are also mentioned in the library code) without whom this script would not have been possible.
[Pandora] Vast Volatility Treasure Trove
INTRODUCTION:
Volatility enthusiasts, prepare for VICTORY on this day of July 4th, 2024! This is my "Vast Volatility Treasure Trove," intended mostly for educational purposes, yet these functions will also exhibit versatility when combined with other algorithms to garner statistical excellence. Once again, I am now ripping the lid off of Pandora's box... of volatility. Inside this script is a 'vast' collection of volatility estimators, reflecting the indicator's name. Whether you are a seasoned trader destined to navigate financial strife or an eagerly curious learner, this script offers a comprehensive toolkit for a broad spectrum of volatility analysis. Enjoy your journey through the realm of market volatility with this code!
WHAT IS MARKET VOLATILITY?:
Market volatility refers to fluctuations in the value of a financial market or asset over a period of time, often characterized by occasional rapid and significant deviations in price. During periods of greater market volatility, prices can move rapidly in either direction, creating uncertainty for investors, with results ranging from sharp declines to rapid gains. However, market volatility is a typical and expected aspect of financial markets that can also present opportunities for informed decision-making and potential benefits from the price flux.
SCRIPT INTENTION:
Volatility is assuredly omnipresent, waxing and waning in magnitude, and some readers have every intention of studying and/or measuring it. This script serves as an all-in-one armada of volatility estimators for TradingView members. I set out to provide a diverse set of tools to analyze and interpret market volatility, offering volatile insights, and aid with the development of robust trading indicators and strategies.
In today's fast-paced financial markets, understanding and quantifying volatility is informative for both seasoned traders and novice investors. This script is designed to empower users by equipping them with a comprehensive suite of volatility estimators. Each function within this script has been meticulously crafted to address various aspects of volatility, from traditional methods like Garman-Klass and Parkinson to more advanced techniques like Yang-Zhang and my custom experimental algorithms.
Ultimately, this script is more than just a collection of functions. It is a gateway to a deeper understanding of market volatility and a valuable resource for anyone committed to mastering the complexities of financial markets.
SCRIPT CONTENTS:
This script includes a variety of functions designed to measure and analyze market volatility. Where applicable, an input checkbox option provides an unbiased/biased estimate. Below is a brief description of each function in the original order they appear as code upon first publish:
Parkinson Volatility - Estimates volatility emphasizing the high and low range movements.
Alternate Parkinson Volatility - Simpler version of the original Parkinson Volatility that I realized.
Garman-Klass Volatility - Estimates volatility based on high, low, open, and close prices using a formula that adjusts for biases in price dynamics.
Rogers-Satchell-Yoon Volatility #1 - Estimates volatility based on logarithmic differences between high, low, open, and close values.
Rogers-Satchell-Yoon Volatility #2 - Similar estimate to Rogers-Satchell with the same result via an alternate formulation of volatility.
Yang-Zhang Volatility - An advanced volatility estimate combining both strengths of the Garman-Klass and Rogers-Satchell estimators, with weights determined by an alpha parameter.
Yang-Zhang (Modified) Volatility - My experimental modification slightly different from the Yang-Zhang formula with improved computational efficiency.
Selectable Volatility - Basic customizable volatility calculation based on the logarithmic difference between selected numerator and denominator prices (e.g., open, high, low, close).
Close-to-Close Volatility - Estimates volatility using the logarithmic difference between consecutive closing prices. Specifically applicable to data sources without open, high, and low prices.
Open-to-Close Volatility - (Overnight Volatility): Estimates volatility based on the logarithmic difference between the opening price and the last closing price emphasizing overnight gaps.
Hilo Volatility - Estimates volatility using a method similar to Parkinson's method, which considers the logarithm of the high and low prices.
Vantage Volatility - My experimental custom 'vantage' method to estimate volatility, similar to Yang-Zhang, which incorporates various factors (Alpha, Beta, Gamma) to generate a weighted logarithmic calculation. This may be a volatility advantage or disadvantage, hence its name.
Schwert Volatility - Estimates volatility based on arithmetic returns.
Historical Volatility - Estimates volatility considering logarithmic returns.
Annualized Historical Volatility - Estimates annualized volatility using logarithmic returns, adjusted for the number of trading days in a year.
If I omitted any other known varieties, detailed requests for their inclusion in future versions of this script can be made below...
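For readers who want the math in runnable form, here are minimal NumPy versions of three of the estimators listed above (per-window and unannualized; the script's biased/unbiased options are omitted). These follow the standard published formulas rather than the script's Pine code.

```python
import numpy as np

def parkinson(high, low):
    """Parkinson volatility: uses only the high-low range."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    return np.sqrt(np.mean(hl ** 2) / (4.0 * np.log(2.0)))

def garman_klass(open_, high, low, close):
    """Garman-Klass volatility: adds open/close information to the range."""
    hl = np.log(np.asarray(high) / np.asarray(low))
    co = np.log(np.asarray(close) / np.asarray(open_))
    return np.sqrt(np.mean(0.5 * hl ** 2 - (2.0 * np.log(2.0) - 1.0) * co ** 2))

def close_to_close(close):
    """Close-to-close volatility: sample std dev of log returns."""
    r = np.diff(np.log(np.asarray(close)))
    return np.std(r, ddof=1)
```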
BONUS ALGORITHMS:
This script also includes several experimental and bonus functions that push the boundaries of volatility analysis as I understand it. These functions are designed to provide additional insights and also are my ideal notions for traders looking to explore other methods of volatility measurement.
VOLATILITY APPLICATIONS:
Volatility estimators serve a common role across various facets of trading and financial analysis, offering insights into market behavior. These tools are instrumental in enhancing risk management practices by providing a deeper understanding of market dynamics and the inherent uncertainty in asset prices. With volatility estimators, traders can effectively quantify market risk and adjust their strategies accordingly, optimizing portfolio performance and mitigating potential losses. Additionally, volatility estimates may serve as indications of overbought or oversold market conditions, offering probabilistic insights that could inform strategic decisions at turning points. This script distinctly offers a variety of volatility estimators to navigate intricate financial terrains with informed judgment and to address the challenges of strategic planning.
CODE REUSE:
You don't have to ask for my permission to use/reuse these functions in your published scripts, simply because I have better things to do than answer requests for the reuse of these functions.
Notice: Unfortunately, I will not provide any integration support into member's projects at all. I have my own projects that require way too much of my day already.
Volume Forecast
The Volume Forecast indicator on TradingView is a comprehensive tool designed to analyze historical price action and project future market movements based on the average sizes of candles. Incorporating various data points such as candle high/low, open/close, and real volumes, Volume Forecast provides traders with a holistic view of market dynamics, allowing for more informed decision-making.
Key Features:
Multi-Data Source Analysis:
Volume Forecast seamlessly integrates multiple data sources, including candle high/low, open/close prices, and real volumes. By considering these diverse elements, the indicator offers a nuanced understanding of market conditions.
Historical Candle Size Analysis:
The indicator conducts a thorough analysis of historical candle sizes, capturing key data points to calculate the average candle size over a specified period. This historical context serves as the foundation for forecasting future candle sizes.
Customizable Forecasting Parameters:
Traders have the flexibility to fine-tune forecasting parameters to align with their trading strategies. Whether focusing on open/close relationships, high/low points, or real volumes, users can customize the indicator to suit their preferences.
Predictive Algorithm:
Volume Forecast employs a sophisticated predictive algorithm that leverages historical candle size data to project the potential size of upcoming candles. This algorithmic approach enhances the indicator's accuracy in forecasting market movements.
Visual Clarity:
The indicator provides a clear visual representation on the TradingView chart, displaying historical candle sizes and forecasted values. Color-coded elements and visual cues help traders quickly interpret the data, facilitating timely decision-making.
Adaptive Real-Time Updates:
Volume Forecast dynamically updates in real-time, ensuring traders have access to the latest information. This adaptability allows for swift adjustments to trading strategies in response to changing market conditions.
Comprehensive Market Compatibility:
Whether trading stocks, forex, cryptocurrencies, or commodities, Volume Forecast is compatible across various financial instruments and timeframes. This versatility makes it a valuable asset for traders in different markets.
User-Friendly Interface:
With an intuitive interface, Volume Forecast is accessible to traders of all experience levels. The indicator's user-friendly design streamlines the analysis process, making it easier for traders to incorporate it into their trading routines.
In summary, Volume Forecast is a robust TradingView indicator that combines historical candle size analysis with advanced forecasting techniques. By incorporating multiple data sources and offering customization options, it empowers traders to make more informed decisions in anticipation of market movements. Whether used independently or in conjunction with other tools, Volume Forecast is a valuable asset for traders seeking a comprehensive understanding of market dynamics.
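The description does not disclose the exact forecasting model, so the following pandas sketch only illustrates the core stated idea, projecting an average (optionally volume-weighted) candle size forward; every detail here is an assumption rather than the indicator's actual code.

```python
import pandas as pd

def forecast_candle_size(high, low, volume=None, length=20):
    """Average historical candle sizes (optionally volume-weighted) and use
    that average as the projected size of the next candle."""
    size = high - low
    if volume is not None:
        avg = (size * volume).rolling(length).sum() / volume.rolling(length).sum()
    else:
        avg = size.rolling(length).mean()
    return avg.shift()   # forecast for the current bar uses prior bars only
```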
Bogdan Ciocoiu - Litigator
Description
The Litigator is an indicator that encapsulates the value delivered by the Relative Strength Index, Ultimate Oscillator, Stochastic and Money Flow Index algorithms to produce signals enabling users to enter positions in ideal market conditions. The Litigator integrates the value delivered by the above four algorithms into one script.
This indicator is handy when trading continuation/reversal divergence strategies in conjunction with price action.
Uniqueness
The Litigator's uniqueness stems from integrating the above algorithms into the same visual area and leveraging preconfigured parameters suitable for short-term scalping (1-5 minutes).
In addition, the Litigator allows configuring the above four algorithms in such a way to coordinate signals by colour-coding or shape thickness to aid the user with identifying any emerging patterns quicker.
Furthermore, the Litigator's uniqueness is also reflected in the way it has standardised the outputs of each algorithm to look and feel the same, and in doing so, enables users to plug them in/out as needed. This also includes ensuring the ratios of the shapes are similar (applicable to the same scale).
Open-source
The indicator uses the following open-source scripts/algorithms:
(links to the four source scripts on www.tradingview.com)
Bogdan Ciocoiu - Moonshot
Description
Moonshot is an indicator that encapsulates the value delivered by the TSI, MACD, Awesome Oscillator and CCI algorithms to produce signals to enable users to enter positions in ideal market conditions. Moonshot integrates the value delivered by the above four algorithms into one script.
This indicator is particularly useful when trading continuation/reversal divergence strategies.
Uniqueness
Moonshot's uniqueness stems from integrating the above algorithms into the same visual area and leveraging preconfigured parameters suitable for 1-3 minute scalping techniques.
In addition, Moonshot allows swapping or further configuring the above four algorithms in such a way as to align signals by colour-coding or shape thickness, to aid users in identifying any emerging patterns quicker.
Furthermore, Moonshot's uniqueness is also reflected in the way it has standardised the outputs of each algorithm to look and feel the same (including the scale at which the shapes are shown) and, in doing so, enables users to plug them in/out as needed.
Open-source
The indicator leverages the following open-source scripts/algorithms:
(links to the four source scripts on www.tradingview.com)
LengthAdaptation
Collection of dynamic length adaptation algorithms. Mostly from various Adaptive Moving Averages (they are usually just EMA otherwise). Now you can combine Adaptations with any other Moving Averages or Oscillators (see my other libraries) to get something like Deviation Scaled RSI or Fractal Adaptive VWMA. This collection is not encyclopaedic. Suggestions are welcome.
chande(src, len, sdlen, smooth, power) Chande's Dynamic Length
Parameters:
src : Series to use
len : Reference lookback length
sdlen : Lookback length of Standard deviation
smooth : Smoothing length of Standard deviation
power : Exponent of the length adaptation (lower is smaller variation)
Returns: Calculated period
Taken from Chande's Dynamic Momentum Index (CDMI or DYMOI), which is dynamic RSI with this length
Original default power value is 1, but I use 0.5
A variant of this algorithm is also included, where volume is used instead of price
vidya(src, len, dynLow) Variable Index Dynamic Average Indicator (VIDYA)
Parameters:
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
Returns: Calculated period
Standard VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
I took the adaptation part, as it is just an EMA otherwise
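A rough Python/pandas reading of this adaptation might look like the following. The mapping from |CMO| to period (an alpha base of 2/(dynLow+1), converted back to an equivalent EMA length) is an assumption made for illustration, not the library's exact code.

```python
import pandas as pd

def vidya_length(src: pd.Series, length: int = 14, dyn_low: int = 5) -> pd.Series:
    """Assumed reading: |CMO| over `length` bars scales an EMA alpha anchored
    at the lower bound; the alpha is converted back to an equivalent period."""
    diff = src.diff()
    up = diff.clip(lower=0).rolling(length).sum()
    dn = (-diff).clip(lower=0).rolling(length).sum()
    cmo = ((up - dn).abs() / (up + dn)).fillna(0.0)           # |CMO| in 0..1
    alpha = ((2.0 / (dyn_low + 1.0)) * cmo).clip(lower=1e-10)
    # Equals dyn_low when |CMO| = 1; grows (slows) as momentum fades
    return 2.0 / alpha - 1.0
```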
vidyaRS(src, len, dynHigh) Relative Strength Dynamic Length - VIDYA RS
Parameters:
src : Series to use
len : Reference lookback length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on Vitali Apirine's modification (Stocks and Commodities, January 2022) of VIDYA algorithm. The period oscillates from the Upper Bound down (fast)
I took the adaptation part, as it is just an EMA otherwise
kaufman(src, len, dynLow, dynHigh) Kaufman Efficiency Scaling
Parameters:
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on the Efficiency Ratio calculation originally used in the Kaufman Adaptive Moving Average developed by Perry J. Kaufman
I took the adaptation part, as it is just an EMA otherwise
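As a hedged illustration, here are the Efficiency Ratio and one plausible bound mapping in pandas: high efficiency (a clean trend) yields a short, fast length. The linear mapping between the bounds is assumed, not taken from the library.

```python
import pandas as pd

def kaufman_length(src: pd.Series, length: int = 14,
                   dyn_low: int = 5, dyn_high: int = 50) -> pd.Series:
    """Efficiency Ratio = |net move| / sum of |bar-to-bar moves| over `length`;
    the ratio then interpolates the period between the two bounds."""
    net = (src - src.shift(length)).abs()
    total = src.diff().abs().rolling(length).sum()
    er = (net / total).clip(0.0, 1.0).fillna(0.0)
    return dyn_high - er * (dyn_high - dyn_low)   # ER = 1 gives the fast bound
```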
ds(src, len) Deviation Scaling
Parameters:
src : Series to use
len : Reference lookback length
Returns: Calculated period
Based on Deviation Scaled Super Smoother (DSSS) by John F. Ehlers
Originally used with Super Smoother
RMS originally has 50 bar lookback. Changed to 4x length for better flexibility. Could be wrong.
maa(src, len, threshold) Median Average Adaptation
Parameters:
src : Series to use
len : Reference lookback length
threshold : Adjustment threshold (lower is smaller length, default: 0.002, min: 0.0001)
Returns: Calculated period
Based on Median Average Adaptive Filter by John F. Ehlers
Discovered and implemented by @cheatcountry:
I took the adaptation part, as it is just an EMA otherwise
fra(len, fc, sc) Fractal Adaptation
Parameters:
len : Reference lookback length
fc : Fast constant (default: 1)
sc : Slow constant (default: 200)
Returns: Calculated period
Based on FRAMA by John F. Ehlers
Modified to allow lower and upper bounds by an unknown author
I took the adaptation part, as it is just an EMA otherwise
mama(src, dynLow, dynHigh) MESA Adaptation - MAMA Alpha
Parameters:
src : Series to use
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
Returns: Calculated period
Based on MESA Adaptive Moving Average by John F. Ehlers
Introduced in the September 2001 issue of Stocks and Commodities
Inspired by the @everget implementation:
I took the adaptation part, as it is just an EMA otherwise
doAdapt(type, src, len, dynLow, dynHigh, chandeSDLen, chandeSmooth, chandePower) Execute a particular Length Adaptation from the list
Parameters:
type : Length Adaptation type to use
src : Series to use
len : Reference lookback length
dynLow : Lower bound for the dynamic length
dynHigh : Upper bound for the dynamic length
chandeSDLen : Lookback length of Standard deviation for Chande's Dynamic Length
chandeSmooth : Smoothing length of Standard deviation for Chande's Dynamic Length
chandePower : Exponent of the length adaptation for Chande's Dynamic Length (lower is smaller variation)
Returns: Calculated period (float, not limited)
doMA(type, src, len) MA wrapper on wrapper: if DSSS is selected, calculate it here
Parameters:
type : MA type to use
src : Series to use
len : Filtering length
Returns: Filtered series
Demonstration of a combined indicator: Deviation Scaled Super Smoother
DMI + HMA - No Risk Management
DMI (Directional Movement Index) and HMA (Hull Moving Average)
The DMI and HMA make a great combination: the DMI gauges the market direction, while the HMA adds confirmation of the trend's strength.
What is the DMI?
The DMI is an indicator that was developed by J. Welles Wilder in 1978. It was designed to identify the direction in which price is moving. This is done by comparing previous highs and lows and drawing two lines.
1. A Positive movement line
2. A Negative movement line
A third line can be added, which would be known as the ADX line or Average Directional Index. This can also be used to gauge the strength in which direction the market is moving.
When the Positive movement line (DI+) is above the Negative movement line (DI-), there is more upward pressure; and vice versa, when the DI- is above the DI+, that indicates more downward pressure.
Want to know more about HMA? Check out one of our other published scripts
What is this strategy doing?
We first wait for the DMI to cross in our favoured direction; after that, we wait for the HMA to signal the entry. Without both conditions being true, no trade will be made.
Long Entries
1. DI+ crosses above DI-
2. HMA line 1 is above HMA line 2
Short Entries
1. DI- Crosses above DI+
2. HMA line 1 is below HMA line 2
It's as simple as that.
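A simplified pandas sketch of those conditions follows. Two assumptions are made: "HMA line 1/line 2" are treated as two precomputed series, and the cross and the HMA filter are evaluated on the same bar, whereas the script waits for the HMA confirmation after the cross.

```python
import pandas as pd

def entry_signals(di_plus: pd.Series, di_minus: pd.Series,
                  hma1: pd.Series, hma2: pd.Series):
    """Long: DI+ crosses above DI- while HMA line 1 is above HMA line 2.
    Short: the mirror image."""
    cross_up = (di_plus > di_minus) & (di_plus.shift() <= di_minus.shift())
    cross_dn = (di_minus > di_plus) & (di_minus.shift() <= di_plus.shift())
    long_entry = cross_up & (hma1 > hma2)
    short_entry = cross_dn & (hma1 < hma2)
    return long_entry, short_entry
```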
Conclusion
While this strategy does have its downsides, these can be reduced by adding some risk management to the script. In general, trade profitability is above average, and the max drawdown is at a minimum.
The settings have been optimised to suit BTCUSDT PERP markets, though with small adjustments it can be used on many assets!
SMU Stock Thermometer
This script shows various technical indicators in a stacked vertical candle called a Market Thermometer.
It helps to see the price action in one single vertical column where the actual price moves up or down. So you can see the price change based on your custom setting levels.
I've been studying the ALGO for over a year and have made many live experimental trades, both long and short. So, I'm trying to find a way to see what the ALGO's next move is. If that sounds far-fetched, then you should see my other published scripts.
Here is an example of how the ALGO dances around old indicators, which is why I started creating a bunch of new indicators that the ALGO doesn't know:
Example:
Impact-driven algorithm = large volume masked as small volume to keep the price at a desired level. So, your chart says overbought, but the market doesn't drop for days.
Cost-driven algorithm = a hedge fund buys every time at a lower price and prevents others from buying low, then moves up fast. It is like a clock with millisecond timing, and the ALGO owners know when to buy low and when to sell high.
If you have a good idea, let me know so I can include it in future versions.
Enjoy, and think outside the box; that is the only way to beat the ALGO.
RSI with Swing Trade by Kelvin_V
Algorithm Description: "RSI with Swing Trade by Kelvin_V"
1. Introduction:
This algorithm uses the RSI (Relative Strength Index) and optional Moving Averages (MA) to detect potential uptrends and downtrends in the market. The key feature of this script is that it visually changes the candle colors based on the market conditions, making it easier for users to identify potential trend swings or wave patterns.
The strategy offers flexibility by allowing users to enable or disable the MA condition. When the MA condition is enabled, the strategy will confirm trends using two moving averages. When disabled, the strategy will only use RSI to detect potential market swings.
2. Key Features of the Algorithm:
RSI (Relative Strength Index):
The RSI is used to identify potential market turning points based on overbought and oversold conditions.
When the RSI exceeds a predefined upper threshold (e.g., 60), it suggests a potential uptrend.
When the RSI drops below a lower threshold (e.g., 40), it suggests a potential downtrend.
Moving Averages (MA) - Optional:
Two Moving Averages (Short MA and Long MA) are used to confirm trends.
If the Short MA crosses above the Long MA, it indicates an uptrend.
If the Short MA crosses below the Long MA, it indicates a downtrend.
Users have the option to enable or disable this MA condition.
Visual Candle Coloring:
Green candles represent a potential uptrend, indicating a bullish move based on RSI (and MA if enabled).
Red candles represent a potential downtrend, indicating a bearish move based on RSI (and MA if enabled).
3. How the Algorithm Works:
RSI Levels:
The user can set RSI upper and lower bands to represent potential overbought and oversold levels. For example:
RSI > 60: Indicates a potential uptrend (bullish move).
RSI < 40: Indicates a potential downtrend (bearish move).
Optional MA Condition:
The algorithm also allows the user to apply the MA condition to further confirm the trend:
Short MA > Long MA: Confirms an uptrend, reinforcing a bullish signal.
Short MA < Long MA: Confirms a downtrend, reinforcing a bearish signal.
This condition can be disabled, allowing the user to focus solely on RSI signals if desired.
Swing Trade Logic:
Uptrend: If the RSI exceeds the upper threshold (e.g., 60) and (optionally) the Short MA is above the Long MA, the candles will turn green to signal a potential uptrend.
Downtrend: If the RSI falls below the lower threshold (e.g., 40) and (optionally) the Short MA is below the Long MA, the candles will turn red to signal a potential downtrend.
Visual Representation:
The candle colors change dynamically based on the RSI values and moving average conditions, making it easier for traders to visually identify potential trend swings or wave patterns without relying on complex chart analysis.
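The coloring rules above reduce to a few boolean conditions; here is a small pandas sketch of them. The RSI and MA series are assumed precomputed, and the thresholds mirror the defaults described below.

```python
import numpy as np
import pandas as pd

def candle_colors(rsi: pd.Series, ma_short: pd.Series, ma_long: pd.Series,
                  upper: float = 60, lower: float = 40,
                  use_ma: bool = True) -> pd.Series:
    """Green = potential uptrend, red = potential downtrend, else neutral."""
    up = rsi > upper
    dn = rsi < lower
    if use_ma:                       # optional MA confirmation
        up &= ma_short > ma_long
        dn &= ma_short < ma_long
    return pd.Series(np.select([up, dn], ["green", "red"], default="neutral"),
                     index=rsi.index)
```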
4. User Customization:
The algorithm provides multiple customization options:
RSI Length: Users can adjust the period for RSI calculation (default is 4).
RSI Upper Band (Potential Uptrend): Users can customize the upper RSI level (default is 60) to indicate a potential bullish move.
RSI Lower Band (Potential Downtrend): Users can customize the lower RSI level (default is 40) to indicate a potential bearish move.
MA Type: Users can choose between SMA (Simple Moving Average) and EMA (Exponential Moving Average) for moving average calculations.
Enable/Disable MA Condition: Users can toggle the MA condition on or off, depending on whether they want to add moving averages to the trend confirmation process.
5. Benefits of the Algorithm:
Easy Identification of Trends: By changing candle colors based on RSI and MA conditions, the algorithm makes it easy for users to visually detect potential trend reversals and trend swings.
Flexible Conditions: The user has full control over the RSI and MA settings, allowing them to adapt the strategy to different market conditions and timeframes.
Clear Visualization: With the candle color changes, users can quickly recognize when a potential uptrend or downtrend is forming, enabling faster decision-making in their trading.
6. Example Usage:
Day traders: Can apply this strategy on short timeframes such as 5 minutes or 15 minutes to detect quick trends or reversals.
Swing traders: Can use this strategy on longer timeframes like 1 hour or 4 hours to identify and follow larger market swings.
Intramarket Difference Index Strategy
Hi Traders !!
The IDI Strategy:
In layman’s terms this strategy compares two indicators across markets and exploits their differences.
Note: it is best if the two markets are correlated, as then we know we are trading a short- to long-term deviation from both markets' general trend, with the assumption that both markets will trend again sometime in the future, thereby exhausting our trading opportunity.
📍 Important Notes:
This strategy calculates trade position size independently (i.e. risk per trade is controlled in the user inputs tab), which means that the ‘Order size’ input in the ‘Properties’ tab will have no effect on the strategy. Why? Because this allows us to define custom position size algorithms which we can use to improve our risk management and equity growth over time. Here we have the option of fixed quantity or fixed percentage of equity, ATR (Average True Range) based stops, in addition to the turtle trading position size algorithm.
‘Pyramiding’ does not work for this strategy; similar to the order size input, toggling this input will have no effect on the strategy, as the strategy explicitly defines the maximum order size to be 1.
This strategy is not perfect, and as of the writing of this post I have not traded this algo.
Always take your time to backtest and debug the strategy.
🔷 The IDI Strategy:
By default this strategy pulls data from your current TV chart and then compares it to the base market, by default BINANCE:BTCUSD . The strategy pulls SMA and RSI data from both markets (we call this the difference data), standardizes the data (solving the different unit problem across markets) such that it is comparable, and then differences the data, calling the result of this transformation and differencing the Intramarket Difference (ID). The formula for the ID is
ID = market1_diff_data - market2_diff_data   (1)
Where
market(i)_diff_data = diff_data / ATR(j)_market(i)^0.5,
with i = {1, 2} and j a positive integer (the ATR lookback)
Formula (1) is interpreted as follows:
When ID > 0: this means the current market outperforms the base market
When ID = 0: Markets are at long run equilibrium
When ID < 0: this means the current market underperforms the base market
To form the strategy we define one of two strategy types, Trend and Mean Reversion respectively.
🔸 Trend Case:
Given the ‘‘Strategy Type’’ is set to TREND, we define a threshold: if the ID crosses over it we go long, and if the ID crosses under its negative we go short.
The motivating idea is that the ID is an indicator of the two symbols being out of sync, and given we know volatility clustering, momentum and mean reversion of anomalies to be stylised facts of financial data, we can construct a trading premise. Let's first talk more about this premise.
For some markets (cryptocurrency markets - synthetic symbols in TV) the stylised fact of momentum holds: higher momentum is followed by higher momentum. Since momentum is a vector quantity (with magnitude and direction), it can be both positive and negative; i.e. when the ID crosses above some threshold we assume it will continue in that direction for some time before reverting back to its long-run equilibrium of 0, which is a reasonable assumption to make if the markets are correlated. For example, for the BTCUSD - ETHUSD pair, if the ID > +threshold (inputs for MA and RSI based ID thresholds are found under the ‘‘INTRAMARKET DIFFERENCE INDEX’’ group), ETHUSD outperforms BTCUSD; we assume the momentum will continue, so we go long ETHUSD.
In the standard case we would exit the market when the IDI returns to its long run equilibrium of 0 (for the positive case the ID may return to 0 because ETH’s difference data may have decreased or BTC’s difference data may have increased). However in this strategy we will not define this as our exit condition. Why?
This is because we want to ‘‘let our winners run’’. To achieve this we define a trailing Donchian Channel stop loss (along with a fixed ATR based stop as our volatility proxy). If we were to use the 0 exit, the strategy might print a buy signal (ID > +threshold in the simple case; market regimes may be used), return to 0, and then print another buy signal, and this process can loop many times. This high trade frequency means we fail to capture the entire market move, lowering our profit; furthermore, on lower time frames high trade frequency means we pay more in transaction costs (due to price slippage, commission and bid-ask spread), which means less profit.
By capturing the sum of many momentum moves we are essentially following the trend hence the trend following strategy type.
Here we also print the IDI (with default strategy settings and the MA difference type); we can see that by letting our winners run we may catch many valid momentum moves, resulting in a larger final PnL than if we had exited based on the equilibrium condition (valid trades are denoted by solid green and red arrows respectively, and all other valid trades which occur within the original signal are small light green and red arrows).
another example...
Note: if you would like to plot the IDI separately, copy and paste the following code into a new Pine Script indicator template.
indicator("IDI")
// INTRAMARKET INDEX
var string g_idi = "intramarket difference index"
ui_index_1 = input.symbol("BINANCE:BTCUSD", title = "Base market", group = g_idi)
// ui_index_2 = input.symbol("BINANCE:ETHUSD", title = "Quote Market", group = g_idi)
type = input.string("MA", title = "Differencing Series", options = ["MA", "RSI"], group = g_idi)
ui_ma_lkb = input.int(24, title = "lookback of ma and volatility scaling constant", group = g_idi)
ui_rsi_lkb = input.int(14, title = "Lookback of RSI", group = g_idi)
ui_atr_lkb = input.int(300, title = "ATR lookback - Normalising value", group = g_idi)
ui_ma_threshold = input.float(5, title = "Threshold of Upward/Downward Trend (MA)", group = g_idi)
ui_rsi_threshold = input.float(20, title = "Threshold of Upward/Downward Trend (RSI)", group = g_idi)
//>>+----------------------------------------------------------------+}
// CUSTOM FUNCTIONS |
//<<+----------------------------------------------------------------+{
// construct UDT (User defined type) containing the IDI (Intramarket Difference Index) source values
// UDT will hold many variables / functions grouped under the UDT
type functions
    float Close // close price
    float ma    // ma of symbol
    float rsi   // rsi of the asset
    float atr   // atr of the asset

// the security data
getUDTdata(symbol, malookback, rsilookback, atrlookback) =>
    indexHighTF = barstate.isrealtime ? 1 : 0 // offset one bar back on realtime bars to avoid repainting
    [close_, ma_, rsi_, atr_] = request.security(symbol, timeframe = timeframe.period,
         expression = [close[indexHighTF], // instantiate UDT variables
             ta.sma(close, malookback)[indexHighTF],
             ta.rsi(close, rsilookback)[indexHighTF],
             ta.atr(atrlookback)[indexHighTF]])
    data = functions.new(close_, ma_, rsi_, atr_)
    data

// Intramarket Difference Index
idi(type, symbol1, malookback, rsilookback, atrlookback, mathreshold, rsithreshold) =>
    threshold = float(na)
    index1 = getUDTdata(symbol1, malookback, rsilookback, atrlookback)
    index2 = getUDTdata(syminfo.tickerid, malookback, rsilookback, atrlookback)
    // declare difference variables for both base and quote symbols, conditional on which difference type is selected
    var float diffindex1 = 0.0
    var float diffindex2 = 0.0
    // declare Intramarket Difference Index based on series type, note
    // if > 0, index 2 outperforms index 1, buy index 2 (momentum based) until equilibrium
    // if < 0, index 2 underperforms index 1, sell index 2 (momentum based) until equilibrium
    // for idi to be valid both series must be stationary and normalised so both series have the same scale
    intramarket_difference = 0.0
    if type == "MA"
        threshold := mathreshold
        diffindex1 := (index1.Close - index1.ma) / math.pow(index1.atr * malookback, 0.5)
        diffindex2 := (index2.Close - index2.ma) / math.pow(index2.atr * malookback, 0.5)
        intramarket_difference := diffindex2 - diffindex1
    else if type == "RSI"
        threshold := rsithreshold // the RSI threshold, not the RSI lookback
        diffindex1 := index1.rsi
        diffindex2 := index2.rsi
        intramarket_difference := diffindex2 - diffindex1
    // return the index together with the active threshold
    [intramarket_difference, threshold]
//>>+----------------------------------------------------------------+}
// STRATEGY FUNCTIONS CALLS |
//<<+----------------------------------------------------------------+{
// plot the intramarket difference
[intramarket_difference, threshold] = idi(type,
ui_index_1,
ui_ma_lkb,
ui_rsi_lkb,
ui_atr_lkb,
ui_ma_threshold,
ui_rsi_threshold)
//>>+----------------------------------------------------------------+}
plot(intramarket_difference, color = color.orange)
hline(type == "MA" ? ui_ma_threshold : ui_rsi_threshold, color = color.green)
hline(type == "MA" ? -ui_ma_threshold : -ui_rsi_threshold, color = color.red)
hline(0)
Note it is possible that after printing a buy the strategy then prints many sell signals before returning to a buy, which again has the same implication (less profit, potentially because we exit early only for price to continue upwards, hence missing the larger "trend"). The image below showcases this scenario; again, by allowing our winners to run we may capture more profit (theoretically).
This should be clear...
🔸 Mean Reversion Case:
We stated prior that mean reversion of anomalies is a stylised fact of financial data. How can we exploit this?
We exploit this by normalizing the ID with the Ehlers Fisher transformation. The transformed data is then assumed to be approximately normally distributed. To form the strategy we employ the same logic as for a z-score: if the Fisher-transformed ID > 2.5 (or < -2.5) we buy (or short). Our exit conditions remain unchanged (fixed ATR stop and trailing Donchian stop).
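As a rough sketch of this normalization step (assuming a simple min-max rescale of the ID to (-1, 1) before the transform; the strategy's exact implementation may differ):
```
//@version=5
indicator("Fisher-normalized ID sketch")
id  = close - ta.sma(close, 24)            // placeholder series standing in for the ID
len = 100
hi  = ta.highest(id, len)
lo  = ta.lowest(id, len)
x   = 2 * ((id - lo) / (hi - lo) - 0.5)    // rescale to (-1, 1)
xc  = math.max(math.min(x, 0.999), -0.999) // clamp to keep the log finite
ft  = 0.5 * math.log((1 + xc) / (1 - xc))  // Fisher transform
plot(ft, color = color.orange)
hline(2.5, color = color.green)
hline(-2.5, color = color.red)
```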
🔷 Position Sizing:
If ‘‘Fixed Risk From Initial Balance’’ is toggled true this means we risk a fixed percentage of our initial balance, if false we risk a fixed percentage of our equity (current balance).
Note we also employ a volatility adjusted position sizing formula, the turtle trading method, which is defined as follows.
Turtle position size = (1/ r * ATR * DV) * C
Where,
r = risk factor coefficient (default is 20)
ATR(j) = risk proxy, over j time steps
DV = Dollar Volatility, where DV = (1/Asset Price) * Capital at Risk
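The parenthesization of the formula and the meaning of C are not fully specified above, so here is a hedged sketch of one possible reading, taking C as the capital at risk:
```
//@version=5
indicator("Turtle sizing sketch")
r      = input.float(20, "risk factor coefficient")
atrLen = input.int(14, "ATR lookback (j)")
cRisk  = input.float(1000, "capital at risk (C, assumed)")
atrVal = ta.atr(atrLen)
dv     = (1 / close) * cRisk             // DV as defined above
qty    = (1 / (r * atrVal * dv)) * cRisk // one possible parenthesization of the formula
plot(qty)
```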
🔷 Risk Management:
Correct money management means we can limit risk and increase reward (theoretically). Here we employ
Max loss and gain per day
Max loss per trade
Max number of consecutive losing trades until trade skip
To read more see the tooltips (info circle).
🔷 Take Profit:
By default the script uses a Donchian Channel as a trailing stop and take profit. In addition to this the script defines fixed ATR stop losses (by default; this covers cases where the DC range may be too wide, making a fixed ATR stop useful). ATR take profits are also defined but optional.
ATR SL and TP defined for all trades
🔷 Hurst Regime (Regime Filter):
The Hurst Exponent (H) aims to segment the market into three different states, Trending (H > 0.5), Random Geometric Brownian Motion (H = 0.5) and Mean Reverting / Contrarian (H < 0.5). In my interpretation this can be used as a trend filter that eliminates market noise.
We utilize the trending and mean reverting states as extra conditions required for valid trades for the two strategy types respectively, in the process increasing our trade entry quality.
🔷 Example model Architecture:
Here is an example of one configuration of this strategy, combining all aspects discussed in this post.
Future Updates
- Automation integration (next update)
AI SuperTrend x Pivot Percentile - Strategy [PresentTrading]█ Introduction and How it is Different
The AI SuperTrend x Pivot Percentile strategy is a sophisticated trading approach that integrates AI-driven analysis with traditional technical indicators. Combining the AI SuperTrend with the Pivot Percentile strategy highlights several key advantages:
1. Enhanced Accuracy in Trend Prediction: The AI SuperTrend utilizes the K-Nearest Neighbors (KNN) algorithm for trend prediction, improving accuracy by considering historical data patterns. This is complemented by the Pivot Percentile analysis which provides additional context on trend strength.
2. Comprehensive Market Analysis: The integration offers a multi-faceted approach to market analysis, combining AI insights with traditional technical indicators. This dual approach captures a broader range of market dynamics.
BTC 6H L/S Performance
█ Strategy: How it Works - Detailed Explanation
🔶 AI-Enhanced SuperTrend Indicators
1. SuperTrend Calculation:
- The SuperTrend indicator is calculated using a moving average and the Average True Range (ATR). The basic formula is:
- Upper Band = Moving Average + (Multiplier × ATR)
- Lower Band = Moving Average - (Multiplier × ATR)
- The moving average type (SMA, EMA, WMA, RMA, VWMA) and the length of the moving average and ATR are adjustable parameters.
- The direction of the trend is determined based on the position of the closing price in relation to these bands.
2. AI Integration with K-Nearest Neighbors (KNN):
- The KNN algorithm is applied to predict trend direction. It uses historical price data and SuperTrend values to classify the current trend as bullish or bearish.
- The algorithm calculates the 'distance' between the current data point and historical points. The 'k' nearest data points (neighbors) are identified based on this distance.
- A weighted average of these neighbors' trends (bullish or bearish) is calculated to predict the current trend.
For more please check: Multi-TF AI SuperTrend with ADX - Strategy
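For orientation, a simplified Pine sketch of the band logic described above (without the band ratcheting of the full SuperTrend, and using the default WMA source; the published strategy's implementation is more involved):
```
//@version=5
indicator("MA-based SuperTrend sketch", overlay = true)
length = input.int(10, "Length")
factor = input.float(3.0, "Factor")
ma     = ta.wma(close, length)
atrVal = ta.atr(length)
upper  = ma + factor * atrVal
lower  = ma - factor * atrVal
var int dir = 1
dir := close > upper[1] ? 1 : close < lower[1] ? -1 : dir // flip direction on a band break
plot(dir == 1 ? lower : upper, "Trend line", color = dir == 1 ? color.green : color.red)
```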
🔶 Pivot Percentile Analysis
1. Percentile Calculation:
- This involves calculating the percentile ranks for high and low prices over a set of predefined lengths.
- The percentile function is typically defined as:
- Percentile = Value at (P/100) × (N + 1)th position
- Where P is the desired percentile, and N is the number of data points.
2. Trend Strength Evaluation:
- The calculated percentiles for highs and lows are used to determine the strength of bullish and bearish trends.
- For instance, a high percentile rank in the high prices may indicate a strong bullish trend, and vice versa for bearish trends.
For more please check: Pivot Percentile Trend - Strategy
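As an illustration of the idea (not necessarily the script's exact computation), Pine's built-in percent rank can gauge where the current high and low sit within the lookback window:
```
//@version=5
indicator("Pivot percentile sketch")
len   = input.int(10, "Length")
pHigh = ta.percentrank(high, len) // percentile rank of the current high within the window
pLow  = ta.percentrank(low, len)  // percentile rank of the current low within the window
plot(pHigh, "High percentile", color = color.green)
plot(pLow, "Low percentile", color = color.red)
```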
🔶 Strategy Integration
1. Combining SuperTrend and Pivot Percentile:
- The strategy synthesizes the insights from both AI-enhanced SuperTrend and Pivot Percentile analysis.
- It compares the trend direction indicated by the SuperTrend with the strength of the trend as suggested by the Pivot Percentile analysis.
2. Signal Generation:
- A trading signal is generated when both the AI-enhanced SuperTrend and the Pivot Percentile analysis agree on the trend direction.
- For instance, a bullish signal is generated when both the SuperTrend is bullish, and the Pivot Percentile analysis shows strength in bullish trends.
🔶 Risk Management and Filters
- ADX and DMI Filter: The strategy uses the Average Directional Index (ADX) and the Directional Movement Index (DMI) as filters to assess the trend's strength and direction.
- Dynamic Trailing Stop Loss: Based on the SuperTrend indicator, the strategy dynamically adjusts stop-loss levels to manage risk effectively.
This strategy stands out for its ability to combine real-time AI analysis with established technical indicators, offering traders a nuanced and responsive tool for navigating complex market conditions. The equations and algorithms involved are pivotal in accurately identifying market trends and potential trade opportunities.
█ Usage
To effectively use this strategy, traders should:
1. Understand the AI and Pivot Percentile Indicators: A clear grasp of how these indicators work will enable traders to make informed decisions.
2. Interpret the Signals Accurately: The strategy provides bullish, bearish, and neutral signals. Traders should align these signals with their market analysis and trading goals.
3. Monitor Market Conditions: Given that this strategy is sensitive to market dynamics, continuous monitoring is crucial for timely decision-making.
4. Adjust Settings as Needed: Traders should feel free to tweak the input parameters to suit their trading preferences and to respond to changing market conditions.
█ Default Settings and Their Impact on Performance
1. Trading Direction (Default: "Both")
Effect: Determines whether the strategy will take long positions, short positions, or both. Adjusting this setting can align the strategy with the trader's market outlook or risk preference.
2. AI Settings (Neighbors: 3, Data Points: 24)
Neighbors: The number of nearest neighbors in the KNN algorithm. A higher number might smooth out noise but could miss subtle, recent changes. A lower number makes the model more sensitive to recent data but may increase noise.
Data Points: Defines the amount of historical data considered. More data points provide a broader context but may dilute recent trends' impact.
3. SuperTrend Settings (Length: 10, Factor: 3.0, MA Source: "WMA")
Length: Affects the sensitivity of the SuperTrend indicator. A longer length results in a smoother, less sensitive indicator, ideal for long-term trends.
Factor: Determines the bandwidth of the SuperTrend. A higher factor creates wider bands, capturing larger price movements but potentially missing short-term signals.
MA Source: The type of moving average used (e.g., WMA - Weighted Moving Average). Different MA types can affect the trend indicator's responsiveness and smoothness.
4. AI Trend Prediction Settings (Price Trend: 10, Prediction Trend: 80)
Price Trend and Prediction Trend Lengths: These settings define the lengths of weighted moving averages for price and SuperTrend, impacting the responsiveness and smoothness of the AI's trend predictions.
5. Pivot Percentile Settings (Length: 10)
Length: Influences the calculation of pivot percentiles. A shorter length makes the percentile more responsive to recent price changes, while a longer length offers a broader view of price trends.
6. ADX and DMI Settings (ADX Length: 14, Time Frame: 'D')
ADX Length: Defines the period for the Average Directional Index calculation. A longer period results in a smoother ADX line.
Time Frame: Sets the time frame for the ADX and DMI calculations, affecting the sensitivity to market changes.
7. Commission, Slippage, and Initial Capital
These settings relate to transaction costs and initial investment, directly impacting net profitability and strategy feasibility.
HolidayLibrary "Holiday"
- Full Control over Holidays and Daylight Savings Time (DLS)
The Holiday Library is an essential tool for traders and analysts who engage in backtesting and live trading. This comprehensive library enables the incorporation of crucial calendar elements - specifically Daylight Savings Time (DLS) adjustments and public holidays - into trading strategies and backtesting environments.
Key Features:
- DLS Adjustments: The library takes into account the shifts in time due to Daylight Savings. This feature is particularly vital for backtesting strategies, as DLS can impact trading hours, which in turn affects the volatility and liquidity in the market. Accurate DLS adjustments ensure that backtesting scenarios are as close to real-life conditions as possible.
- Comprehensive Holiday Metadata: The library includes a rich set of holiday metadata, allowing for the detailed scheduling of trading activities around public holidays. This feature is crucial for avoiding skewed results in backtesting, where holiday trading sessions might differ significantly in terms of volume and price movement.
- Customizable Holiday Schedules: Users can add or remove specific holidays, tailoring the library to fit various regional market schedules or specific trading requirements.
- Visualization Aids: The library supports on-chart labels, making it visually intuitive to identify holidays and DLS shifts directly on trading charts.
Use Cases:
1. Strategy Development: When developing trading strategies, it’s important to account for non-trading days and altered trading hours due to holidays and DLS. This library enables a realistic and accurate representation of these factors.
2. Risk Management: Trading around holidays can be riskier due to thinner liquidity and greater volatility. By integrating holiday data, traders can better manage their risk exposure.
3. Backtesting Accuracy: For backtesting to be effective, it must simulate the actual market conditions as closely as possible. Incorporating holidays and DLS adjustments contributes to more reliable and realistic backtesting results.
4. Global Trading: For traders active in multiple global markets, this library provides an easy way to handle different holiday schedules and DLS shifts across regions.
The Holiday Library is a versatile tool that enhances the precision and realism of trading simulations and strategy development. Its integration into the trading workflow is straightforward and beneficial for both novice and experienced traders.
EasterAlgo(_year)
Calculates the date of Easter Sunday for a given year using the Anonymous Gregorian algorithm.
`Gauss Algorithm for Easter Sunday` was developed by the mathematician Carl Friedrich Gauss
This algorithm is based on the cycles of the moon and the fact that Easter always falls on the first Sunday after the first ecclesiastical full moon that occurs on or after March 21.
While it's not considered to be 100% accurate due to rare exceptions, it does give the correct date in most cases.
It's important to note that Gauss's formula has been found to be inaccurate for some 21st-century years in the Gregorian calendar. Specifically, the next suggested failure years are 2038, 2051.
This function can be used for Good Friday (Friday before Easter), Easter Sunday, and Easter Monday (following Monday).
en.wikipedia.org
Parameters:
_year (int) : `int` - The year for which to calculate the date of Easter Sunday. This should be a four-digit year (YYYY).
Returns: tuple - The month (1-12) and day (1-31) of Easter Sunday for the given year.
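For reference, a self-contained Pine sketch of the Anonymous Gregorian (Meeus/Jones/Butcher) computation that this function describes:
```
//@version=5
indicator("Easter sketch")
// Anonymous Gregorian algorithm: returns [month, day] of Easter Sunday
easter(int _year) =>
    a = _year % 19
    b = int(_year / 100)
    c = _year % 100
    d = int(b / 4)
    e = b % 4
    f = int((b + 8) / 25)
    g = int((b - f + 1) / 3)
    h = (19 * a + b - d - g + 15) % 30
    i = int(c / 4)
    k = c % 4
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = int((a + 11 * h + 22 * l) / 451)
    mth = int((h + l - 7 * m + 114) / 31)
    dy  = (h + l - 7 * m + 114) % 31 + 1
    [mth, dy]
[em, ed] = easter(2024)     // [3, 31]: Easter Sunday 2024 fell on March 31
plot(em * 100 + ed, "MMDD") // 331 -> March 31
```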
easterInit()
Inits the date of Easter Sunday and Good Friday for a given year.
Returns: tuple - The month (1-12) and day (1-31) of Easter Sunday and Good Friday for the given year.
isLeapYear(_year)
Determine if a year is a leap year.
Parameters:
_year (int) : `int` - 4 digit year to check => YYYY
Returns: `bool` - true if input year is a leap year
method timezoneHelper(utc)
Helper function to convert UTC time.
Namespace types: series int, simple int, input int, const int
Parameters:
utc (int) : `int` - UTC time shift in hours.
Returns: `string`- UTC time string with shift applied.
weekofmonth()
Function to find the week of the month of a given Unix Time.
Returns: number - The week of the month of the specified UTC time.
dayLightSavingsAdjustedUTC(utc, adjustForDLS)
dayLightSavingsAdjustedUTC
Parameters:
utc (int) : `int` - The normal UTC timestamp to be used for reference.
adjustForDLS (bool) : `bool` - Flag indicating whether to adjust for daylight savings time (DLS).
Returns: `int` - The adjusted UTC timestamp for the given normal UTC timestamp.
getDayOfYear(monthOfYear, dayOfMonth, weekOfMonth, dayOfWeek, lastOccurrenceInMonth, holiday)
Function gets the day of the year of a given holiday (1-366)
Parameters:
monthOfYear (int)
dayOfMonth (int)
weekOfMonth (int)
dayOfWeek (int)
lastOccurrenceInMonth (bool)
holiday (string)
Returns: `int` - The day of the year of the holiday 1-366.
method buildMap(holidayMap, holiday, monthOfYear, weekOfMonth, dayOfWeek, dayOfMonth, lastOccurrenceInMonth, closingTime)
Function to build the `holidaysMap`.
Namespace types: map
Parameters:
holidayMap (map) : `map` - The map of holidays.
holiday (string) : `string` - The name of the holiday.
monthOfYear (int) : `int` - The month of the year of the holiday.
weekOfMonth (int) : `int` - The week of the month of the holiday.
dayOfWeek (int) : `int` - The day of the week of the holiday.
dayOfMonth (int) : `int` - The day of the month of the holiday.
lastOccurrenceInMonth (bool) : `bool` - Flag indicating whether the holiday is the last occurrence of the day in the month.
closingTime (int) : `int` - The closing time of the holiday.
Returns: `map` - The updated map of holidays
holidayInit(addHolidaysArray, removeHolidaysArray, defaultHolidays)
Initializes a HolidayStorage object with predefined US holidays.
Parameters:
addHolidaysArray (array) : `array` - The array of additional holidays to be added.
removeHolidaysArray (array) : `array` - The array of holidays to be removed.
defaultHolidays (bool) : `bool` - Flag indicating whether to include the default holidays.
Returns: `map` - The map of holidays.
Holidays(utc, addHolidaysArray, removeHolidaysArray, adjustForDLS, displayLabel, defaultHolidays)
Main function to build the holidays object, this is the only function from this library that should be needed. \
all functionality should be available through this function. \
With the exception of initializing a `HolidayMetaData` object to add a holiday or early close. \
\
**Default Holidays:** \
`DLS begin`, `DLS end`, `New Year's Day`, `MLK Jr. Day`, \
`Washington Day`, `Memorial Day`, `Independence Day`, `Labor Day`, \
`Columbus Day`, `Veterans Day`, `Thanksgiving Day`, `Christmas Day` \
\
**Example**
```
HolidayMetaData valentinesDay = HolidayMetaData.new(holiday="Valentine's Day", monthOfYear=2, dayOfMonth=14)
HolidayMetaData stPatricksDay = HolidayMetaData.new(holiday="St. Patrick's Day", monthOfYear=3, dayOfMonth=17)
array<HolidayMetaData> addHolidaysArray = array.from(valentinesDay, stPatricksDay)
array<string> removeHolidaysArray = array.from("DLS begin", "DLS end")
Holidays = Holidays(
     utc=-6,
     addHolidaysArray=addHolidaysArray,
     removeHolidaysArray=removeHolidaysArray,
     adjustForDLS=true,
     displayLabel=true,
     defaultHolidays=true)
plot(Holidays.newHoliday ? open : na, title="newHoliday", color=color.red, linewidth=4, style=plot.style_circles)
```
Parameters:
utc (int) : `int` - The UTC time shift in hours
addHolidaysArray (array) : `array` - The array of additional holidays to be added
removeHolidaysArray (array) : `array` - The array of holidays to be removed
adjustForDLS (bool) : `bool` - Flag indicating whether to adjust for daylight savings time (DLS)
displayLabel (bool) : `bool` - Flag indicating whether to display a label on the chart
defaultHolidays (bool) : `bool` - Flag indicating whether to include the default holidays
Returns: `HolidayObject` - The holidays object | Holidays = (holidaysMap: map, newHoliday: bool, holiday: string, dayString: string)
HolidayMetaData
HolidayMetaData
Fields:
holiday (series string) : `string` - The name of the holiday.
dayOfYear (series int) : `int` - The day of the year of the holiday.
monthOfYear (series int) : `int` - The month of the year of the holiday.
dayOfMonth (series int) : `int` - The day of the month of the holiday.
weekOfMonth (series int) : `int` - The week of the month of the holiday.
dayOfWeek (series int) : `int` - The day of the week of the holiday.
lastOccurrenceInMonth (series bool)
closingTime (series int) : `int` - The closing time of the holiday.
utc (series int) : `int` - The UTC time shift in hours.
HolidayObject
HolidayObject
Fields:
holidaysMap (map) : `map` - The map of holidays.
newHoliday (series bool) : `bool` - Flag indicating whether today is a new holiday.
activeHoliday (series bool) : `bool` - Flag indicating whether today is an active holiday.
holiday (series string) : `string` - The name of the holiday.
dayString (series string) : `string` - The day of the week of the holiday.
Grid by Volatility (Expo)█ Overview
The Grid by Volatility is designed to provide a dynamic grid overlay on your price chart. This grid is calculated based on the volatility and adjusts in real-time as market conditions change. The indicator uses Standard Deviation to determine volatility and is useful for traders looking to understand price volatility patterns, determine potential support and resistance levels, or validate other trading signals.
█ How It Works
The indicator initiates its computations by assessing the market volatility through an established statistical model: the Standard Deviation. Following the volatility determination, the algorithm calculates a central equilibrium line—commonly referred to as the "mid-line"—on the chart to serve as a baseline for additional computations. Subsequently, upper and lower grid lines are algorithmically generated and plotted equidistantly from the central mid-line, with the distance being dictated by the previously calculated volatility metrics.
█ How to Use
Trend Analysis: The grid can be used to analyze the underlying trend of the asset. For example, if the price is above the Average Line and moves toward the Upper Range, it indicates a strong bullish trend.
Support and Resistance: The grid lines can act as dynamic support and resistance levels. Price tends to bounce off these levels or breakthrough, providing potential trade opportunities.
Volatility Gauge: The distance between the grid lines serves as a measure of market volatility. Wider lines indicate higher volatility, while narrower lines suggest low volatility.
█ Settings
Volatility Length: Number of bars to calculate the Standard Deviation (Default: 200)
Squeeze Adjustment: Multiplier for the Standard Deviation (Default: 6)
Grid Confirmation Length: Number of bars to calculate the weighted moving average for smoothing the grid lines (Default: 2)
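A minimal sketch of this construction follows; the exact mid-line smoothing in the published script may differ:
```
//@version=5
indicator("Volatility grid sketch", overlay = true)
volLen  = input.int(200, "Volatility Length")
squeeze = input.float(6, "Squeeze Adjustment")
confirm = input.int(2, "Grid Confirmation Length")
dev = ta.stdev(close, volLen) * squeeze
mid = ta.wma(ta.sma(close, volLen), confirm) // smoothed baseline (assumed construction)
plot(mid, "Average Line")
plot(mid + dev, "Upper Range", color = color.red)
plot(mid - dev, "Lower Range", color = color.green)
```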
-----------------
Disclaimer
The information contained in my Scripts/Indicators/Ideas/Algos/Systems does not constitute financial advice or a solicitation to buy or sell any securities of any type. I will not accept liability for any loss or damage, including without limitation any loss of profit, which may arise directly or indirectly from the use of or reliance on such information.
All investments involve risk, and the past performance of a security, industry, sector, market, financial product, trading strategy, backtest, or individual's trading does not guarantee future results or returns. Investors are fully responsible for any investment decisions they make. Such decisions should be based solely on an evaluation of their financial circumstances, investment objectives, risk tolerance, and liquidity needs.
My Scripts/Indicators/Ideas/Algos/Systems are only for educational purposes!
Channel Based Zigzag [HeWhoMustNotBeNamed]🎲 Concept
Zigzag is built based on the price and number of offset bars. But, in this experiment, we build zigzag based on different bands such as Bollinger Band, Keltner Channel and Donchian Channel. The process is simple:
🎯 Derive bands based on input parameters
🎯 The high of a bar is considered a pivot high only if the high price is at or above the upper band.
🎯 Similarly, the low of a bar is considered a pivot low only if the low price is at or below the lower band.
🎯 Adding the pivot high/low follows the same logic as that of a regular zigzag, where a pivot high is always followed by a pivot low and vice versa.
🎯 If the new pivot added is of same direction as that of last pivot, then both pivots are compared with each other and only the extreme one is kept. (Highest in case of pivot high and lowest in case of pivot low)
🎯 If a bar has both pivot high and pivot low - pivot with same direction as previous pivot is added to the list first before adding the pivot with opposite direction.
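A condensed sketch of the pivot qualification step using Bollinger Bands (the published script also supports Keltner Channel and Donchian Channel, and adds the full zigzag bookkeeping described above):
```
//@version=5
indicator("Band-based pivot sketch", overlay = true)
[basis, upper, lower] = ta.bb(close, 20, 2.0)
pivotHighBar = high >= upper // high qualifies only when at or above the upper band
pivotLowBar  = low <= lower  // low qualifies only when at or below the lower band
plotshape(pivotHighBar, style = shape.triangledown, location = location.abovebar, color = color.red)
plotshape(pivotLowBar, style = shape.triangleup, location = location.belowbar, color = color.green)
```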
🎲 Use Cases
Can be used for pattern recognition algorithms instead of standard zigzag. This will help derive patterns which are relative to bands and channels.
Example: John Bollinger explains how to manually scan double tap using Bollinger Bands in this video: www.youtube.com This modified zigzag base can be used to achieve the same using algorithmic means.
🎲 Settings
Few simple configurations which will let you select the band properties. Notice that there is no zigzag length here. All the calculations depend on the bands.
With the bands displayed, the indicator looks something like this
Note that pivots do not always represent highest/lowest prices. They represent highest/lowest price relative to bands.
As mentioned many times, the application of zigzag is not for buying at a lower price and selling at a higher price. It is mainly used for pattern recognition, either manually or via algorithms. Let's build new harmonic patterns, chart patterns and trend lines using the new zigzag.
Machine Learning: kNN (New Approach)Description:
kNN is a very robust and simple method for data classification and prediction. It is very effective if the training data is large. However, it is distinguished by the difficulty of determining its main parameter, K (the number of nearest neighbors), beforehand. The computation cost is also quite high because we need to compute the distance from each instance to all training samples. Nevertheless, in algorithmic trading kNN is reported to perform on a par with such techniques as SVM and Random Forest. It is also widely used in the area of data science.
The input data is just a long series of prices over time without any particular features. The value to be predicted is just the next bar's price. The way that this problem is solved for both nearest neighbor techniques and for some other types of prediction algorithms is to create training records by taking, for instance, 10 consecutive prices and using the first 9 as predictor values and the 10th as the prediction value. In this way, given 100 data points in your time series you could create 10 different training records. It's possible to create even more training records than 10 by creating a new record starting at every data point. For instance, you could take the first 10 data points and create a record. Then you could take the 10 consecutive data points starting at the second data point, the 10 consecutive data points starting at the third data point, etc.
By default, shown are only 10 initial data points as predictor values and the 6th as the prediction value.
Here is a step-by-step workthrough on how to compute K nearest neighbors (KNN) algorithm for quantitative data:
1. Determine parameter K = number of nearest neighbors.
2. Calculate the distance between the instance and all the training samples. As we are dealing with one-dimensional distance, we simply take the absolute difference between the instance and the training value (|x - v|).
3. Rank the distance and determine nearest neighbors based on the K'th minimum distance.
4. Gather the values of the nearest neighbors.
5. Use average of nearest neighbors as the prediction value of the instance.
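A compact Pine sketch of these five steps for the one-dimensional case (the window size and K here are illustrative, not the script's values):
```
//@version=5
indicator("kNN next-bar sketch")
k = input.int(5, "K - number of nearest neighbors")
n = input.int(100, "training window")
float pred = na
if bar_index > n
    dists = array.new_float()
    nexts = array.new_float()
    for i = 1 to n
        array.push(dists, math.abs(close - close[i])) // step 2: 1-D distance |x - v|
        array.push(nexts, close[i - 1])               // the value that followed close[i]
    float sum = 0.0
    for j = 1 to k                                    // steps 3-4: take the k smallest distances
        idx = array.indexof(dists, array.min(dists))
        sum += array.get(nexts, idx)
        array.remove(dists, idx)
        array.remove(nexts, idx)
    pred := sum / k                                   // step 5: average as the prediction
plot(pred, color = color.teal)
```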
The original logic of the algorithm was slightly modified, and as a result at approx. N=17 the resulting curve nicely approximates that of the sma(20). See the description below. Besides the SMA-like MA, this algorithm also gives you a hint on the direction of the next bar's move.
LinearRegressionLibraryLibrary "LinearRegressionLibrary" contains functions for fitting a regression line to the time series by means of different models, as well as functions for estimating the accuracy of the fit.
Linear regression algorithms:
RepeatedMedian(y, n, lastBar) applies repeated median regression (robust linear regression algorithm) to the input time series within the selected interval.
Parameters:
y :: float series, source time series (e.g. close)
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
mSlope :: float, slope of the regression line
mInter :: float, intercept of the regression line
TheilSen(y, n, lastBar) applies the Theil-Sen estimator (robust linear regression algorithm) to the input time series within the selected interval.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
tsSlope :: float, slope of the regression line
tsInter :: float, intercept of the regression line
OrdinaryLeastSquares(y, n, lastBar) applies the ordinary least squares regression (non-robust) to the input time series within the selected interval.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
Output:
olsSlope :: float, slope of the regression line
olsInter :: float, intercept of the regression line
Model performance metrics:
metricRMSE(y, n, lastBar, slope, intercept) returns the Root-Mean-Square Error (RMSE) of the regression. The better the model, the lower the RMSE.
Parameters:
y :: float series, source time series (e.g. close)
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
rmse :: float, RMSE value
metricMAE(y, n, lastBar, slope, intercept) returns the Mean Absolute Error (MAE) of the regression. MAE is similar to RMSE but is less sensitive to outliers. The better the model, the lower the MAE.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
mae :: float, MAE value
metricR2(y, n, lastBar, slope, intercept) returns the coefficient of determination (R squared) of the regression. The better the linear regression fits the data (compared to the sample mean), the closer the value of the R squared is to 1.
Parameters:
y :: float series, source time series
n :: integer, the length of the selected time interval
lastBar :: integer, index of the last bar of the selected time interval (defines the position of the interval)
slope :: float, slope of the evaluated linear regression line
intercept :: float, intercept of the evaluated linear regression line
Output:
Rsq :: float, R-squared score
Usage example:
//@version=5
indicator('ExampleLinReg', overlay=true)
// import the library
import tbiktag/LinearRegressionLibrary/1 as linreg
// define the studied interval: last 100 bars
int Npoints = 100
int lastBar = bar_index
int firstBar = bar_index - Npoints
// apply repeated median regression to the closing price time series within the specified interval
[slope, intercept] = linreg.RepeatedMedian(close, Npoints, lastBar)
// calculate the root-mean-square error of the obtained linear fit
rmse = linreg.metricRMSE(close, Npoints, lastBar, slope, intercept)
// plot the line and print the RMSE value
float y1 = intercept
float y2 = intercept + slope * (Npoints - 1)
if barstate.islast
    line.new(firstBar, y1, lastBar, y2)
    label.new(lastBar, y2, text='RMSE = ' + str.format("{0,number,#.#}", rmse))
Pulu's 3 Moving Averages
Release version 1, date 2021-09-28
This script allows you to customize three sets of moving averages: turn them on/off and set their colors and parameters. It also tags the start date of the last set of moving averages, if there is one. This release (version 1) supports eight moving average algorithms:
ALMA, Arnaud Legoux Moving Average
EMA, Exponential Moving Average
RMA, Adjusted exponential moving average (aka Wilder’s EMA)
SMA, Simple Moving Average
SWMA, Symmetrically-Weighted Moving Average
VWAP, Volume-Weighted Average Price
VWMA, Volume-Weighted Moving Average
WMA, Weighted Moving Average
The availability and function parameters:

Func.                       Availability    Parameters
ALMA                        MA1, MA2, MA3   source, length, offset, sigma
EMA, RMA, SMA, VWMA, WMA    MA1, MA2, MA3   source, length
SWMA, VWAP                  MA1             source

Parameters:

Parameter     Description
source        the series of values to process. The default is to use the closing price to calculate the moving average.
length        an integer value that defines the number of bars to calculate the moving average on. The SWMA and VWAP do not use this parameter.
ALMA offset   a floating-point value that controls the tradeoff between smoothness (with a value closer to 1) and responsiveness (with a value closer to 0). This parameter is only used by ALMA.
ALMA sigma    a floating-point value that specifies the ALMA's smoothness. The larger this value, the smoother the moving average is. This parameter is only used by ALMA.
I'm not sure whether it is needed, so I did not give the three moving averages of the script individual algorithm settings. That would involve much more complicated condition testing and use up more of the TradingView script line limit. If you need to combine different algorithms in the three sets of moving averages, or have other ideas, leave a message to let me know; maybe I will try it in the next update.
Cycles StrategyThis back-testable strategy is a modified version of the Stochastic strategy. It is meant to accompany the modified Stochastic indicator: "Cycles".
Modifications to the Stochastic strategy include;
1. Programmable settings for the Stochastic Periods (%K, %D and Smooth %K).
2. Programmable settings for the MACD Periods (Fast, Slow, Smoothing)
3. Programmable thresholds for %K, to qualify a potential entry strategy.
4. Programmable thresholds for %D, to qualify a potential exit strategy.
5. Buttons to choose which components to use in the trading algorithm.
6. Choose the month and year to back test.
The trading algorithm:
1. Entry: when %K exceeds the upper/lower threshold and then hooks down/up in the direction of the Moving Average (MA). This is the minimum entry qualification.
2. Exit: when %D exceeds the lower/upper threshold and is angled in the direction of the trade.
3. Additional entry filters include the direction of the MACD, Signal and %D. There is also the "cliff": a long entry requires a higher high, and a short entry a lower low.
4. Strategy can only go "Long" or "Short" depending on the selected setting.
5. By matching the settings in the "Cycles" indicator, you can (almost) see what the strategy is doing.
6. Be sure to select the "Recalculate" buttons, to recalculate on every new Tick, for best results.
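As a sketch of the minimum entry logic in step 1 (the thresholds and the MA filter length are assumptions for illustration, not the strategy's defaults):
```
//@version=5
indicator("Stochastic hook sketch")
kLen    = input.int(14, "%K")
smoothK = input.int(3, "Smooth %K")
upper   = input.float(80, "%K upper threshold")
lower   = input.float(20, "%K lower threshold")
k  = ta.sma(ta.stoch(close, high, low, kLen), smoothK)
ma = ta.sma(close, 50) // assumed MA length for the trend filter
longHook  = k[1] < lower and k > k[1] and ma > ma[1] // %K below the lower band, hooks up, MA rising
shortHook = k[1] > upper and k < k[1] and ma < ma[1] // %K above the upper band, hooks down, MA falling
plotshape(longHook, style = shape.triangleup, location = location.bottom, color = color.green)
plotshape(shortHook, style = shape.triangledown, location = location.top, color = color.red)
```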
Please click the Like button and leave a comment if you appreciate this script. Improvements will be implemented as time goes on.
I am not a licensed trade advisor. This strategy is for entertainment only. Use at your own risk!
Intellect_city - Halvings Bitcoin CycleWhat is halving?
The halving timer shows when the next Bitcoin halving will occur, as well as the dates of past halvings. This event occurs every 210,000 blocks, which is approximately every 4 years. Halving reduces the emission reward by half. The original Bitcoin reward was 50 BTC per block found.
Why is halving necessary?
Halving allows you to maintain an algorithmically specified emission level. Anyone can verify that no more than 21 million bitcoins can be issued using this algorithm. Moreover, everyone can see how much was issued earlier, at what speed the emission is happening now, and how many bitcoins remain to be mined in the future. Even a sharp increase or decrease in mining capacity will not significantly affect this process. In this case, during the next difficulty recalculation, which occurs every 2016 blocks, the mining difficulty will be recalculated so that blocks are still found approximately once every ten minutes.
How does halving work in Bitcoin blocks?
The miner who assembles a block adds a so-called coinbase transaction. This transaction has no inputs, only an output crediting the newly issued coins to the miner's address. If the miner's block wins, the entire network will consider these coins to have been obtained through legitimate means. The maximum reward size is determined by the algorithm; the miner can claim the maximum reward for the current period, or less. If he sets the reward higher than allowed, the network will reject the block and the miner will receive nothing. After each halving, miners have to halve the reward they assign to themselves, otherwise their blocks will be rejected and will not make it into the main branch of the blockchain.
The impact of halving on the price of Bitcoin
It is believed that with constant demand, a halving of supply should double the value of the asset. In practice, the market knows when the halving will occur and prepares for this event in advance. Typically, the Bitcoin rate begins to rise about six months before the halving, and during the halving itself it does not change much. On average across past periods, the upper peak of the rate can be observed more than a year after the halving. It is almost impossible to predict future periods because, in addition to the reduction in emission, many other factors influence the exchange rate: for example, major hacks or bankruptcies of crypto companies, the situation on the stock market, manipulation by "whales", or changes in legislative regulation.
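To make the schedule below concrete: the block reward at any height is 50 / 2^floor(height / 210000). A one-line Pine sketch (ignoring the consensus rule that rewards are truncated to whole satoshis):
```
//@version=5
indicator("Halving reward sketch")
reward(int height) =>
    50.0 / math.pow(2, math.floor(height / 210000))
plot(reward(840000)) // 3.125 BTC - the reward after the fourth halving
```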
---------------------------------------------
Table - Past and future Bitcoin halvings:
---------------------------------------------
Date: Block height: Reward:
0 - 03-01-2009 - 0 block - 50 BTC
1 - 28-11-2012 - 210000 block - 25 BTC
2 - 09-07-2016 - 420000 block - 12.5 BTC
3 - 11-05-2020 - 630000 block - 6.25 BTC
4 - 20-04-2024 - 840000 block - 3.125 BTC
5 - 24-03-2028 - 1050000 block - 1.5625 BTC
6 - 26-02-2032 - 1260000 block - 0.78125 BTC
7 - 30-01-2036 - 1470000 block - 0.390625 BTC
8 - 03-01-2040 - 1680000 block - 0.1953125 BTC
9 - 07-12-2043 - 1890000 block - 0.09765625 BTC
10 - 10-11-2047 - 2100000 block - 0.04882813 BTC
11 - 14-10-2051 - 2310000 block - 0.02441406 BTC
12 - 17-09-2055 - 2520000 block - 0.01220703 BTC
13 - 21-08-2059 - 2730000 block - 0.00610352 BTC
14 - 25-07-2063 - 2940000 block - 0.00305176 BTC
15 - 28-06-2067 - 3150000 block - 0.00152588 BTC
16 - 01-06-2071 - 3360000 block - 0.00076294 BTC
17 - 05-05-2075 - 3570000 block - 0.00038147 BTC
18 - 08-04-2079 - 3780000 block - 0.00019073 BTC
19 - 12-03-2083 - 3990000 block - 0.00009537 BTC
20 - 13-02-2087 - 4200000 block - 0.00004768 BTC
21 - 17-01-2091 - 4410000 block - 0.00002384 BTC
22 - 21-12-2094 - 4620000 block - 0.00001192 BTC
23 - 24-11-2098 - 4830000 block - 0.00000596 BTC
24 - 29-10-2102 - 5040000 block - 0.00000298 BTC
25 - 02-10-2106 - 5250000 block - 0.00000149 BTC
26 - 05-09-2110 - 5460000 block - 0.00000075 BTC
27 - 09-08-2114 - 5670000 block - 0.00000037 BTC
28 - 13-07-2118 - 5880000 block - 0.00000019 BTC
29 - 16-06-2122 - 6090000 block - 0.00000009 BTC
30 - 20-05-2126 - 6300000 block - 0.00000005 BTC
31 - 23-04-2130 - 6510000 block - 0.00000002 BTC
32 - 27-03-2134 - 6720000 block - 0.00000001 BTC