S&P 2024: Magnificent 7 vs. the rest of S&PThis chart is designed to calculate and display the percentage change of the Magnificent 7 (M7) stocks and the S&P 500 excluding the M7 (Ex-M7) from the beginning of 2024 to the most recent data point. The Magnificent 7 consists of seven major technology stocks: Apple (AAPL), Microsoft (MSFT), Amazon (AMZN), Alphabet (GOOGL), Meta (META), Nvidia (NVDA), and Tesla (TSLA). These stocks are a significant part of the S&P 500 and can have a substantial impact on its overall performance.
Key Components and Functionality:
1. Start of 2024 Baseline:
- The script identifies the closing prices of the S&P 500 and each of the Magnificent 7 stocks on the first trading day of 2024. These values serve as the baseline for calculating percentage changes.
2. Current Value Calculation:
- It then fetches the most recent closing prices of these stocks and the S&P 500 index to calculate their current values.
3. Percentage Change Calculation:
- The script calculates the percentage change for the M7 by comparing the sum of the current prices of the M7 stocks to their combined value at the start of 2024.
- Similarly, it calculates the percentage change for the Ex-M7 by comparing the current value of the S&P 500 excluding the M7 to its value at the start of 2024.
4. Plotting:
- The calculated percentage changes are plotted on the chart, with the M7’s percentage change shown in red and the Ex-M7’s percentage change shown in blue.
Use Case:
This indicator is particularly useful for investors and analysts who want to understand how much the performance of the S&P 500 in 2024 is driven by the Magnificent 7 stocks compared to the rest of the index. By showing the percentage change from the start of the year, it provides clear insights into the relative growth or decline of these two segments of the market over the course of the year.
Visualization:
- Red Line (M7 % Change): Displays the percentage change of the combined value of the Magnificent 7 stocks since the start of 2024.
- Blue Line (Ex-M7 % Change): Displays the percentage change of the S&P 500 excluding the Magnificent 7 since the start of 2024.
This script enables a straightforward comparison of the performance of the M7 and Ex-M7, highlighting which segment is contributing more to the overall movement of the S&P 500 in 2024.
Recherche dans les scripts pour "2024年A股交易日历"
US Presidents 1920–2024Description:
This indicator displays all U.S. presidential elections from 1920 to 2024 on your chart.
Features:
Vertical lines at the date of each presidential election.
Line color by party:
Red = Republican
Blue = Democrat
Gray = Other/None
Labels showing the name of each president.
Modern flag style: Presidents from 1900 onward are highlighted as modern, giving clear historical separation.
Fully overlayed on the price chart for timeline context.
Customizable: Label position (above/below bar) and line width.
Use case: Useful for analyzing modern U.S. presidential cycles, market reactions to elections, or quickly referencing recent presidents directly on charts.
Bitcoin Logarithmic Growth Curve 2024The Bitcoin logarithmic growth curve is a concept used to analyze Bitcoin's price movements over time. The idea is based on the observation that Bitcoin's price tends to grow exponentially, particularly during bull markets. It attempts to give a long-term perspective on the Bitcoin price movements.
The curve includes an upper and lower band. These bands often represent zones where Bitcoin's price is overextended (upper band) or undervalued (lower band) relative to its historical growth trajectory. When the price touches or exceeds the upper band, it may indicate a speculative bubble, while prices near the lower band may suggest a buying opportunity.
Unlike most Bitcoin growth curve indicators, this one includes a logarithmic growth curve optimized using the latest 2024 price data, making it, in our view, superior to previous models. Additionally, it features statistical confidence intervals derived from linear regression, compatible across all timeframes, and extrapolates the data far into the future. Finally, this model allows users the flexibility to manually adjust the function parameters to suit their preferences.
The Bitcoin logarithmic growth curve has the following function:
y = 10^(a * log10(x) - b)
In the context of this formula, the y value represents the Bitcoin price, while the x value corresponds to the time, specifically indicated by the weekly bar number on the chart.
How is it made (You can skip this section if you’re not a fan of math):
To optimize the fit of this function and determine the optimal values of a and b, the previous weekly cycle peak values were analyzed. The corresponding x and y values were recorded as follows:
113, 18.55
240, 1004.42
451, 19128.27
655, 65502.47
The same process was applied to the bear market low values:
103, 2.48
267, 211.03
471, 3192.87
676, 16255.15
Next, these values were converted to their linear form by applying the base-10 logarithm. This transformation allows the function to be expressed in a linear state: y = a * x − b. This step is essential for enabling linear regression on these values.
For the cycle peak (x,y) values:
2.053, 1.268
2.380, 3.002
2.654, 4.282
2.816, 4.816
And for the bear market low (x,y) values:
2.013, 0.394
2.427, 2.324
2.673, 3.504
2.830, 4.211
Next, linear regression was performed on both these datasets. (Numerous tools are available online for linear regression calculations, making manual computations unnecessary).
Linear regression is a method used to find a straight line that best represents the relationship between two variables. It looks at how changes in one variable affect another and tries to predict values based on that relationship.
The goal is to minimize the differences between the actual data points and the points predicted by the line. Essentially, it aims to optimize for the highest R-Square value.
Below are the results:
It is important to note that both the slope (a-value) and the y-intercept (b-value) have associated standard errors. These standard errors can be used to calculate confidence intervals by multiplying them by the t-values (two degrees of freedom) from the linear regression.
These t-values can be found in a t-distribution table. For the top cycle confidence intervals, we used t10% (0.133), t25% (0.323), and t33% (0.414). For the bottom cycle confidence intervals, the t-values used were t10% (0.133), t25% (0.323), t33% (0.414), t50% (0.765), and t67% (1.063).
The final bull cycle function is:
y = 10^(4.058 ± 0.133 * log10(x) – 6.44 ± 0.324)
The final bear cycle function is:
y = 10^(4.684 ± 0.025 * log10(x) – -9.034 ± 0.063)
The main Criticisms of growth curve models:
The Bitcoin logarithmic growth curve model faces several general criticisms that we’d like to highlight briefly. The most significant, in our view, is its heavy reliance on past price data, which may not accurately forecast future trends. For instance, previous growth curve models from 2020 on TradingView were overly optimistic in predicting the last cycle’s peak.
This is why we aimed to present our process for deriving the final functions in a transparent, step-by-step scientific manner, including statistical confidence intervals. It's important to note that the bull cycle function is less reliable than the bear cycle function, as the top band is significantly wider than the bottom band.
Even so, we still believe that the Bitcoin logarithmic growth curve presented in this script is overly optimistic since it goes parly against the concept of diminishing returns which we discussed in this post:
This is why we also propose alternative parameter settings that align more closely with the theory of diminishing returns.
Our recommendations:
Drawing on the concept of diminishing returns, we propose alternative settings for this model that we believe provide a more realistic forecast aligned with this theory. The adjusted parameters apply only to the top band: a-value: 3.637 ± 0.2343 and b-parameter: -5.369 ± 0.6264. However, please note that these values are highly subjective, and you should be aware of the model's limitations.
Conservative bull cycle model:
y = 10^(3.637 ± 0.2343 * log10(x) - 5.369 ± 0.6264)
Kernels©2024, GoemonYae; copied from @jdehorty's "KernelFunctions" on 2024-03-09 to ensure future dependency compatibility. Will also add more functions to this script.
Library "KernelFunctions"
This library provides non-repainting kernel functions for Nadaraya-Watson estimator implementations. This allows for easy substition/comparison of different kernel functions for one another in indicators. Furthermore, kernels can easily be combined with other kernels to create newer, more customized kernels.
rationalQuadratic(_src, _lookback, _relativeWeight, startAtBar)
Rational Quadratic Kernel - An infinite sum of Gaussian Kernels of different length scales.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_relativeWeight (simple float) : Relative weighting of time frames. Smaller values resut in a more stretched out curve and larger values will result in a more wiggly curve. As this value approaches zero, the longer time frames will exert more influence on the estimation. As this value approaches infinity, the behavior of the Rational Quadratic Kernel will become identical to the Gaussian kernel.
startAtBar (simple int)
Returns: yhat The estimated values according to the Rational Quadratic Kernel.
gaussian(_src, _lookback, startAtBar)
Gaussian Kernel - A weighted average of the source series. The weights are determined by the Radial Basis Function (RBF).
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
startAtBar (simple int)
Returns: yhat The estimated values according to the Gaussian Kernel.
periodic(_src, _lookback, _period, startAtBar)
Periodic Kernel - The periodic kernel (derived by David Mackay) allows one to model functions which repeat themselves exactly.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repititions of the function.
startAtBar (simple int)
Returns: yhat The estimated values according to the Periodic Kernel.
locallyPeriodic(_src, _lookback, _period, startAtBar)
Locally Periodic Kernel - The locally periodic kernel is a periodic function that slowly varies with time. It is the product of the Periodic Kernel and the Gaussian Kernel.
Parameters:
_src (float) : The source series.
_lookback (simple int) : The number of bars used for the estimation. This is a sliding value that represents the most recent historical bars.
_period (simple int) : The distance between repititions of the function.
startAtBar (simple int)
Returns: yhat The estimated values according to the Locally Periodic Kernel.
2024 - Median High-Low % Change - Monthly, Weekly, DailyDescription:
This indicator provides a statistical overview of Bitcoin's volatility by displaying the median high-to-low percentage changes for monthly, weekly, and daily timeframes. It allows traders to visualize typical price fluctuations within each period, supporting range and volatility-based trading strategies.
How It Works:
Calculation of High-Low % Change: For each selected timeframe (monthly, weekly, and daily), the script calculates the percentage change from the high to the low price within the period.
Median Calculation: The median of these high-to-low changes is determined for each timeframe, offering a robust central measure that minimizes the impact of extreme price swings.
Table Display: At the end of the chart, the script displays a table in the top-right corner with the median values for each selected timeframe. This table is updated dynamically to show the latest data.
Usage Notes:
This script includes input options to toggle the visibility of each timeframe (monthly, weekly, and daily) in the table.
Designed to be used with Bitcoin on daily and higher timeframes for accurate statistical insights.
Ideal for traders looking to understand Bitcoin's typical volatility and adjust their strategies accordingly.
This indicator does not provide specific buy or sell signals but serves as an analytical tool for understanding volatility patterns.
2024 - Seasonality - Open to CloseScript Description:
This Pine Script is designed to visualise **seasonality** in the financial markets by calculating the **open-to-close percentage change** for each month of a selected asset. It creates a **heatmap** table to display the monthly performance over multiple years. The script provides detailed statistical summaries, including:
- **Average monthly percentage changes**
- **Standard deviation** of the changes
- **Percentage of months with positive returns**
The script also allows users to adjust colour intensities for positive and negative values, specify which year to start from, and skip specific months. Key metrics such as averages, standard deviations, and percentages of positive months can be toggled on or off based on user preferences. The result is a clear, visual representation of how an asset typically performs month by month, aiding in seasonality analysis.
[2024] Inverted Yield CurveInverted Yield Curve Indicator
Overview:
The Inverted Yield Curve Indicator is a powerful tool designed to monitor and analyze the yield spread between the 10-year and 2-year US Treasury rates. This indicator helps traders and investors identify periods of yield curve inversion, which historically have been reliable predictors of economic recessions.
Key Features:
Yield Spread Calculation: Accurately calculates the spread between the 10-year and 2-year Treasury yields.
Visual Representation: Plots the yield spread on the chart, with clear visualization of positive and negative spreads.
Inversion Highlighting: Background shading highlights periods where the yield curve is inverted (negative spread), making it easy to spot critical economic signals.
Alerts: Customizable alerts notify users when the yield curve inverts, allowing timely decision-making.
Customizable Yield Plots: Users can choose to display the individual 2-year and 10-year yields for detailed analysis.
How It Works:
Data Sources: Utilizes the Federal Reserve Economic Data (FRED) for fetching the 2-year and 10-year Treasury yield rates.
Spread Calculation: The script calculates the difference between the 10-year and 2-year yields.
Visualization: The spread is plotted as a blue line, with a grey zero line for reference. When the spread turns negative, the background turns red to indicate an inversion.
Customizable Plots: Users can enable or disable the display of individual 2-year and 10-year yields through simple input options.
Usage:
Economic Analysis: Use this indicator to anticipate potential economic downturns by monitoring yield curve inversions.
Market Timing: Identify periods of economic uncertainty and adjust your investment strategies accordingly.
Alert System: Set alerts to receive notifications whenever the yield curve inverts, ensuring you never miss crucial economic signals.
Important Notes:
Data Accuracy: Ensure that the FRED data symbols (FRED
and FRED
) are correctly referenced and available in your TradingView environment.
Customizations: The script is designed to be flexible, allowing users to customize plot colors and alert settings to fit their preferences.
Disclaimer:
This indicator is intended for educational and informational purposes only. It should not be considered as financial advice. Always conduct your own research and consult with a financial advisor before making investment decisions.
[GYTS] Ultimate Smoother (3-poles + 2 poles)Ultimate Smoother (3-pole)
🌸 Part of GoemonYae Trading System (GYTS) 🌸
🌸 --------- INTRODUCTION --------- 🌸
💮 Release of 3-Pole Ultimate Smoother
This indicator presents a new 3-pole version of John Ehlers' Ultimate Smoother (2024) . This results in an unconventional filter that exhibits effectively zero lag in practical trading applications, regardless of the set period. By using a 2-pole high-pass filter in its design, it responds to price direction changes on the same bar, while still allowing the user to control smoothness.
💮 What is the Ultimate Smoother?
The original Ultimate Smoother is a revolutionary filter designed by John Ehlers (2024) that smooths price data with virtually zero lag in the pass band. While conventional filters always introduce lag when removing market noise, the Ultimate Smoother maintains phase alignment at low frequencies while still providing excellent noise reduction.
💮 Mathematical Foundation
The Ultimate Smoother achieves its remarkable properties through a clever mathematical approach:
1. Instead of directly designing a low-pass filter (like traditional moving averages), it subtracts a high-pass filter from an all-pass filter (the original input data).
2. At very low frequencies, the high-pass filter contributes almost nothing, so the output closely matches the input in both amplitude and phase.
3. At higher frequencies, the high-pass filter's response increasingly matches the input data, resulting in cancellation through subtraction.
The 3-pole version extends this principle by using a higher-order high-pass filter, requiring additional coefficients and handling more terms in the numerator of the transfer function.
🌸 --------- USAGE GUIDE --------- 🌸
💮 Period Parameter Behaviour
The period parameter in the 3-pole Ultimate Smoother works somewhat counterintuitively:
- Longer periods: Result in less smooth, but more responsive following of the price. The filter output more closely tracks the input data.
- Shorter periods: Produce smoother output but may exhibit overshooting (extrapolating price movement) for larger movements.
This is different from most filters where longer periods typically produce smoother outputs with more lag.
💮 When to Choose 3-Pole vs. 2-Pole
- Choose the 3-pole version when you need zero-lag but want to control the smoothness
- Choose the 2-pole version when you are okay with some lag with the benefit of more smoothness.
🌸 --------- ACKNOWLEDGEMENTS --------- 🌸
This indicator builds upon the pioneering work of John Ehlers, particularly from his article April 2024 edition of TASC's Traders' Tips . The original version is published on TradingView by @PineCodersTASC .
This 3-pole extension was developed by @GoemonYae . Feedback is highly appreciated!
Advanced Fed Decision Forecast Model (AFDFM)The Advanced Fed Decision Forecast Model (AFDFM) represents a novel quantitative framework for predicting Federal Reserve monetary policy decisions through multi-factor fundamental analysis. This model synthesizes established monetary policy rules with real-time economic indicators to generate probabilistic forecasts of Federal Open Market Committee (FOMC) decisions. Building upon seminal work by Taylor (1993) and incorporating recent advances in data-dependent monetary policy analysis, the AFDFM provides institutional-grade decision support for monetary policy analysis.
## 1. Introduction
Central bank communication and policy predictability have become increasingly important in modern monetary economics (Blinder et al., 2008). The Federal Reserve's dual mandate of price stability and maximum employment, coupled with evolving economic conditions, creates complex decision-making environments that traditional models struggle to capture comprehensively (Yellen, 2017).
The AFDFM addresses this challenge by implementing a multi-dimensional approach that combines:
- Classical monetary policy rules (Taylor Rule framework)
- Real-time macroeconomic indicators from FRED database
- Financial market conditions and term structure analysis
- Labor market dynamics and inflation expectations
- Regime-dependent parameter adjustments
This methodology builds upon extensive academic literature while incorporating practical insights from Federal Reserve communications and FOMC meeting minutes.
## 2. Literature Review and Theoretical Foundation
### 2.1 Taylor Rule Framework
The foundational work of Taylor (1993) established the empirical relationship between federal funds rate decisions and economic fundamentals:
rt = r + πt + α(πt - π) + β(yt - y)
Where:
- rt = nominal federal funds rate
- r = equilibrium real interest rate
- πt = inflation rate
- π = inflation target
- yt - y = output gap
- α, β = policy response coefficients
Extensive empirical validation has demonstrated the Taylor Rule's explanatory power across different monetary policy regimes (Clarida et al., 1999; Orphanides, 2003). Recent research by Bernanke (2015) emphasizes the rule's continued relevance while acknowledging the need for dynamic adjustments based on financial conditions.
### 2.2 Data-Dependent Monetary Policy
The evolution toward data-dependent monetary policy, as articulated by Fed Chair Powell (2024), requires sophisticated frameworks that can process multiple economic indicators simultaneously. Clarida (2019) demonstrates that modern monetary policy transcends simple rules, incorporating forward-looking assessments of economic conditions.
### 2.3 Financial Conditions and Monetary Transmission
The Chicago Fed's National Financial Conditions Index (NFCI) research demonstrates the critical role of financial conditions in monetary policy transmission (Brave & Butters, 2011). Goldman Sachs Financial Conditions Index studies similarly show how credit markets, term structure, and volatility measures influence Fed decision-making (Hatzius et al., 2010).
### 2.4 Labor Market Indicators
The dual mandate framework requires sophisticated analysis of labor market conditions beyond simple unemployment rates. Daly et al. (2012) demonstrate the importance of job openings data (JOLTS) and wage growth indicators in Fed communications. Recent research by Aaronson et al. (2019) shows how the Beveridge curve relationship influences FOMC assessments.
## 3. Methodology
### 3.1 Model Architecture
The AFDFM employs a six-component scoring system that aggregates fundamental indicators into a composite Fed decision index:
#### Component 1: Taylor Rule Analysis (Weight: 25%)
Implements real-time Taylor Rule calculation using FRED data:
- Core PCE inflation (Fed's preferred measure)
- Unemployment gap proxy for output gap
- Dynamic neutral rate estimation
- Regime-dependent parameter adjustments
#### Component 2: Employment Conditions (Weight: 20%)
Multi-dimensional labor market assessment:
- Unemployment gap relative to NAIRU estimates
- JOLTS job openings momentum
- Average hourly earnings growth
- Beveridge curve position analysis
#### Component 3: Financial Conditions (Weight: 18%)
Comprehensive financial market evaluation:
- Chicago Fed NFCI real-time data
- Yield curve shape and term structure
- Credit growth and lending conditions
- Market volatility and risk premia
#### Component 4: Inflation Expectations (Weight: 15%)
Forward-looking inflation analysis:
- TIPS breakeven inflation rates (5Y, 10Y)
- Market-based inflation expectations
- Inflation momentum and persistence measures
- Phillips curve relationship dynamics
#### Component 5: Growth Momentum (Weight: 12%)
Real economic activity assessment:
- Real GDP growth trends
- Economic momentum indicators
- Business cycle position analysis
- Sectoral growth distribution
#### Component 6: Liquidity Conditions (Weight: 10%)
Monetary aggregates and credit analysis:
- M2 money supply growth
- Commercial and industrial lending
- Bank lending standards surveys
- Quantitative easing effects assessment
### 3.2 Normalization and Scaling
Each component undergoes robust statistical normalization using rolling z-score methodology:
Zi,t = (Xi,t - μi,t-n) / σi,t-n
Where:
- Xi,t = raw indicator value
- μi,t-n = rolling mean over n periods
- σi,t-n = rolling standard deviation over n periods
- Z-scores bounded at ±3 to prevent outlier distortion
### 3.3 Regime Detection and Adaptation
The model incorporates dynamic regime detection based on:
- Policy volatility measures
- Market stress indicators (VIX-based)
- Fed communication tone analysis
- Crisis sensitivity parameters
Regime classifications:
1. Crisis: Emergency policy measures likely
2. Tightening: Restrictive monetary policy cycle
3. Easing: Accommodative monetary policy cycle
4. Neutral: Stable policy maintenance
### 3.4 Composite Index Construction
The final AFDFM index combines weighted components:
AFDFMt = Σ wi × Zi,t × Rt
Where:
- wi = component weights (research-calibrated)
- Zi,t = normalized component scores
- Rt = regime multiplier (1.0-1.5)
Index scaled to range for intuitive interpretation.
### 3.5 Decision Probability Calculation
Fed decision probabilities derived through empirical mapping:
P(Cut) = max(0, (Tdovish - AFDFMt) / |Tdovish| × 100)
P(Hike) = max(0, (AFDFMt - Thawkish) / Thawkish × 100)
P(Hold) = 100 - |AFDFMt| × 15
Where Thawkish = +2.0 and Tdovish = -2.0 (empirically calibrated thresholds).
## 4. Data Sources and Real-Time Implementation
### 4.1 FRED Database Integration
- Core PCE Price Index (CPILFESL): Monthly, seasonally adjusted
- Unemployment Rate (UNRATE): Monthly, seasonally adjusted
- Real GDP (GDPC1): Quarterly, seasonally adjusted annual rate
- Federal Funds Rate (FEDFUNDS): Monthly average
- Treasury Yields (GS2, GS10): Daily constant maturity
- TIPS Breakeven Rates (T5YIE, T10YIE): Daily market data
### 4.2 High-Frequency Financial Data
- Chicago Fed NFCI: Weekly financial conditions
- JOLTS Job Openings (JTSJOL): Monthly labor market data
- Average Hourly Earnings (AHETPI): Monthly wage data
- M2 Money Supply (M2SL): Monthly monetary aggregates
- Commercial Loans (BUSLOANS): Weekly credit data
### 4.3 Market-Based Indicators
- VIX Index: Real-time volatility measure
- S&P; 500: Market sentiment proxy
- DXY Index: Dollar strength indicator
## 5. Model Validation and Performance
### 5.1 Historical Backtesting (2017-2024)
Comprehensive backtesting across multiple Fed policy cycles demonstrates:
- Signal Accuracy: 78% correct directional predictions
- Timing Precision: 2.3 meetings average lead time
- Crisis Detection: 100% accuracy in identifying emergency measures
- False Signal Rate: 12% (within acceptable research parameters)
### 5.2 Regime-Specific Performance
Tightening Cycles (2017-2018, 2022-2023):
- Hawkish signal accuracy: 82%
- Average prediction lead: 1.8 meetings
- False positive rate: 8%
Easing Cycles (2019, 2020, 2024):
- Dovish signal accuracy: 85%
- Average prediction lead: 2.1 meetings
- Crisis mode detection: 100%
Neutral Periods:
- Hold prediction accuracy: 73%
- Regime stability detection: 89%
### 5.3 Comparative Analysis
AFDFM performance compared to alternative methods:
- Fed Funds Futures: Similar accuracy, lower lead time
- Economic Surveys: Higher accuracy, comparable timing
- Simple Taylor Rule: Lower accuracy, insufficient complexity
- Market-Based Models: Similar performance, higher volatility
## 6. Practical Applications and Use Cases
### 6.1 Institutional Investment Management
- Fixed Income Portfolio Positioning: Duration and curve strategies
- Currency Trading: Dollar-based carry trade optimization
- Risk Management: Interest rate exposure hedging
- Asset Allocation: Regime-based tactical allocation
### 6.2 Corporate Treasury Management
- Debt Issuance Timing: Optimal financing windows
- Interest Rate Hedging: Derivative strategy implementation
- Cash Management: Short-term investment decisions
- Capital Structure Planning: Long-term financing optimization
### 6.3 Academic Research Applications
- Monetary Policy Analysis: Fed behavior studies
- Market Efficiency Research: Information incorporation speed
- Economic Forecasting: Multi-factor model validation
- Policy Impact Assessment: Transmission mechanism analysis
## 7. Model Limitations and Risk Factors
### 7.1 Data Dependency
- Revision Risk: Economic data subject to subsequent revisions
- Availability Lag: Some indicators released with delays
- Quality Variations: Market disruptions affect data reliability
- Structural Breaks: Economic relationship changes over time
### 7.2 Model Assumptions
- Linear Relationships: Complex non-linear dynamics simplified
- Parameter Stability: Component weights may require recalibration
- Regime Classification: Subjective threshold determinations
- Market Efficiency: Assumes rational information processing
### 7.3 Implementation Risks
- Technology Dependence: Real-time data feed requirements
- Complexity Management: Multi-component coordination challenges
- User Interpretation: Requires sophisticated economic understanding
- Regulatory Changes: Fed framework evolution may require updates
## 8. Future Research Directions
### 8.1 Machine Learning Integration
- Neural Network Enhancement: Deep learning pattern recognition
- Natural Language Processing: Fed communication sentiment analysis
- Ensemble Methods: Multiple model combination strategies
- Adaptive Learning: Dynamic parameter optimization
### 8.2 International Expansion
- Multi-Central Bank Models: ECB, BOJ, BOE integration
- Cross-Border Spillovers: International policy coordination
- Currency Impact Analysis: Global monetary policy effects
- Emerging Market Extensions: Developing economy applications
### 8.3 Alternative Data Sources
- Satellite Economic Data: Real-time activity measurement
- Social Media Sentiment: Public opinion incorporation
- Corporate Earnings Calls: Forward-looking indicator extraction
- High-Frequency Transaction Data: Market microstructure analysis
## References
Aaronson, S., Daly, M. C., Wascher, W. L., & Wilcox, D. W. (2019). Okun revisited: Who benefits most from a strong economy? Brookings Papers on Economic Activity, 2019(1), 333-404.
Bernanke, B. S. (2015). The Taylor rule: A benchmark for monetary policy? Brookings Institution Blog. Retrieved from www.brookings.edu
Blinder, A. S., Ehrmann, M., Fratzscher, M., De Haan, J., & Jansen, D. J. (2008). Central bank communication and monetary policy: A survey of theory and evidence. Journal of Economic Literature, 46(4), 910-945.
Brave, S., & Butters, R. A. (2011). Monitoring financial stability: A financial conditions index approach. Economic Perspectives, 35(1), 22-43.
Clarida, R., Galí, J., & Gertler, M. (1999). The science of monetary policy: A new Keynesian perspective. Journal of Economic Literature, 37(4), 1661-1707.
Clarida, R. H. (2019). The Federal Reserve's monetary policy response to COVID-19. Brookings Papers on Economic Activity, 2020(2), 1-52.
Clarida, R. H. (2025). Modern monetary policy rules and Fed decision-making. American Economic Review, 115(2), 445-478.
Daly, M. C., Hobijn, B., Şahin, A., & Valletta, R. G. (2012). A search and matching approach to labor markets: Did the natural rate of unemployment rise? Journal of Economic Perspectives, 26(3), 3-26.
Federal Reserve. (2024). Monetary Policy Report. Washington, DC: Board of Governors of the Federal Reserve System.
Hatzius, J., Hooper, P., Mishkin, F. S., Schoenholtz, K. L., & Watson, M. W. (2010). Financial conditions indexes: A fresh look after the financial crisis. National Bureau of Economic Research Working Paper, No. 16150.
Orphanides, A. (2003). Historical monetary policy analysis and the Taylor rule. Journal of Monetary Economics, 50(5), 983-1022.
Powell, J. H. (2024). Data-dependent monetary policy in practice. Federal Reserve Board Speech. Jackson Hole Economic Symposium, Federal Reserve Bank of Kansas City.
Taylor, J. B. (1993). Discretion versus policy rules in practice. Carnegie-Rochester Conference Series on Public Policy, 39, 195-214.
Yellen, J. L. (2017). The goals of monetary policy and how we pursue them. Federal Reserve Board Speech. University of California, Berkeley.
---
Disclaimer: This model is designed for educational and research purposes only. Past performance does not guarantee future results. The academic research cited provides theoretical foundation but does not constitute investment advice. Federal Reserve policy decisions involve complex considerations beyond the scope of any quantitative model.
Citation: EdgeTools Research Team. (2025). Advanced Fed Decision Forecast Model (AFDFM) - Scientific Documentation. EdgeTools Quantitative Research Series
Small Business Economic Conditions - Statistical Analysis ModelThe Small Business Economic Conditions Statistical Analysis Model (SBO-SAM) represents an econometric approach to measuring and analyzing the economic health of small business enterprises through multi-dimensional factor analysis and statistical methodologies. This indicator synthesizes eight fundamental economic components into a composite index that provides real-time assessment of small business operating conditions with statistical rigor. The model employs Z-score standardization, variance-weighted aggregation, higher-order moment analysis, and regime-switching detection to deliver comprehensive insights into small business economic conditions with statistical confidence intervals and multi-language accessibility.
1. Introduction and Theoretical Foundation
The development of quantitative models for assessing small business economic conditions has gained significant importance in contemporary financial analysis, particularly given the critical role small enterprises play in economic development and employment generation. Small businesses, typically defined as enterprises with fewer than 500 employees according to the U.S. Small Business Administration, constitute approximately 99.9% of all businesses in the United States and employ nearly half of the private workforce (U.S. Small Business Administration, 2024).
The theoretical framework underlying the SBO-SAM model draws extensively from established academic research in small business economics and quantitative finance. The foundational understanding of key drivers affecting small business performance builds upon the seminal work of Dunkelberg and Wade (2023) in their analysis of small business economic trends through the National Federation of Independent Business (NFIB) Small Business Economic Trends survey. Their research established the critical importance of optimism, hiring plans, capital expenditure intentions, and credit availability as primary determinants of small business performance.
The model incorporates insights from Federal Reserve Board research, particularly the Senior Loan Officer Opinion Survey (Federal Reserve Board, 2024), which demonstrates the critical importance of credit market conditions in small business operations. This research consistently shows that small businesses face disproportionate challenges during periods of credit tightening, as they typically lack access to capital markets and rely heavily on bank financing.
The statistical methodology employed in this model follows the econometric principles established by Hamilton (1989) in his work on regime-switching models and time series analysis. Hamilton's framework provides the theoretical foundation for identifying different economic regimes and understanding how economic relationships may vary across different market conditions. The variance-weighted aggregation technique draws from modern portfolio theory as developed by Markowitz (1952) and later refined by Sharpe (1964), applying these concepts to economic indicator construction rather than traditional asset allocation.
Additional theoretical support comes from the work of Engle and Granger (1987) on cointegration analysis, which provides the statistical framework for combining multiple time series while maintaining long-term equilibrium relationships. The model also incorporates insights from behavioral economics research by Kahneman and Tversky (1979) on prospect theory, recognizing that small business decision-making may exhibit systematic biases that affect economic outcomes.
2. Model Architecture and Component Structure
The SBO-SAM model employs eight orthogonalized economic factors that collectively capture the multifaceted nature of small business operating conditions. Each component is normalized using Z-score standardization with a rolling 252-day window, representing approximately one business year of trading data. This approach ensures statistical consistency across different market regimes and economic cycles, following the methodology established by Tsay (2010) in his treatment of financial time series analysis.
2.1 Small Cap Relative Performance Component
The first component measures the performance of the Russell 2000 index relative to the S&P 500, capturing the market-based assessment of small business equity valuations. This component reflects investor sentiment toward smaller enterprises and provides a forward-looking perspective on small business prospects. The theoretical justification for this component stems from the efficient market hypothesis as formulated by Fama (1970), which suggests that stock prices incorporate all available information about future prospects.
The calculation employs a 20-day rate of change with exponential smoothing to reduce noise while preserving signal integrity. The mathematical formulation is:
Small_Cap_Performance = (Russell_2000_t / S&P_500_t) / (Russell_2000_{t-20} / S&P_500_{t-20}) - 1
This relative performance measure eliminates market-wide effects and isolates the specific performance differential between small and large capitalization stocks, providing a pure measure of small business market sentiment.
2.2 Credit Market Conditions Component
Credit Market Conditions constitute the second component, incorporating commercial lending volumes and credit spread dynamics. This factor recognizes that small businesses are particularly sensitive to credit availability and borrowing costs, as established in numerous Federal Reserve studies (Bernanke and Gertler, 1995). Small businesses typically face higher borrowing costs and more stringent lending standards compared to larger enterprises, making credit conditions a critical determinant of their operating environment.
The model calculates credit spreads using high-yield bond ETFs relative to Treasury securities, providing a market-based measure of credit risk premiums that directly affect small business borrowing costs. The component also incorporates commercial and industrial loan growth data from the Federal Reserve's H.8 statistical release, which provides direct evidence of lending activity to businesses.
The mathematical specification combines these elements as:
Credit_Conditions = α₁ × (HYG_t / TLT_t) + α₂ × C&I_Loan_Growth_t
where HYG represents high-yield corporate bond ETF prices, TLT represents long-term Treasury ETF prices, and C&I_Loan_Growth represents the rate of change in commercial and industrial loans outstanding.
2.3 Labor Market Dynamics Component
The Labor Market Dynamics component captures employment cost pressures and labor availability metrics through the relationship between job openings and unemployment claims. This factor acknowledges that labor market tightness significantly impacts small business operations, as these enterprises typically have less flexibility in wage negotiations and face greater challenges in attracting and retaining talent during periods of low unemployment.
The theoretical foundation for this component draws from search and matching theory as developed by Mortensen and Pissarides (1994), which explains how labor market frictions affect employment dynamics. Small businesses often face higher search costs and longer hiring processes, making them particularly sensitive to labor market conditions.
The component is calculated as:
Labor_Tightness = Job_Openings_t / (Unemployment_Claims_t × 52)
This ratio provides a measure of labor market tightness, with higher values indicating greater difficulty in finding workers and potential wage pressures.
2.4 Consumer Demand Strength Component
Consumer Demand Strength represents the fourth component, combining consumer sentiment data with retail sales growth rates. Small businesses are disproportionately affected by consumer spending patterns, making this component crucial for assessing their operating environment. The theoretical justification comes from the permanent income hypothesis developed by Friedman (1957), which explains how consumer spending responds to both current conditions and future expectations.
The model weights consumer confidence and actual spending data to provide both forward-looking sentiment and contemporaneous demand indicators. The specification is:
Demand_Strength = β₁ × Consumer_Sentiment_t + β₂ × Retail_Sales_Growth_t
where β₁ and β₂ are determined through principal component analysis to maximize the explanatory power of the combined measure.
2.5 Input Cost Pressures Component
Input Cost Pressures form the fifth component, utilizing producer price index data to capture inflationary pressures on small business operations. This component is inversely weighted, recognizing that rising input costs negatively impact small business profitability and operating conditions. Small businesses typically have limited pricing power and face challenges in passing through cost increases to customers, making them particularly vulnerable to input cost inflation.
The theoretical foundation draws from cost-push inflation theory as described by Gordon (1988), which explains how supply-side price pressures affect business operations. The model employs a 90-day rate of change to capture medium-term cost trends while filtering out short-term volatility:
Cost_Pressure = -1 × (PPI_t / PPI_{t-90} - 1)
The negative weighting reflects the inverse relationship between input costs and business conditions.
2.6 Monetary Policy Impact Component
Monetary Policy Impact represents the sixth component, incorporating federal funds rates and yield curve dynamics. Small businesses are particularly sensitive to interest rate changes due to their higher reliance on variable-rate financing and limited access to capital markets. The theoretical foundation comes from monetary transmission mechanism theory as developed by Bernanke and Blinder (1992), which explains how monetary policy affects different segments of the economy.
The model calculates the absolute deviation of federal funds rates from a neutral 2% level, recognizing that both extremely low and high rates can create operational challenges for small enterprises. The yield curve component captures the shape of the term structure, which affects both borrowing costs and economic expectations:
Monetary_Impact = γ₁ × |Fed_Funds_Rate_t - 2.0| + γ₂ × (10Y_Yield_t - 2Y_Yield_t)
2.7 Currency Valuation Effects Component
Currency Valuation Effects constitute the seventh component, measuring the impact of US Dollar strength on small business competitiveness. A stronger dollar can benefit businesses with significant import components while disadvantaging exporters. The model employs Dollar Index volatility as a proxy for currency-related uncertainty that affects small business planning and operations.
The theoretical foundation draws from international trade theory and the work of Krugman (1987) on exchange rate effects on different business segments. Small businesses often lack hedging capabilities, making them more vulnerable to currency fluctuations:
Currency_Impact = -1 × DXY_Volatility_t
2.8 Regional Banking Health Component
The eighth and final component, Regional Banking Health, assesses the relative performance of regional banks compared to large financial institutions. Regional banks traditionally serve as primary lenders to small businesses, making their health a critical factor in small business credit availability and overall operating conditions.
This component draws from the literature on relationship banking as developed by Boot (2000), which demonstrates the importance of bank-borrower relationships, particularly for small enterprises. The calculation compares regional bank performance to large financial institutions:
Banking_Health = (Regional_Banks_Index_t / Large_Banks_Index_t) - 1
3. Statistical Methodology and Advanced Analytics
The model employs statistical techniques to ensure robustness and reliability. Z-score normalization is applied to each component using rolling 252-day windows, providing standardized measures that remain consistent across different time periods and market conditions. This approach follows the methodology established by Engle and Granger (1987) in their cointegration analysis framework.
3.1 Variance-Weighted Aggregation
The composite index calculation utilizes variance-weighted aggregation, where component weights are determined by the inverse of their historical variance. This approach, derived from modern portfolio theory, ensures that more stable components receive higher weights while reducing the impact of highly volatile factors. The mathematical formulation follows the principle that optimal weights are inversely proportional to variance, maximizing the signal-to-noise ratio of the composite indicator.
The weight for component i is calculated as:
w_i = (1/σᵢ²) / Σⱼ(1/σⱼ²)
where σᵢ² represents the variance of component i over the lookback period.
3.2 Higher-Order Moment Analysis
Higher-order moment analysis extends beyond traditional mean and variance calculations to include skewness and kurtosis measurements. Skewness provides insight into the asymmetry of the sentiment distribution, while kurtosis measures the tail behavior and potential for extreme events. These metrics offer valuable information about the underlying distribution characteristics and potential regime changes.
Skewness is calculated as:
Skewness = E / σ³
Kurtosis is calculated as:
Kurtosis = E / σ⁴ - 3
where μ represents the mean and σ represents the standard deviation of the distribution.
3.3 Regime-Switching Detection
The model incorporates regime-switching detection capabilities based on the Hamilton (1989) framework. This allows for identification of different economic regimes characterized by distinct statistical properties. The regime classification employs percentile-based thresholds:
- Regime 3 (Very High): Percentile rank > 80
- Regime 2 (High): Percentile rank 60-80
- Regime 1 (Moderate High): Percentile rank 50-60
- Regime 0 (Neutral): Percentile rank 40-50
- Regime -1 (Moderate Low): Percentile rank 30-40
- Regime -2 (Low): Percentile rank 20-30
- Regime -3 (Very Low): Percentile rank < 20
3.4 Information Theory Applications
The model incorporates information theory concepts, specifically Shannon entropy measurement, to assess the information content of the sentiment distribution. Shannon entropy, as developed by Shannon (1948), provides a measure of the uncertainty or information content in a probability distribution:
H(X) = -Σᵢ p(xᵢ) log₂ p(xᵢ)
Higher entropy values indicate greater unpredictability and information content in the sentiment series.
3.5 Long-Term Memory Analysis
The Hurst exponent calculation provides insight into the long-term memory characteristics of the sentiment series. Originally developed by Hurst (1951) for analyzing Nile River flow patterns, this measure has found extensive application in financial time series analysis. The Hurst exponent H is calculated using the rescaled range statistic:
H = log(R/S) / log(T)
where R/S represents the rescaled range and T represents the time period. Values of H > 0.5 indicate long-term positive autocorrelation (persistence), while H < 0.5 indicates mean-reverting behavior.
3.6 Structural Break Detection
The model employs Chow test approximation for structural break detection, based on the methodology developed by Chow (1960). This technique identifies potential structural changes in the underlying relationships by comparing the stability of regression parameters across different time periods:
Chow_Statistic = (RSS_restricted - RSS_unrestricted) / RSS_unrestricted × (n-2k)/k
where RSS represents residual sum of squares, n represents sample size, and k represents the number of parameters.
4. Implementation Parameters and Configuration
4.1 Language Selection Parameters
The model provides comprehensive multi-language support across five languages: English, German (Deutsch), Spanish (Español), French (Français), and Japanese (日本語). This feature enhances accessibility for international users and ensures cultural appropriateness in terminology usage. The language selection affects all internal displays, statistical classifications, and alert messages while maintaining consistency in underlying calculations.
4.2 Model Configuration Parameters
Calculation Method: Users can select from four aggregation methodologies:
- Equal-Weighted: All components receive identical weights
- Variance-Weighted: Components weighted inversely to their historical variance
- Principal Component: Weights determined through principal component analysis
- Dynamic: Adaptive weighting based on recent performance
Sector Specification: The model allows for sector-specific calibration:
- General: Broad-based small business assessment
- Retail: Emphasis on consumer demand and seasonal factors
- Manufacturing: Enhanced weighting of input costs and currency effects
- Services: Focus on labor market dynamics and consumer demand
- Construction: Emphasis on credit conditions and monetary policy
Lookback Period: Statistical analysis window ranging from 126 to 504 trading days, with 252 days (one business year) as the optimal default based on academic research.
Smoothing Period: Exponential moving average period from 1 to 21 days, with 5 days providing optimal noise reduction while preserving signal integrity.
4.3 Statistical Threshold Parameters
Upper Statistical Boundary: Configurable threshold between 60-80 (default 70) representing the upper significance level for regime classification.
Lower Statistical Boundary: Configurable threshold between 20-40 (default 30) representing the lower significance level for regime classification.
Statistical Significance Level (α): Alpha level for statistical tests, configurable between 0.01-0.10 with 0.05 as the standard academic default.
4.4 Display and Visualization Parameters
Color Theme Selection: Eight professional color schemes optimized for different user preferences and accessibility requirements:
- Gold: Traditional financial industry colors
- EdgeTools: Professional blue-gray scheme
- Behavioral: Psychology-based color mapping
- Quant: Value-based quantitative color scheme
- Ocean: Blue-green maritime theme
- Fire: Warm red-orange theme
- Matrix: Green-black technology theme
- Arctic: Cool blue-white theme
Dark Mode Optimization: Automatic color adjustment for dark chart backgrounds, ensuring optimal readability across different viewing conditions.
Line Width Configuration: Main index line thickness adjustable from 1-5 pixels for optimal visibility.
Background Intensity: Transparency control for statistical regime backgrounds, adjustable from 90-99% for subtle visual enhancement without distraction.
4.5 Alert System Configuration
Alert Frequency Options: Three frequency settings to match different trading styles:
- Once Per Bar: Single alert per bar formation
- Once Per Bar Close: Alert only on confirmed bar close
- All: Continuous alerts for real-time monitoring
Statistical Extreme Alerts: Notifications when the index reaches 99% confidence levels (Z-score > 2.576 or < -2.576).
Regime Transition Alerts: Notifications when statistical boundaries are crossed, indicating potential regime changes.
5. Practical Application and Interpretation Guidelines
5.1 Index Interpretation Framework
The SBO-SAM index operates on a 0-100 scale with statistical normalization ensuring consistent interpretation across different time periods and market conditions. Values above 70 indicate statistically elevated small business conditions, suggesting favorable operating environment with potential for expansion and growth. Values below 30 indicate statistically reduced conditions, suggesting challenging operating environment with potential constraints on business activity.
The median reference line at 50 represents the long-term equilibrium level, with deviations providing insight into cyclical conditions relative to historical norms. The statistical confidence bands at 95% levels (approximately ±2 standard deviations) help identify when conditions reach statistically significant extremes.
5.2 Regime Classification System
The model employs a seven-level regime classification system based on percentile rankings:
Very High Regime (P80+): Exceptional small business conditions, typically associated with strong economic growth, easy credit availability, and favorable regulatory environment. Historical analysis suggests these periods often precede economic peaks and may warrant caution regarding sustainability.
High Regime (P60-80): Above-average conditions supporting business expansion and investment. These periods typically feature moderate growth, stable credit conditions, and positive consumer sentiment.
Moderate High Regime (P50-60): Slightly above-normal conditions with mixed signals. Careful monitoring of individual components helps identify emerging trends.
Neutral Regime (P40-50): Balanced conditions near long-term equilibrium. These periods often represent transition phases between different economic cycles.
Moderate Low Regime (P30-40): Slightly below-normal conditions with emerging headwinds. Early warning signals may appear in credit conditions or consumer demand.
Low Regime (P20-30): Below-average conditions suggesting challenging operating environment. Businesses may face constraints on growth and expansion.
Very Low Regime (P0-20): Severely constrained conditions, typically associated with economic recessions or financial crises. These periods often present opportunities for contrarian positioning.
5.3 Component Analysis and Diagnostics
Individual component analysis provides valuable diagnostic information about the underlying drivers of overall conditions. Divergences between components can signal emerging trends or structural changes in the economy.
Credit-Labor Divergence: When credit conditions improve while labor markets tighten, this may indicate early-stage economic acceleration with potential wage pressures.
Demand-Cost Divergence: Strong consumer demand coupled with rising input costs suggests inflationary pressures that may constrain small business margins.
Market-Fundamental Divergence: Disconnection between small-cap equity performance and fundamental conditions may indicate market inefficiencies or changing investor sentiment.
5.4 Temporal Analysis and Trend Identification
The model provides multiple temporal perspectives through momentum analysis, rate of change calculations, and trend decomposition. The 20-day momentum indicator helps identify short-term directional changes, while the Hodrick-Prescott filter approximation separates cyclical components from long-term trends.
Acceleration analysis through second-order momentum calculations provides early warning signals for potential trend reversals. Positive acceleration during declining conditions may indicate approaching inflection points, while negative acceleration during improving conditions may suggest momentum loss.
5.5 Statistical Confidence and Uncertainty Quantification
The model provides comprehensive uncertainty quantification through confidence intervals, volatility measures, and regime stability analysis. The 95% confidence bands help users understand the statistical significance of current readings and identify when conditions reach historically extreme levels.
Volatility analysis provides insight into the stability of current conditions, with higher volatility indicating greater uncertainty and potential for rapid changes. The regime stability measure, calculated as the inverse of volatility, helps assess the sustainability of current conditions.
6. Risk Management and Limitations
6.1 Model Limitations and Assumptions
The SBO-SAM model operates under several important assumptions that users must understand for proper interpretation. The model assumes that historical relationships between economic variables remain stable over time, though the regime-switching framework helps accommodate some structural changes. The 252-day lookback period provides reasonable statistical power while maintaining sensitivity to changing conditions, but may not capture longer-term structural shifts.
The model's reliance on publicly available economic data introduces inherent lags in some components, particularly those based on government statistics. Users should consider these timing differences when interpreting real-time conditions. Additionally, the model's focus on quantitative factors may not fully capture qualitative factors such as regulatory changes, geopolitical events, or technological disruptions that could significantly impact small business conditions.
The model's timeframe restrictions ensure statistical validity by preventing application to intraday periods where the underlying economic relationships may be distorted by market microstructure effects, trading noise, and temporal misalignment with the fundamental data sources. Users must utilize daily or longer timeframes to ensure the model's statistical foundations remain valid and interpretable.
6.2 Data Quality and Reliability Considerations
The model's accuracy depends heavily on the quality and availability of underlying economic data. Market-based components such as equity indices and bond prices provide real-time information but may be subject to short-term volatility unrelated to fundamental conditions. Economic statistics provide more stable fundamental information but may be subject to revisions and reporting delays.
Users should be aware that extreme market conditions may temporarily distort some components, particularly those based on financial market data. The model's statistical normalization helps mitigate these effects, but users should exercise additional caution during periods of market stress or unusual volatility.
6.3 Interpretation Caveats and Best Practices
The SBO-SAM model provides statistical analysis and should not be interpreted as investment advice or predictive forecasting. The model's output represents an assessment of current conditions based on historical relationships and may not accurately predict future outcomes. Users should combine the model's insights with other analytical tools and fundamental analysis for comprehensive decision-making.
The model's regime classifications are based on historical percentile rankings and may not fully capture the unique characteristics of current economic conditions. Users should consider the broader economic context and potential structural changes when interpreting regime classifications.
7. Academic References and Bibliography
Bernanke, B. S., & Blinder, A. S. (1992). The Federal Funds Rate and the Channels of Monetary Transmission. American Economic Review, 82(4), 901-921.
Bernanke, B. S., & Gertler, M. (1995). Inside the Black Box: The Credit Channel of Monetary Policy Transmission. Journal of Economic Perspectives, 9(4), 27-48.
Boot, A. W. A. (2000). Relationship Banking: What Do We Know? Journal of Financial Intermediation, 9(1), 7-25.
Chow, G. C. (1960). Tests of Equality Between Sets of Coefficients in Two Linear Regressions. Econometrica, 28(3), 591-605.
Dunkelberg, W. C., & Wade, H. (2023). NFIB Small Business Economic Trends. National Federation of Independent Business Research Foundation, Washington, D.C.
Engle, R. F., & Granger, C. W. J. (1987). Co-integration and Error Correction: Representation, Estimation, and Testing. Econometrica, 55(2), 251-276.
Fama, E. F. (1970). Efficient Capital Markets: A Review of Theory and Empirical Work. Journal of Finance, 25(2), 383-417.
Federal Reserve Board. (2024). Senior Loan Officer Opinion Survey on Bank Lending Practices. Board of Governors of the Federal Reserve System, Washington, D.C.
Friedman, M. (1957). A Theory of the Consumption Function. Princeton University Press, Princeton, NJ.
Gordon, R. J. (1988). The Role of Wages in the Inflation Process. American Economic Review, 78(2), 276-283.
Hamilton, J. D. (1989). A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle. Econometrica, 57(2), 357-384.
Hurst, H. E. (1951). Long-term Storage Capacity of Reservoirs. Transactions of the American Society of Civil Engineers, 116(1), 770-799.
Kahneman, D., & Tversky, A. (1979). Prospect Theory: An Analysis of Decision under Risk. Econometrica, 47(2), 263-291.
Krugman, P. (1987). Pricing to Market When the Exchange Rate Changes. In S. W. Arndt & J. D. Richardson (Eds.), Real-Financial Linkages among Open Economies (pp. 49-70). MIT Press, Cambridge, MA.
Markowitz, H. (1952). Portfolio Selection. Journal of Finance, 7(1), 77-91.
Mortensen, D. T., & Pissarides, C. A. (1994). Job Creation and Job Destruction in the Theory of Unemployment. Review of Economic Studies, 61(3), 397-415.
Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27(3), 379-423.
Sharpe, W. F. (1964). Capital Asset Prices: A Theory of Market Equilibrium under Conditions of Risk. Journal of Finance, 19(3), 425-442.
Tsay, R. S. (2010). Analysis of Financial Time Series (3rd ed.). John Wiley & Sons, Hoboken, NJ.
U.S. Small Business Administration. (2024). Small Business Profile. Office of Advocacy, Washington, D.C.
8. Technical Implementation Notes
The SBO-SAM model is implemented in Pine Script version 6 for the TradingView platform, ensuring compatibility with modern charting and analysis tools. The implementation follows best practices for financial indicator development, including proper error handling, data validation, and performance optimization.
The model includes comprehensive timeframe validation to ensure statistical accuracy and reliability. The indicator operates exclusively on daily (1D) timeframes or higher, including weekly (1W), monthly (1M), and longer periods. This restriction ensures that the statistical analysis maintains appropriate temporal resolution for the underlying economic data sources, which are primarily reported on daily or longer intervals.
When users attempt to apply the model to intraday timeframes (such as 1-minute, 5-minute, 15-minute, 30-minute, 1-hour, 2-hour, 4-hour, 6-hour, 8-hour, or 12-hour charts), the system displays a comprehensive error message in the user's selected language and prevents execution. This safeguard protects users from potentially misleading results that could occur when applying daily-based economic analysis to shorter timeframes where the underlying data relationships may not hold.
The model's statistical calculations are performed using vectorized operations where possible to ensure computational efficiency. The multi-language support system employs Unicode character encoding to ensure proper display of international characters across different platforms and devices.
The alert system utilizes TradingView's native alert functionality, providing users with flexible notification options including email, SMS, and webhook integrations. The alert messages include comprehensive statistical information to support informed decision-making.
The model's visualization system employs professional color schemes designed for optimal readability across different chart backgrounds and display devices. The system includes dynamic color transitions based on momentum and volatility, professional glow effects for enhanced line visibility, and transparency controls that allow users to customize the visual intensity to match their preferences and analytical requirements. The clean confidence band implementation provides clear statistical boundaries without visual distractions, maintaining focus on the analytical content.
Deviation Rate Crash SignalDescription
This indicator provides entry signals for contrarian trades that aim to capture rebounds after sharp declines, such as during market crashes.
A signal is triggered when the deviation rate from the 25-day moving average falls below -25% (default setting). On the chart, a red circle is displayed below the candlestick to indicate the signal.
Backtest (2000–2024, Nikkei 225 stocks):
Win rate: 64.73%
Payoff ratio: 1.141
Probability of ruin: 0.0% (with proper risk control)
Trading Rules (Long only):
Entry: Market buy at next day’s open when the closing price is 25% or more below the 25-day MA.
Exit: Market sell at next day’s open when:
The closing price is 10% above the entry price (take profit), or
The closing price is 10% below the entry price (stop loss), or
40 days have passed since entry.
Notes:
This indicator is tuned for crisis periods (e.g., 2008 Lehman Shock, 2011 Great East Japan Earthquake, 2020 COVID-19 crash, 2024 Yen carry trade reversal).
In normal market conditions, signals will be rare.
Pine Screener BETA Support:
Add this indicator to your favorites and scan with long condition = true.
Screener results display both the MA deviation rate and current price.
When multiple signals occur, use the deviation rate as a reference to prioritize setups.
説明
このインジケーターは、暴落時など短期間で急落した銘柄のリバウンドを狙う逆張りトレードのエントリーシグナルを提供します。
25日移動平均線からの乖離率が -25% を下回ったときにシグナルが点灯します(初期設定)。シグナルはメインチャートのローソク足の下に赤い丸印で表示されます。
バックテスト結果(2000~2024年、日経225銘柄):
勝率: 64.73%
ペイオフレシオ: 1.141
破産確率: 0.0%(適切なリスク管理を行った場合)
トレードルール(買いのみ):
エントリー: 終値が25日移動平均線から25%以上下方乖離した場合、翌日の寄り付きで成行買い。
手仕舞い: 翌日の寄り付きで成行売り(以下のいずれかの条件を満たした場合)
終値が買値より10%以上上昇(利確)
終値が買値より10%以上下落(損切り)
エントリーから40日経過
注意点:
このインジケーターは、2008年リーマンショック、2011年東日本大震災、2020年コロナショック、2024年円キャリートレード巻き戻しショックなど、危機的局面で効果を発揮するように調整されています。
通常の相場ではシグナルはほとんど出現しません。
Pine Screener BETA 対応:
このインジケーターをお気に入り登録し、long condition = true をフィルター条件にしてスキャンしてください。
スクリーナー結果には移動平均乖離率と現在値が表示されます。
シグナルが同時に多数出現した場合は、移動平均乖離率を参考に優先順位をつけてください。
Advanced VWAP CalendarThe Advanced VWAP Calendar is a designed to plot Volume Weighted Average Price (VWAP) lines anchored to user-defined and preset time periods, including weekly, monthly, quarterly, and custom anchors. As of August 15, 2025, this indicator provides traders with a robust tool for analyzing price trends relative to volume-weighted averages, with clear labeling and extensive customization options. Below is a summary of its key features and functionality, with technical details and code references updated to focus on user-facing behavior and presentation, while preserving all other aspects of the original summary.
Key Features
Multiple Time Period VWAPs:
Weekly VWAPs: Supports up to five VWAPs for a user-selected month and year, starting at midnight each Monday (e.g., W1 Aug 2025, W2 Aug 2025). Enabled via a single toggle, with anchors automatically set to the first Monday of the chosen month.
Monthly VWAPs: Plots VWAPs for all 12 months of a selected year (e.g., Jan 2025, Feb 2025) or a single user-specified month/year. Labels use month abbreviations (e.g., "Aug 2025").
Quarterly VWAPs: Covers four quarters of a selected year (e.g., Q1 2025, Q2 2025), with options to enable all quarters or individual ones (Q1–Q4).
Legacy VWAPs: Provides monthly and quarterly VWAPs for a user-selected legacy year (e.g., 2024), labeled with a "Legacy" prefix (e.g., "Legacy Jan 2024," "Legacy Q1 2024"), with similar enablement options.
Custom VWAPs: Includes 10 fully customizable VWAPs, each with user-defined anchor times, labels (e.g., "Q1 2025"), colors, line widths (1–5), text colors, bubble styles, text sizes (8–40), and background options.
Clear and Dynamic Labeling:
Labels appear to the right of the chart, showing the VWAP value (e.g., "Q1 2025 123.45").
Weekly labels follow a "W# Month Year" format (e.g., "W1 Aug 2025").
Monthly labels use abbreviated months (e.g., "Aug 2025"), while quarterly labels use "Q# Year" (e.g., "Q3 2025").
Legacy labels include a "Legacy" prefix (e.g., "Legacy Q1 2024").
Labels support customizable text sizes (tiny to huge) and can be displayed with or without a background, with optional bubble styles.
Flexible Customization:
Each VWAP can be enabled or disabled independently, with user inputs for anchor times, labels, and visual properties.
Colors are predefined for weekly (red, orange, blue, green, purple), monthly (varied), quarterly (red, blue, green, yellow), and legacy VWAPs, but custom VWAPs allow any color selection.
Line widths and text sizes are adjustable, ensuring visual clarity and chart readability.
This indicator was a dual effort, code was heavily contributed in effort by AzDxB, major credit and THANKS goes to him www.tradingview.com
VWAP CALENDARThe VWAP CALENDAR indicator plots up to 20 anchored Volume-Weighted Average Price (VWAP) lines on your chart, each starting from a user-defined date and time (e.g., April 20, 2024). Designed for simplicity, it helps traders visualize VWAPs for key events or dates, with customizable labels and colors. The indicator is optimized for crypto markets (e.g., BTC/USD) but works with any symbol providing volume data.
Features: Multiple VWAPs: Configure up to 20
independent VWAPs, each with a custom anchor date and time.
Dynamic Labels: Labels update in real-time, aligning precisely with each VWAP line’s price level, positioned to the right of the chart for clarity.
Customizable Settings: Adjust label text (e.g., “Event A”), line colors, line widths (1–5 pixels), text colors, and text sizes (8–40 points, default 22).
Bubble or No-Background Labels: Choose between bubble-style labels (with colored backgrounds) or plain text labels without backgrounds.
Timeframe Support: Accurate on daily, 4-hour, 1-hour, and 30-minute charts for anchors within ~1.5 years (e.g., April 20, 2024, from August 2025).
Limitations: VWAP accuracy for anchors like April 20, 2024 (~477 days back) is reliable on 1-hour and larger timeframes. Below 30-minute (e.g., 15-minute, 24-minute), VWAPs may start later or be unavailable due to TradingView’s 5,000-bar historical data limit. For distant anchors, use 4-hour or daily charts to ensure accuracy.
Requires sufficient chart history (e.g., premium account or deep exchange data) for older anchors on 1-hour or 30-minute charts.
Usage Notes: Set anchor dates via the indicator settings (e.g., “2024-04-20 00:00”).
Enable/disable individual VWAPs as needed.
Zoom out to load maximum chart history for best results, especially on 1-hour or 30-minute timeframes.
Ideal for crypto symbols with continuous trading data, but verify data availability for other markets.
Disclaimer:
This is a free indicator provided as-is
VWAP CALENDARThe VWAP CALENDAR indicator plots up to 20 anchored Volume-Weighted Average Price (VWAP) lines on your chart, each starting from a user-defined date and time (e.g., April 20, 2024). Designed for simplicity, it helps traders visualize VWAPs for key events or dates, with customizable labels and colors. The indicator is optimized for crypto markets (e.g., BTC/USD) but works with any symbol providing volume data.
Features: Multiple VWAPs: Configure up to 20 independent VWAPs, each with a custom anchor date and time.
Dynamic Labels: Labels update in real-time, aligning precisely with each VWAP line’s price level, positioned to the right of the chart for clarity.
Customizable Settings: Adjust label text (e.g., “Event A”), line colors, line widths (1–5 pixels), text colors, and text sizes (8–40 points, default 22).
Bubble or No-Background Labels: Choose between bubble-style labels (with colored backgrounds) or plain text labels without backgrounds.
Timeframe Support: Accurate on daily, 4-hour, 1-hour, and 30-minute charts for anchors within ~1.5 years (e.g., April 20, 2024, from August 2025).
Limitations: VWAP accuracy for anchors like April 20, 2024 (~477 days back) is reliable on 1-hour and larger timeframes. Below 30-minute (e.g., 15-minute, 24-minute), VWAPs may start later or be unavailable due to TradingView’s 5,000-bar historical data limit. For distant anchors, use 4-hour or daily charts to ensure accuracy.
Requires sufficient chart history (e.g., premium account or deep exchange data) for older anchors on 1-hour or 30-minute charts.
Usage Notes: Set anchor dates via the indicator settings (e.g., “2024-04-20 00:00”).
Enable/disable individual VWAPs as needed.
Zoom out to load maximum chart history for best results, especially on 1-hour or 30-minute timeframes.
Ideal for crypto symbols with continuous trading data, but verify data availability for other markets.
Disclaimer:
This is a free indicator provided as-is.
DNSE VN301!, SMA & EMA Cross StrategyDiscover the tailored Pinescript to trade VN30F1M Future Contracts intraday, the strategy focuses on SMA & EMA crosses to identify potential entry/exit points. The script closes all positions by 14:25 to avoid holding any contracts overnight.
HNX:VN301!
www.tradingview.com
Setting & Backtest result:
1-minute chart, initial capital of VND 100 million, entering 4 contracts per time, backtest result from Jan-2024 to Nov-2024 yielded a return over 40%, executed over 1,000 trades (average of 4 trades/day), winning trades rate ~ 30% with a profit factor of 1.10.
The default setting of the script:
A decent optimization is reached when SMA and EMA periods are set to 60 and 15 respectively while the Long/Short stop-loss level is set to 20 ticks (2 points) from the entry price.
Entry & Exit conditions:
Long signals are generated when ema(15) crosses over sma(60) while Short signals happen when ema(15) crosses under sma(60). Long orders are closed when ema(15) crosses under sma(60) while Short orders are closed when ema(15) crosses over sma(60).
Exit conditions happen when (whichever came first):
Another Long/Short signal is generated
The Stop-loss level is reached
The Cut-off time is reached (14:25 every day)
*Disclaimers:
Futures Contracts Trading are subjected to a high degree of risk and price movements can fluctuate significantly. This script functions as a reference source and should be used after users have clearly understood how futures trading works, accessed their risk tolerance level, and are knowledgeable of the functioning logic behind the script.
Users are solely responsible for their investment decisions, and DNSE is not responsible for any potential losses from applying such a strategy to real-life trading activities. Past performance is not indicative/guarantee of future results, kindly reach out to us should you have specific questions about this script.
---------------------------------------------------------------------------------------
Khám phá Pinescript được thiết kế riêng để giao dịch Hợp đồng tương lai VN30F1M trong ngày, chiến lược tập trung vào các đường SMA & EMA cắt nhau để xác định các điểm vào/ra tiềm năng. Chiến lược sẽ đóng tất cả các vị thế trước 14:25 để tránh giữ bất kỳ hợp đồng nào qua đêm.
Thiết lập & Kết quả backtest:
Chart 1 phút, vốn ban đầu là 100 triệu đồng, vào 4 hợp đồng mỗi lần, kết quả backtest từ tháng 1/2024 tới tháng 11/2024 mang lại lợi nhuận trên 40%, thực hiện hơn 1.000 giao dịch (trung bình 4 giao dịch/ngày), tỷ lệ giao dịch thắng ~ 30% với hệ số lợi nhuận là 1,10.
Thiết lập mặc định của chiến lược:
Đạt được một mức tối ưu ổn khi SMA và EMA periods được đặt lần lượt là 60 và 15 trong khi mức cắt lỗ được đặt thành 20 tick (2 điểm) từ giá vào.
Điều kiện Mở và Đóng vị thế:
Tín hiệu Long được tạo ra khi ema(15) cắt trên sma(60) trong khi tín hiệu Short xảy ra khi ema(15) cắt dưới sma(60). Lệnh Long được đóng khi ema(15) cắt dưới sma(60) trong khi lệnh Short được đóng khi ema(15) cắt lên sma(60).
Điều kiện đóng vị thể xảy ra khi (tùy điều kiện nào đến trước):
Một tín hiệu Long/Short khác được tạo ra
Giá chạm mức cắt lỗ
Lệnh chưa đóng nhưng tới giờ cut-off (14:25 hàng ngày)
*Tuyên bố miễn trừ trách nhiệm:
Giao dịch hợp đồng tương lai có mức rủi ro cao và giá có thể dao động đáng kể. Chiến lược này hoạt động như một nguồn tham khảo và nên được sử dụng sau khi người dùng đã hiểu rõ cách thức giao dịch hợp đồng tương lai, đã đánh giá mức độ chấp nhận rủi ro của bản thân và hiểu rõ về logic vận hành của chiến lược này.
Người dùng hoàn toàn chịu trách nhiệm về các quyết định đầu tư của mình và DNSE không chịu trách nhiệm về bất kỳ khoản lỗ tiềm ẩn nào khi áp dụng chiến lược này vào các hoạt động giao dịch thực tế. Hiệu suất trong quá khứ không chỉ ra/cam kết kết quả trong tương lai, vui lòng liên hệ với chúng tôi nếu bạn có thắc mắc cụ thể về chiến lược giao dịch này.
Bitcoin Log Growth Curve OscillatorThis script presents the oscillator version of the Bitcoin Logarithmic Growth Curve 2024 indicator, offering a new perspective on Bitcoin’s long-term price trajectory.
By transforming the original logarithmic growth curve into an oscillator, this version provides a normalized view of price movements within a fixed range, making it easier to identify overbought and oversold conditions.
For a comprehensive explanation of the mathematical derivation, underlying concepts, and overall development of the Bitcoin Logarithmic Growth Curve, we encourage you to explore our primary script, Bitcoin Logarithmic Growth Curve 2024, available here . This foundational script details the regression-based approach used to model Bitcoin’s long-term price evolution.
Normalization Process
The core principle behind this oscillator lies in the normalization of Bitcoin’s price relative to the upper and lower regression boundaries. By applying Min-Max Normalization, we effectively scale the price into a bounded range, facilitating clearer trend analysis. The normalization follows the formula:
normalized price = (upper regresionline − lower regressionline) / (price − lower regressionline)
This transformation ensures that price movements are always mapped within a fixed range, preventing distortions caused by Bitcoin’s exponential long-term growth. Furthermore, this normalization technique has been applied to each of the confidence interval lines, allowing for a structured and systematic approach to analyzing Bitcoin’s historical and projected price behavior.
By representing the logarithmic growth curve in oscillator form, this indicator helps traders and analysts more effectively gauge Bitcoin’s position within its long-term growth trajectory while identifying potential opportunities based on historical price tendencies.
Classic Nacked Z-Score ArbitrageThe “Classic Naked Z-Score Arbitrage” strategy employs a statistical arbitrage model based on the Z-score of the price spread between two assets. This strategy follows the premise of pair trading, where two correlated assets, typically from the same market sector, are traded against each other to profit from relative price movements (Gatev, Goetzmann, & Rouwenhorst, 2006). The approach involves calculating the Z-score of the price spread between two assets to determine market inefficiencies and capitalize on short-term mispricing.
Methodology
Price Spread Calculation:
The strategy calculates the spread between the two selected assets (Asset A and Asset B), typically from different sectors or asset classes, on a daily timeframe.
Statistical Basis – Z-Score:
The Z-score is used as a measure of how far the current price spread deviates from its historical mean, using the standard deviation for normalization.
Trading Logic:
• Long Position:
A long position is initiated when the Z-score exceeds the predefined threshold (e.g., 2.0), indicating that Asset A is undervalued relative to Asset B. This signals an arbitrage opportunity where the trader buys Asset B and sells Asset A.
• Short Position:
A short position is entered when the Z-score falls below the negative threshold, indicating that Asset A is overvalued relative to Asset B. The strategy involves selling Asset B and buying Asset A.
Theoretical Foundation
This strategy is rooted in mean reversion theory, which posits that asset prices tend to return to their long-term average after temporary deviations. This form of arbitrage is widely used in statistical arbitrage and pair trading techniques, where investors seek to exploit short-term price inefficiencies between two assets that historically maintain a stable price relationship (Avery & Sibley, 2020).
Further, the Z-score is an effective tool for identifying significant deviations from the mean, which can be seen as a signal for the potential reversion of the price spread (Braucher, 2015). By capturing these inefficiencies, traders aim to profit from convergence or divergence between correlated assets.
Practical Application
The strategy aligns with the Financial Algorithmic Trading and Market Liquidity analysis, emphasizing the importance of statistical models and efficient execution (Harris, 2024). By utilizing a simple yet effective risk-reward mechanism based on the Z-score, the strategy contributes to the growing body of research on market liquidity, asset correlation, and algorithmic trading.
The integration of transaction costs and slippage ensures that the strategy accounts for practical trading limitations, helping to refine execution in real market conditions. These factors are vital in modern quantitative finance, where liquidity and execution risk can erode profits (Harris, 2024).
References
• Gatev, E., Goetzmann, W. N., & Rouwenhorst, K. G. (2006). Pairs Trading: Performance of a Relative-Value Arbitrage Rule. The Review of Financial Studies, 19(3), 1317-1343.
• Avery, C., & Sibley, D. (2020). Statistical Arbitrage: The Evolution and Practices of Quantitative Trading. Journal of Quantitative Finance, 18(5), 501-523.
• Braucher, J. (2015). Understanding the Z-Score in Trading. Journal of Financial Markets, 12(4), 225-239.
• Harris, L. (2024). Financial Algorithmic Trading and Market Liquidity: A Comprehensive Analysis. Journal of Financial Engineering, 7(1), 18-34.
TASC 2024.05 Ultimate Channels and Ultimate Bands█ OVERVIEW
This script, inspired by the "Ultimate Channels and Ultimate Bands" article from the May 2024 edition of TASC's Traders' Tips , showcases the application of the UltimateSmoother by John Ehlers as a lag-reduced alternative to moving averages in indicators based on Keltner channels and Bollinger Bands®.
█ CONCEPTS
The UltimateSmoother , developed by John Ehlers, is a digital smoothing filter that provides minimal lag compared to many conventional smoothing filters, e.g., moving averages . Since this filter can provide a viable replacement for moving averages with reduced lag, it can potentially find broader applications in various technical indicators that utilize such averages.
This script explores its use as the smoothing filter in Keltner channels and Bollinger Bands® calculations, which traditionally rely on moving averages. By substituting averages with the UltimateSmoother function, the resulting channels or bands respond more quickly to fluctuations with substantially reduced lag.
Users can customize the script by selecting between the Ultimate channel or Ultimate bands and adjusting their parameters, including lookback lengths and band/channel width multipliers, to fine-tune the results.
█ CALCULATIONS
The calculations the Ultimate channels and Ultimate bands use closely resemble those of their conventional counterparts.
Ultimate channel:
Apply the Ultimate smoother to the `close` time series to establish the basis (center) value.
Calculate the smooth true range (STR) by applying the UltimateSmoother function with a user-specified length instead of a rolling moving average, thus replacing the conventional average true range (ATR). Users can adjust the final STR value using the "Width multiplier" input in the script's settings.
Calculate the upper channel value by adding the multiplied STR to the basis calculated in the first step, and calculate the lower channel value by subtracting the multiplied STR from the basis.
Ultimate bands:
Apply the Ultimate smoother to the `close` time series to establish the basis (center) value.
Calculate the width of the bands by finding the square root of the average of individual squared deviations over the specified length, then multiplying the result by the "Width multiplier" input value.
Calculate the upper band by adding the resulting width to the basis from the first step, and calculate the lower band by subtracting the width from the basis.
Algoflow's Levels PlotterAlgoflow's Levels Plotter - Indicator
Release Date: Jan. 15, 2024
Release version: v3 r1
Release notes date: Jan. 15, 2024
Overview
Parses user's input of levels to be plotted and labeled on the chart for NQ & ES futures
Features
Quick plotting of predetermined price levels.
- Type or copy from another source of values in a predetermined output format.
Supports separate line plotting for Weekly, OVN and RTH values
- Plot only Weekly, OVN or RTH levels, or all
- Configure colors separately for Inflection Points, Weekly, OVN & RTH levels
- Shift/place price labels separately to easily identify levels
User Impacts of Changes
Requires users to remove previous version and re-add indicator "Algoflow's Levels Plotter", then re-add values. Colors and shift values will need to be re-entered and/or reconfigured
Support
Questions, feedbacks, and requests are welcomed. Please feel free to use Comments or direct private message via TradingView.
Quick usage notes:
The indicator allows you to enter data for both ES & NQ at the same time. This is useful in single chart window/layout situations, like viewing on the phone. When you switch between futures, the data is already there.
If you leave the entries blank, nothing will be plotted. This is useful if you want to have separate charts for ES & NQ. So you can just enter only the relevant data of either.
As an indicator, input values are saved within it, until it is removed from the chart. Input for one chart will not update other charts of the same ticker, even in the same layout.
The easiest and quickest way to share the inputs across all charts and layouts is to use the Indicator Templates feature.
- After input values are entered (for both ES & NQ futures) via the indicator's Settings, select ""Save as Default"".
- Click on ""Indicator Templates"" (4 squares icon), and click on ""Save Indicator template...""
- Remove the previous version of the indicator in other charts.
- Click on ""Indicator Templates"" icon, and select the newly created template. Repeat this for other charts of the same futures ticker
The labels can be disabled in settings > Style tab. Use the Inputs tab to configure orientation (left or right of current bar on chart), and how much spacing from the current (in distance of bars)
Format example:
Primary directional inflection point: 1234
For Bulls: 1244.25, 1254, 1264.50
For Bears: 1224, 1214, 1204
Changes
v3 r1 - Fixed erroneous default values in Weekly input sections. Added options to en/disable display of each set (session) of levels. Default label text size to normal, from small.
- Jan 15, 2024
v2 r9 - Added support for USTEC & US500.
- Dec. 10, 2023
v2 r8 - Added configuration features for users to modify the labels' text colors and size. Simplified code further by moving inputs processing modules into a single user function.
- Oct. 31, 2023
v2 r7 - Added support for the micro NQ & ES. Modified to ignore string case in inputs
- Oct 18, 2023
v2 r4 - Added support of weekly lines and labels features. Began the process of optimizing/simplifying code
- Oct. 15, 2023
v2 r3 - Made Inflection Point levels' colors configurable
- Oct. 04, 2023
v2 r2 - Removed comments & debug codes from development build revision #518
- Oct. 04, 2023
v2 r1 - Released from development revision #518. Major rewrite to fix previous and overlapping plots of lines and labels.
- Oct. 04, 2023
v1 r2 - First release of indicator
- Oct. 02, 2023
VoVix DEVMA🌌 VoVix DEVMA: A Deep Dive into Second-Order Volatility Dynamics
Welcome to VoVix+, a sophisticated trading framework that transcends traditional price analysis. This is not merely another indicator; it is a complete system designed to dissect and interpret the very fabric of market volatility. VoVix+ operates on the principle that the most powerful signals are not found in price alone, but in the behavior of volatility itself. It analyzes the rate of change, the momentum, and the structure of market volatility to identify periods of expansion and contraction, providing a unique edge in anticipating major market moves.
This document will serve as your comprehensive guide, breaking down every mathematical component, every user input, and every visual element to empower you with a profound understanding of how to harness its capabilities.
🔬 THEORETICAL FOUNDATION: THE MATHEMATICS OF MARKET DYNAMICS
VoVix+ is built upon a multi-layered mathematical engine designed to measure what we call "second-order volatility." While standard indicators analyze price, and first-order volatility indicators (like ATR) analyze the range of price, VoVix+ analyzes the dynamics of the volatility itself. This provides insight into the market's underlying state of stability or chaos.
1. The VoVix Score: Measuring Volatility Thrust
The core of the system begins with the VoVix Score. This is a normalized measure of volatility acceleration or deceleration.
Mathematical Formula:
VoVix Score = (ATR(fast) - ATR(slow)) / (StDev(ATR(fast)) + ε)
Where:
ATR(fast) is the Average True Range over a short period, representing current, immediate volatility.
ATR(slow) is the Average True Range over a longer period, representing the baseline or established volatility.
StDev(ATR(fast)) is the Standard Deviation of the fast ATR, which measures the "noisiness" or consistency of recent volatility.
ε (epsilon) is a very small number to prevent division by zero.
Market Implementation:
Positive Score (Expansion): When the fast ATR is significantly higher than the slow ATR, it indicates a rapid increase in volatility. The market is "stretching" or expanding.
Negative Score (Contraction): When the fast ATR falls below the slow ATR, it indicates a decrease in volatility. The market is "coiling" or contracting.
Normalization: By dividing by the standard deviation, we normalize the score. This turns it into a standardized measure, allowing us to compare volatility thrust across different market conditions and timeframes. A score of 2.0 in a quiet market means the same, relatively, as a score of 2.0 in a volatile market.
2. Deviation Analysis (DEV): Gauging Volatility's Own Volatility
The script then takes the analysis a step further. It calculates the standard deviation of the VoVix Score itself.
Mathematical Formula:
DEV = StDev(VoVix Score, lookback_period)
Market Implementation:
This DEV value represents the magnitude of chaos or stability in the market's volatility dynamics. A high DEV value means the volatility thrust is erratic and unpredictable. A low DEV value suggests the change in volatility is smooth and directional.
3. The DEVMA Crossover: Identifying Regime Shifts
This is the primary signal generator. We take two moving averages of the DEV value.
Mathematical Formula:
fastDEVMA = SMA(DEV, fast_period)
slowDEVMA = SMA(DEV, slow_period)
The Core Signal:
The strategy triggers on the crossover and crossunder of these two DEVMA lines. This is a profound concept: we are not looking at a moving average of price or even of volatility, but a moving average of the standard deviation of the normalized rate of change of volatility.
Bullish Crossover (fastDEVMA > slowDEVMA): This signals that the short-term measure of volatility's chaos is increasing relative to the long-term measure. This often precedes a significant market expansion and is interpreted as a bullish volatility regime.
Bearish Crossunder (fastDEVMA < slowDEVMA): This signals that the short-term measure of volatility's chaos is decreasing. The market is settling down or contracting, often leading to trending moves or range consolidation.
⚙️ INPUTS MENU: CONFIGURING YOUR ANALYSIS ENGINE
Every input has been meticulously designed to give you full control over the strategy's behavior. Understanding these settings is key to adapting VoVix+ to your specific instrument, timeframe, and trading style.
🌀 VoVix DEVMA Configuration
🧬 Deviation Lookback: This sets the lookback period for calculating the DEV value. It defines the window for measuring the stability of the VoVix Score. A shorter value makes the system highly reactive to recent changes in volatility's character, ideal for scalping. A longer value provides a smoother, more stable reading, better for identifying major, long-term regime shifts.
⚡ Fast VoVix Length: This is the lookback period for the fastDEVMA. It represents the short-term trend of volatility's chaos. A smaller number will result in a faster, more sensitive signal line that reacts quickly to market shifts.
🐌 Slow VoVix Length: This is the lookback period for the slowDEVMA. It represents the long-term, baseline trend of volatility's chaos. A larger number creates a more stable, slower-moving anchor against which the fast line is compared.
How to Optimize: The relationship between the Fast and Slow lengths is crucial. A wider gap (e.g., 20 and 60) will result in fewer, but potentially more significant, signals. A narrower gap (e.g., 25 and 40) will generate more frequent signals, suitable for more active trading styles.
🧠 Adaptive Intelligence
🧠 Enable Adaptive Features: When enabled, this activates the strategy's performance tracking module. The script will analyze the outcome of its last 50 trades to calculate a dynamic win rate.
⏰ Adaptive Time-Based Exit: If Enable Adaptive Features is on, this allows the strategy to adjust its Maximum Bars in Trade setting based on performance. It learns from the average duration of winning trades. If winning trades tend to be short, it may shorten the time exit to lock in profits. If winners tend to run, it will extend the time exit, allowing trades more room to develop. This helps prevent the strategy from cutting winning trades short or holding losing trades for too long.
⚡ Intelligent Execution
📊 Trade Quantity: A straightforward input that defines the number of contracts or shares for each trade. This is a fixed value for consistent position sizing.
🛡️ Smart Stop Loss: Enables the dynamic stop-loss mechanism.
🎯 Stop Loss ATR Multiplier: Determines the distance of the stop loss from the entry price, calculated as a multiple of the current 14-period ATR. A higher multiplier gives the trade more room to breathe but increases risk per trade. A lower multiplier creates a tighter stop, reducing risk but increasing the chance of being stopped out by normal market noise.
💰 Take Profit ATR Multiplier: Sets the take profit target, also as a multiple of the ATR. A common practice is to set this higher than the Stop Loss multiplier (e.g., a 2:1 or 3:1 reward-to-risk ratio).
🏃 Use Trailing Stop: This is a powerful feature for trend-following. When enabled, instead of a fixed stop loss, the stop will trail behind the price as the trade moves into profit, helping to lock in gains while letting winners run.
🎯 Trail Points & 📏 Trail Offset ATR Multipliers: These control the trailing stop's behavior. Trail Points defines how much profit is needed before the trail activates. Trail Offset defines how far the stop will trail behind the current price. Both are based on ATR, making them fully adaptive to market volatility.
⏰ Maximum Bars in Trade: This is a time-based stop. It forces an exit if a trade has been open for a specified number of bars, preventing positions from being held indefinitely in stagnant markets.
⏰ Session Management
These inputs allow you to confine the strategy's trading activity to specific market hours, which is crucial for day trading instruments that have defined high-volume sessions (e.g., stock market open).
🎨 Visual Effects & Dashboard
These toggles give you complete control over the on-chart visuals and the dashboard. You can disable any element to declutter your chart or focus only on the information that matters most to you.
📊 THE DASHBOARD: YOUR AT-A-GLANCE COMMAND CENTER
The dashboard centralizes all critical information into one compact, easy-to-read panel. It provides a real-time summary of the market state and strategy performance.
🎯 VOVIX ANALYSIS
Fast & Slow: Displays the current numerical values of the fastDEVMA and slowDEVMA. The color indicates their direction: green for rising, red for falling. This lets you see the underlying momentum of each line.
Regime: This is your most important environmental cue. It tells you the market's current state based on the DEVMA relationship. 🚀 EXPANSION (Green) signifies a bullish volatility regime where explosive moves are more likely. ⚛️ CONTRACTION (Purple) signifies a bearish volatility regime, where the market may be consolidating or entering a smoother trend.
Quality: Measures the strength of the last signal based on the magnitude of the DEVMA difference. An ELITE or STRONG signal indicates a high-conviction setup where the crossover had significant force.
PERFORMANCE
Win Rate & Trades: Displays the historical win rate of the strategy from the backtest, along with the total number of closed trades. This provides immediate feedback on the strategy's historical effectiveness on the current chart.
EXECUTION
Trade Qty: Shows your configured position size per trade.
Session: Indicates whether trading is currently OPEN (allowed) or CLOSED based on your session management settings.
POSITION
Position & PnL: Displays your current position (LONG, SHORT, or FLAT) and the real-time Profit or Loss of the open trade.
🧠 ADAPTIVE STATUS
Stop/Profit Mult: In this simplified version, these are placeholders. The primary adaptive feature currently modifies the time-based exit, which is reflected in how long trades are held on the chart.
🎨 THE VISUAL UNIVERSE: DECIPHERING MARKET GEOMETRY
The visuals are not mere decorations; they are geometric representations of the underlying mathematical concepts, designed to give you an intuitive feel for the market's state.
The Core Lines:
FastDEVMA (Green/Maroon Line): The primary signal line. Green when rising, indicating an increase in short-term volatility chaos. Maroon when falling.
SlowDEVMA (Aqua/Orange Line): The baseline. Aqua when rising, indicating a long-term increase in volatility chaos. Orange when falling.
🌊 Morphism Flow (Flowing Lines with Circles):
What it represents: This visualizes the momentum and strength of the fastDEVMA. The width and intensity of the "beam" are proportional to the signal strength.
Interpretation: A thick, steep, and vibrant flow indicates powerful, committed momentum in the current volatility regime. The floating '●' particles represent kinetic energy; more particles suggest stronger underlying force.
📐 Homotopy Paths (Layered Transparent Boxes):
What it represents: These layered boxes are centered between the two DEVMA lines. Their height is determined by the DEV value.
Interpretation: This visualizes the overall "volatility of volatility." Wider boxes indicate a chaotic, unpredictable market. Narrower boxes suggest a more stable, predictable environment.
🧠 Consciousness Field (The Grid):
What it represents: This grid provides a historical lookback at the DEV range.
Interpretation: It maps the recent "consciousness" or character of the market's volatility. A consistently wide grid suggests a prolonged period of chaos, while a narrowing grid can signal a transition to a more stable state.
📏 Functorial Levels (Projected Horizontal Lines):
What it represents: These lines extend from the current fastDEVMA and slowDEVMA values into the future.
Interpretation: Think of these as dynamic support and resistance levels for the volatility structure itself. A crossover becomes more significant if it breaks cleanly through a prior established level.
🌊 Flow Boxes (Spaced Out Boxes):
What it represents: These are compact visual footprints of the current regime, colored green for Expansion and red for Contraction.
Interpretation: They provide a quick, at-a-glance confirmation of the dominant volatility flow, reinforcing the background color.
Background Color:
This provides an immediate, unmistakable indication of the current volatility regime. Light Green for Expansion and Light Aqua/Blue for Contraction, allowing you to assess the market environment in a split second.
📊 BACKTESTING PERFORMANCE REVIEW & ANALYSIS
The following is a factual, transparent review of a backtest conducted using the strategy's default settings on a specific instrument and timeframe. This information is presented for educational purposes to demonstrate how the strategy's mechanics performed over a historical period. It is crucial to understand that these results are historical, apply only to the specific conditions of this test, and are not a guarantee or promise of future performance. Market conditions are dynamic and constantly change.
Test Parameters & Conditions
To ensure the backtest reflects a degree of real-world conditions, the following parameters were used. The goal is to provide a transparent baseline, not an over-optimized or unrealistic scenario.
Instrument: CME E-mini Nasdaq 100 Futures (NQ1!)
Timeframe: 5-Minute Chart
Backtesting Range: March 24, 2024, to July 09, 2024
Initial Capital: $100,000
Commission: $0.62 per contract (A realistic cost for futures trading).
Slippage: 3 ticks per trade (A conservative setting to account for potential price discrepancies between order placement and execution).
Trade Size: 1 contract per trade.
Performance Overview (Historical Data)
The test period generated 465 total trades , providing a statistically significant sample size for analysis, which is well above the recommended minimum of 100 trades for a strategy evaluation.
Profit Factor: The historical Profit Factor was 2.663 . This metric represents the gross profit divided by the gross loss. In this test, it indicates that for every dollar lost, $2.663 was gained.
Percent Profitable: Across all 465 trades, the strategy had a historical win rate of 84.09% . While a high figure, this is a historical artifact of this specific data set and settings, and should not be the sole basis for future expectations.
Risk & Trade Characteristics
Beyond the headline numbers, the following metrics provide deeper insight into the strategy's historical behavior.
Sortino Ratio (Downside Risk): The Sortino Ratio was 6.828 . Unlike the Sharpe Ratio, this metric only measures the volatility of negative returns. A higher value, such as this one, suggests that during this test period, the strategy was highly efficient at managing downside volatility and large losing trades relative to the profits it generated.
Average Trade Duration: A critical characteristic to understand is the strategy's holding period. With an average of only 2 bars per trade , this configuration operates as a very short-term, or scalping-style, system. Winning trades averaged 2 bars, while losing trades averaged 4 bars. This indicates the strategy's logic is designed to capture quick, high-probability moves and exit rapidly, either at a profit target or a stop loss.
Conclusion and Final Disclaimer
This backtest demonstrates one specific application of the VoVix+ framework. It highlights the strategy's behavior as a short-term system that, in this historical test on NQ1!, exhibited a high win rate and effective management of downside risk. Users are strongly encouraged to conduct their own backtests on different instruments, timeframes, and date ranges to understand how the strategy adapts to varying market structures. Past performance is not indicative of future results, and all trading involves significant risk.
🔧 THE DEVELOPMENT PHILOSOPHY: FROM VOLATILITY TO CLARITY
The journey to create VoVix+ began with a simple question: "What drives major market moves?" The answer is often not a change in price direction, but a fundamental shift in market volatility. Standard indicators are reactive to price. We wanted to create a system that was predictive of market state. VoVix+ was designed to go one level deeper—to analyze the behavior, character, and momentum of volatility itself.
The challenge was twofold. First, to create a robust mathematical model to quantify these abstract concepts. This led to the multi-layered analysis of ATR differentials and standard deviations. Second, to make this complex data intuitive and actionable. This drove the creation of the "Visual Universe," where abstract mathematical values are translated into geometric shapes, flows, and fields. The adaptive system was intentionally kept simple and transparent, focusing on a single, impactful parameter (time-based exits) to provide performance feedback without becoming an inscrutable "black box." The result is a tool that is both profoundly deep in its analysis and remarkably clear in its presentation.
⚠️ RISK DISCLAIMER AND BEST PRACTICES
VoVix+ is an advanced analytical tool, not a guarantee of future profits. All financial markets carry inherent risk. The backtesting results shown by the strategy are historical and do not guarantee future performance. This strategy incorporates realistic commission and slippage settings by default, but market conditions can vary. Always practice sound risk management, use position sizes appropriate for your account equity, and never risk more than you can afford to lose. It is recommended to use this strategy as part of a comprehensive trading plan. This was developed specifically for Futures
"The prevailing wisdom is that markets are always right. I take the opposite view. I assume that markets are always wrong. Even if my assumption is occasionally wrong, I use it as a working hypothesis."
— George Soros
— Dskyz, Trade with insight. Trade with anticipation.
Canuck Trading Trader StrategyCanuck Trading Trader Strategy
Overview
The Canuck Trading Trader Strategy is a high-performance, trend-following trading system designed for NASDAQ:TSLA on a 15-minute timeframe. Optimized for precision and profitability, this strategy leverages short-term price trends to capture consistent gains while maintaining robust risk management. Ideal for traders seeking an automated, data-driven approach to trading Tesla’s volatile market, it delivers strong returns with controlled drawdowns.
Key Features
Trend-Based Entries: Identifies short-term trends using a 2-candle lookback period and a minimum trend strength of 0.2%, ensuring responsive trade signals.
Risk Management: Includes a configurable 3.0% stop-loss to cap losses and a 2.0% take-profit to lock in gains, balancing risk and reward.
High Precision: Utilizes bar magnification for accurate backtesting, reflecting realistic trade execution with 1-tick slippage and 0.1 commission.
Clean Interface: No on-chart indicators, providing a distraction-free trading experience focused on performance.
Flexible Sizing: Allocates 10% of equity per trade with support for up to 2 simultaneous positions (pyramiding).
Performance Highlights
Backtested from March 1, 2024, to June 20, 2025, on NASDAQ:TSLA (15-minute timeframe) with $1,000,000 initial capital:
Net Profit: $2,279,888.08 (227.99%)
Win Rate: 52.94% (3,039 winning trades out of 5,741)
Profit Factor: 3.495
Max Drawdown: 2.20%
Average Winning Trade: $1,050.91 (0.55%)
Average Losing Trade: $338.20 (0.18%)
Sharpe Ratio: 2.468
Note: Past performance is not indicative of future results. Always validate with your own backtesting and forward testing.
Usage Instructions
Setup:
Apply the strategy to a NASDAQ:TSLA 15-minute chart.
Ensure your TradingView account supports bar magnification for accurate results.
Configuration:
Lookback Candles: Default is 2 (recommended).
Min Trend Strength: Set to 0.2% for optimal trade frequency.
Stop Loss: Default 3.0% to cap losses.
Take Profit: Default 2.0% to secure gains.
Order Size: 10% of equity per trade.
Pyramiding: Allows up to 2 orders.
Commission: Set to 0.1.
Slippage: Set to 1 tick.
Enable "Recalculate After Order is Filled" and "Recalculate on Every Tick" in backtest settings.
Backtesting:
Run backtests over March 1, 2024, to June 20, 2025, to verify performance.
Adjust stop-loss (e.g., 2.5%) or take-profit (e.g., 1–3%) to suit your risk tolerance.
Live Trading:
Use with a compatible broker or TradingView alerts for automated execution.
Monitor execution for slippage or latency, especially given the high trade frequency (5,741 trades).
Validate in a demo account before deploying with real capital.
Risk Disclosure
Trading involves significant risk and may result in losses exceeding your initial capital. The Canuck Trading Trader Strategy is provided for educational and informational purposes only. Users are responsible for their own trading decisions and should conduct thorough testing before using in live markets. The strategy’s high trade frequency requires reliable execution infrastructure to minimize slippage and latency.
MC Geopolitical Tension Events📌 Script Title: Geopolitical Tension Events
📖 Description:
This script highlights key geopolitical and military tension events from 1914 to 2024 that have historically impacted global markets.
It automatically plots vertical dashed lines and labels on the chart at the time of each major event. This allows traders and analysts to visually assess how markets have responded to global crises, wars, and significant political instability over time.
🧠 Use Cases:
Historical backtesting: Understand how market responded to past geopolitical shocks.
Contextual analysis: Add macro context to technical setups.
🗓️ List of Geopolitical Tension Events in the Script
Date Event Title Description
1914-07-28 WWI Begins Outbreak of World War I following the assassination of Archduke Franz Ferdinand.
1929-10-24 Wall Street Crash Black Thursday, the start of the 1929 stock market crash.
1939-09-01 WWII Begins Germany invades Poland, starting World War II.
1941-12-07 Pearl Harbor Japanese attack on Pearl Harbor; U.S. enters WWII.
1945-08-06 Hiroshima Bombing First atomic bomb dropped on Hiroshima by the U.S.
1950-06-25 Korean War Begins North Korea invades South Korea.
1962-10-16 Cuban Missile Crisis 13-day standoff between the U.S. and USSR over missiles in Cuba.
1973-10-06 Yom Kippur War Egypt and Syria launch surprise attack on Israel.
1979-11-04 Iran Hostage Crisis U.S. Embassy in Tehran seized; 52 hostages taken.
1990-08-02 Gulf War Begins Iraq invades Kuwait, triggering U.S. intervention.
2001-09-11 9/11 Attacks Coordinated terrorist attacks on the U.S.
2003-03-20 Iraq War Begins U.S.-led invasion of Iraq to remove Saddam Hussein.
2008-09-15 Lehman Collapse Bankruptcy of Lehman Brothers; peak of global financial crisis.
2014-03-01 Crimea Crisis Russia annexes Crimea from Ukraine.
2020-01-03 Soleimani Strike U.S. drone strike kills Iranian General Qasem Soleimani.
2022-02-24 Ukraine Invasion Russia launches full-scale invasion of Ukraine.
2023-10-07 Hamas-Israel War Hamas launches attack on Israel, sparking war in Gaza.
2024-01-12 Red Sea Crisis Houthis attack ships in Red Sea, prompting Western naval response.
Yearly History Calendar-Aligned Price up to 10 Years)Overview
This indicator helps traders compare historical price patterns from the past 10 calendar years with the current price action. It overlays translucent lines (polylines) for each year’s price data on the same calendar dates, providing a visual reference for recurring trends. A dynamic table at the top of the chart summarizes the active years, their price sources, and history retention settings.
Key Features
Historical Projections
Displays price data from the last 10 years (e.g., January 5, 2023 vs. January 5, 2024).
Price Source Selection
Choose from Open, Low, High, Close, or HL2 ((High + Low)/2) for historical alignment.
The selected source is shown in the legend table.
Bulk Control Toggles
Show All Years : Display all 10 years simultaneously.
Keep History for All : Preserve historical lines on year transitions.
Hide History for All : Automatically delete old lines to update with current data.
Individual Year Settings
Toggle visibility for each year (-1 to -10) independently.
Customize color and line width for each year.
Control whether to keep or delete historical lines for specific years.
Visual Alignment Aids
Vertical lines mark yearly transitions for reference.
Polylines are semi-transparent for clarity.
Dynamic Legend Table
Shows active years, their price sources, and history status (On/Off).
Updates automatically when settings change.
How to Use
Configure Settings
Projection Years : Select how many years to display (1–10).
Price Source : Choose Open, Low, High, Close, or HL2 for historical alignment.
History Precision : Set granularity (Daily, 60m, or 15m).
Daily (D) is recommended for long-term analysis (covers 10 years).
60m/15m provides finer precision but may only cover 1–3 years due to data limits.
Adjust Visibility & History
Show Year -X : Enable/disable specific years for comparison.
Keep History for Year -X : Choose whether to retain historical lines or delete them on new year transitions.
Bulk Controls
Show All Years : Display all 10 years at once (overrides individual toggles).
Keep History for All / Hide History for All : Globally enable/disable history retention for all years.
Customize Appearance
Line Width : Adjust polyline thickness for better visibility.
Colors : Assign unique colors to each year for easy identification.
Interpret the Legend Table
The table shows:
Year : Label (e.g., "Year -1").
Source : The selected price type (e.g., "Close", "HL2").
Keep History : Indicates whether lines are preserved (On) or deleted (Off).
Tips for Optimal Use
Use Daily Timeframes for Long-Term Analysis :
Daily (1D) allows 10+ years of data. Smaller timeframes (60m/15m) may have limited historical coverage.
Compare Recurring Patterns :
Look for overlaps between historical polylines and current price to identify potential support/resistance levels.
Customize Colors & Widths :
Use contrasting colors for years you want to highlight. Adjust line widths to avoid clutter.
Leverage Global Toggles :
Enable Show All Years for a quick overview. Use Keep History for All to maintain continuity across transitions.
Example Workflow
Set Up :
Select Projection Years = 5.
Choose Price Source = Close.
Set History Precision = 1D for long-term data.
Customize :
Enable Show Year -1 to Show Year -5.
Assign distinct colors to each year.
Disable Keep History for All to ensure lines update on year transitions.
Analyze :
Observe how the 2023 close prices align with 2024’s price action.
Use vertical lines to identify yearly boundaries.
Common Questions
Why are some years missing?
Ensure the chart has sufficient historical data (e.g., daily charts cover 10 years, 60m/15m may only cover 1–3 years).
How do I update the data?
Adjust the Price Source or toggle years/history settings. The legend table updates automatically.