The dimensionality problem arises when you try to analyze a large number of assets (variables) without enough historical observations (data points). Specifically:

To estimate a full-rank (invertible) covariance matrix (which measures how assets move together), you need more observations than assets.

If this condition isn’t met, the sample covariance matrix is rank-deficient, and some portfolios will incorrectly appear to have zero risk (volatility). That’s because the math forces the model to "fit" the limited data too tightly, creating spurious (false) results.

This often happens with high-dimensional data, such as analyzing hundreds of stocks with only a few years of monthly returns.
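The zero-risk artifact is easy to reproduce. A minimal sketch with NumPy, assuming a hypothetical panel of 100 stocks and only 24 monthly observations (all names and sizes here are illustrative, not from the original text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 stocks but only 24 monthly observations (T < N).
n_assets, n_obs = 100, 24
returns = rng.normal(0.0, 0.05, size=(n_obs, n_assets))

# The sample covariance matrix is N x N, but its rank is at most T - 1.
cov = np.cov(returns, rowvar=False)
rank = np.linalg.matrix_rank(cov)
print(rank)  # at most 23, far below 100

# Any portfolio in the null space of the covariance matrix
# appears to have exactly zero risk -- a spurious result.
eigvals, eigvecs = np.linalg.eigh(cov)
w = eigvecs[:, 0]          # eigenvector of the smallest eigenvalue
port_var = w @ cov @ w
print(port_var)            # numerically ~0: a "riskless" portfolio
```

The "riskless" portfolio is pure overfitting: out of sample it would carry real risk that the rank-deficient estimate simply cannot see.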

Solution: Use Factor Models
Factor models simplify the data structure by assuming:

Asset returns are mostly influenced by a few common factors (like market return, interest rates, or inflation),

Plus a unique, uncorrelated error term (specific noise) for each asset.

Mathematically, instead of needing to estimate a full N × N covariance matrix (about N²/2 distinct entries), you only need to estimate:

Each asset's exposures (betas) to the K factors, together with the small K × K factor covariance matrix, and

The variances of the asset-specific terms.

That is roughly NK + K²/2 + N parameters, which for small K is far fewer than N²/2.

This reduces the number of required observations and produces a more stable, better-conditioned covariance matrix, especially when data is limited.
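The construction above can be sketched end to end: estimate betas by regression, then assemble the structured covariance Sigma = B·Cov(f)·Bᵀ + diag(specific variances). This is a minimal sketch on simulated data, assuming hypothetical sizes (100 assets, 3 factors, 24 observations), not the author's specific model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 100 assets, 3 common factors, 24 monthly observations.
n_assets, n_factors, n_obs = 100, 3, 24

# Simulate returns from a linear factor model: r = f @ B' + specific noise.
factors = rng.normal(0.0, 0.04, size=(n_obs, n_factors))
betas_true = rng.normal(1.0, 0.3, size=(n_assets, n_factors))
specific = rng.normal(0.0, 0.02, size=(n_obs, n_assets))
returns = factors @ betas_true.T + specific

# Estimate each asset's betas by OLS regression on the factors.
coef, *_ = np.linalg.lstsq(factors, returns, rcond=None)
betas = coef.T                         # shape (n_assets, n_factors)

# Factor covariance (K x K) and asset-specific residual variances (N values).
cov_f = np.cov(factors, rowvar=False)
resid = returns - factors @ betas.T
spec_var = resid.var(axis=0, ddof=n_factors)

# Structured covariance: Sigma = B Cov(f) B' + diag(specific variances).
cov_est = betas @ cov_f @ betas.T + np.diag(spec_var)

# Full rank even though T < N: no portfolio shows spurious zero risk.
print(np.linalg.matrix_rank(cov_est))
```

Because the diagonal of specific variances is strictly positive, the resulting matrix is positive definite by construction, even with far fewer observations than assets.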

In short: Factor models reduce complexity and make risk estimates more reliable when dealing with many assets and limited data.