I’ve always admired James Bond’s knack for wriggling out of impossible situations. I said to him:
“It’s quite remarkable how smoothly you scale fences, leap from windows, and bulldoze through walls. I find myself wishing I could be a bit more like you…”
James’ response, however, was less than encouraging.
“You wish to experience claustrophobia?” he retorted, arching an eyebrow.
“You mean to tell me you’re uncomfortable in a cramped cage with tied hands?”
“Exactly,” he admitted, nodding. “And in an elevator cabin with closed doors too. I’ve never been a fan of those maze attractions either. Whenever faced with a labyrinth, I simply grab a ladder and make my escape into the third dimension.”
“But what if your enemies manage to trap you in a cell with a closed roof and floor?” I countered.
“Same strategy,” he replied confidently. “I’ll find my way out through an extra dimension.”
“But we live in a three-dimensional space, you know.”
“Are you sure?” James smirked, shaking his head.
Are we living in 3 dimensions?
Bond was onto something. The true dimensionality of our world remains a mystery. It seems our brains have settled on encoding it as a three-dimensional space, but this choice is purely pragmatic. Throughout our evolutionary history, activities in other dimensions didn’t offer any survival advantages, so our brains streamlined their processing to focus on the three dimensions most relevant to our daily lives.
It’s likely that our neural networks somehow compress the external reality to achieve this reduction in dimensionality. And no, I’m not referring to the time dimension introduced most notably by Albert Einstein (our brains haven’t quite grasped that one yet, by the way). There could be other dimensions lurking beyond our perception, but since they don’t offer practical utility in the vast majority of cases, we remain oblivious to them.
So, just like Principal Component Analysis (PCA) condenses multi-dimensional data by selecting a few major components and discarding the rest, our minds perform a similar feat. By truncating the world to three dimensions, we simplify the cognitive load, making it easier to navigate and comprehend. It’s a fascinating parallel: our mental autoencoder and PCA both strive to reduce complexity, enabling us to operate more efficiently within our dimensional framework.
The best way to truncate PCA dimensions
This truncation process lies at the heart of PCA. To illustrate it, I constructed a synthetic map featuring three compounds, each emitting distinct spectroscopic peaks. Adding Gaussian noise for realism (Poisson noise would have been more appropriate, but I opted for simplicity), I created a scenario where each pixel of the map could emit a continuous or discrete signal across 1000 energy channels. This translates to a staggering 1000 dimensions in our data space, making navigation cumbersome.
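For concreteness, here is a minimal sketch of how such a synthetic map could be generated. The map size, the peak positions and widths, and the noise level are my own illustrative choices; only the three compounds, the 1000 energy channels, and the additive Gaussian noise come from the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

n_channels = 1000                      # spectral axis: 1000 energy channels
map_shape = (64, 64)                   # hypothetical map size, not specified in the text
energy = np.arange(n_channels)

def gaussian_peak(center, width, amplitude=1.0):
    """A single Gaussian emission peak on the energy axis."""
    return amplitude * np.exp(-0.5 * ((energy - center) / width) ** 2)

# Hypothetical reference spectra for compounds A, B and C (peak positions invented).
spectra = {
    "A": gaussian_peak(200, 15) + gaussian_peak(650, 20, 0.5),
    "B": gaussian_peak(400, 10),
    "C": gaussian_peak(550, 25) + gaussian_peak(800, 12, 0.7),
}

# Assign each pixel to one compound (three vertical bands, purely for illustration).
labels = np.zeros(map_shape, dtype=int)
labels[:, map_shape[1] // 3: 2 * map_shape[1] // 3] = 1
labels[:, 2 * map_shape[1] // 3:] = 2

clean = np.stack([spectra[k] for k in ("A", "B", "C")])[labels]   # (64, 64, 1000)
noisy = clean + rng.normal(scale=0.05, size=clean.shape)          # additive Gaussian noise

# Flatten the spatial axes: one 1000-dimensional data point per pixel.
X = noisy.reshape(-1, n_channels)
print(X.shape)   # (4096, 1000)
```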
Enter PCA. By extracting, for instance, the ten most significant directions (principal components) from this 1000-dimensional space, we can squeeze the data points into a far more manageable volume. Look at the two-dimensional projection of this volume, specifically at the plane formed by the first and second principal components. You can see a clear separation of the data points corresponding to compounds A, B, and C, allowing seamless navigation among them.
However, projecting onto the (second plus third) principal components plane reveals a lack of meaningful variation along the third axis, indicating a mere Gaussian spread of noise. Similarly, projecting onto the (third plus fourth) plane yields a two-dimensional Gaussian distribution devoid of material information, serving only to quantify noise levels.
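Here is a minimal sketch of this step, building on the synthetic `X` and `labels` from the previous snippet and using scikit-learn’s PCA; the plotting details are just one way to look at the projections.

```python
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Keep the ten most significant directions, as in the text.
pca = PCA(n_components=10)
scores = pca.fit_transform(X)          # shape: (number of pixels, 10)

# Project onto the (1st, 2nd), (2nd, 3rd) and (3rd, 4th) principal-component planes.
pairs = [(0, 1), (1, 2), (2, 3)]
fig, axes = plt.subplots(1, len(pairs), figsize=(12, 4))
for ax, (i, j) in zip(axes, pairs):
    ax.scatter(scores[:, i], scores[:, j], s=2, c=labels.ravel(), cmap="viridis")
    ax.set_xlabel(f"PC{i + 1}")
    ax.set_ylabel(f"PC{j + 1}")
fig.tight_layout()
plt.show()
```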
Thus, what initially seemed like a daunting 1000-dimensional dataset reveals itself to be effectively two-dimensional. Even the ten principal components we initially extracted appear excessive.
Still, accurately determining the dataset’s true dimensionality requires a more nuanced approach. Let’s try to estimate a kind of anisotropy of these two-dimensional projections, that is, to measure how differently the data are distributed along randomly chosen directions. Such an anisotropy parameter should be zero for directionally uniform distributions and non-zero for anisotropic ones. A possible Python implementation can be found in the PDF version of this document: Full Text with Codes.
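Since the exact implementation lives in the PDF, what follows is only my rough guess at what such an anisotropy measure could look like: project the centred cloud onto many random directions and report the relative spread of the resulting standard deviations.

```python
import numpy as np

def anisotropy(points_2d, n_directions=180, rng=None):
    """Rough anisotropy estimate for a 2-D cloud of points.

    The spread (standard deviation) of the cloud is measured along many random
    directions; the relative variation of those spreads is returned. It is close
    to zero for an isotropic (circularly symmetric) cloud and grows when one
    direction dominates. This is a guess at the measure described in the text,
    not the author's exact implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    centered = points_2d - points_2d.mean(axis=0)
    angles = rng.uniform(0.0, np.pi, n_directions)
    directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # (n_directions, 2)
    spreads = (centered @ directions.T).std(axis=0)                   # std along each direction
    return (spreads.max() - spreads.min()) / spreads.mean()
```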
A straightforward computational solution emerges: identify and retain pairs of principal components exhibiting anisotropy, while discarding those that look isotropic (a rough sketch of this rule follows the links below). My approach is, of course, not the only one. You might look at the following alternatives:
https://tminka.github.io/papers/pca/
https://arxiv.org/abs/1311.0851
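As an illustration of the selection rule above (not the author’s exact procedure), one could walk along consecutive component pairs from the earlier PCA sketch and stop at the first pair whose projection looks isotropic; the threshold below is an arbitrary choice.

```python
threshold = 0.1                    # arbitrary illustrative cut-off for "isotropic"
effective_dim = scores.shape[1]    # fall-back: keep all extracted components
for i in range(scores.shape[1] - 1):
    a = anisotropy(scores[:, [i, i + 1]])
    print(f"PC{i + 1}-PC{i + 2}: anisotropy = {a:.3f}")
    if a < threshold:
        # This pair spreads like isotropic noise, so only the components
        # before the first member of the pair carry structure.
        effective_dim = i
        break

print(f"Estimated effective dimensionality: {effective_dim}")
```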
Yet, as James Bond remarked, “It doesn’t matter who or how; what matters is that the mission gets done.”
The Python code can be found in the PDF version of this document: Full Text with Codes.