Hugh wrote:
That's amazing, Quickfur!
How much on the cutting edge (literally as well as figuratively I guess lol) are you with this rendering program of yours?
Are you breaking new ground that no other program has done before elsewhere in the world?
I don't know, actually. But as far as I'm aware, I'm the only one who has combined perspective projection + hidden-surface removal (culling) + rendering of transparent ridges. The other programs I'm aware of, like Stella4D, don't do full perspective projection (by that I mean placing your camera at an arbitrary point in 4D and projecting the polytope from there; IIRC Stella4D only projects from the surface of a cell). I don't blame them for not doing it either, since for projection-at-a-distance to make any sense at all you need transparent ridges, but, as I'm finding out, even that by itself is not good enough. Due to the complexity of the 3D images, even transparent ridges fail to convey the full structure once you have more than 2-3 layers of cells in your line of sight.

So eventually you need a way of saying "highlight these groups of cells over here", or "highlight those cells over there with hexagonal faces", or "color these cells red and those cells blue", on a case-by-case basis. That means you need some kind of query language to refer to all these cells, and also a scripting language to assign complex sets of colors, visibilities, etc., in a maintainable way (i.e., you don't want to manually reassign colors to the same 600 cells every time you start a new session, or go back and recolor 600 cells in each of 20 rendering setups just because you decided to color one group of cells red instead of green---100% reproducibility is a big gain here).
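To make the reproducibility point concrete, here's a toy sketch in Python. This is emphatically not quickfur's actual query language---the cell data, the predicate, and the color scheme are all invented for illustration. The idea is just that "queries" are predicates over cell properties, and a color assignment is data living in a script rather than a pile of manual clicks:

```python
# Toy model (hypothetical, not the real program): each cell has an id,
# a face profile, and a centroid in 4D.
cells = [
    {"id": 0, "faces": {"hexagon": 8, "square": 6}, "centroid": (0.0, 0.0, 0.0, 1.0)},
    {"id": 1, "faces": {"triangle": 4},             "centroid": (0.0, 0.0, 0.0, -1.0)},
    {"id": 2, "faces": {"hexagon": 8, "square": 6}, "centroid": (1.0, 0.0, 0.0, 0.5)},
]

camera = (0.0, 0.0, 0.0, 3.0)   # camera sits on the +w axis

def facing_camera(cell):
    """Crude visibility test: cell centroid lies in the camera's half-space."""
    return sum(c * k for c, k in zip(cell["centroid"], camera)) > 0

# A "query" is just a predicate; the resulting color assignment is data,
# so rerunning the script reproduces the exact same render setup.
selection = [c["id"] for c in cells if facing_camera(c) and "hexagon" in c["faces"]]
colors = {cid: "red" for cid in selection}
print(selection, colors)
```

Change the predicate once, rerun, and all 20 rendering setups pick up the new coloring---which is exactly the maintainability win described above.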
Currently my program has a simple but quite expressive query language that lets you address any element of a polytope based on its location relative to the camera and to other elements. (My opinion is that a query language requiring the user to enter 4D coordinates is too difficult to use; besides, the whole point of the program is to help you learn 4D, not to require prior knowledge of it.)

Although I'm proud of having thought of the idea of a query language for polytopes, I'm not that proud of the current query language my program understands---it is quite limited in many respects, and inconsistent in some places. The scripting language is OK, but could've been designed better. It's also not as powerful as I would like; for example, it'd be nice if it had a built-in graph-coloring solver so that I don't have to manually 3-color an icosidodecahedron or 4-color a dodecahedron every single time I do a set of renders on a 120-cell family polytope. (Yes, there are things that are still not automated. At least it's better than it used to be: I used to have to discover the cell connectivity graph by hand, using the limited query language to find neighbouring cells, then draw the graph out on paper, manually assign colors to it, and copy the cell numbers back into the script... yeah, who knows what made me endure such an insanely boring task. Nowadays the program can at least automatically compute the connectivity graph of a selected set of polytope elements and output a graph definition that I can feed to a graph visualization tool like Graphviz, so that I can color the thing and then use a shell-script hack to collect the cell numbers back together... ugh, there's still so much room for improvement here.)
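For what it's worth, the missing graph-coloring piece isn't hard to sketch. Here's a toy illustration (my own code, not part of the program described above) that greedily colors a cell-adjacency graph and emits Graphviz DOT. Greedy coloring uses at most max-degree+1 colors, so it won't always match the minimal coloring a human finds, but it's deterministic and reproducible:

```python
def greedy_coloring(adjacency):
    """adjacency: {node: set(neighbours)} -> {node: colour index}."""
    colors = {}
    for node in sorted(adjacency):              # fixed order => reproducible
        used = {colors[n] for n in adjacency[node] if n in colors}
        colors[node] = next(c for c in range(len(adjacency)) if c not in used)
    return colors

def to_dot(adjacency, colors, palette=("red", "green", "blue", "yellow")):
    """Emit a Graphviz DOT graph with one fillcolor per colour class."""
    lines = ["graph cells {"]
    for node, c in colors.items():
        lines.append(f"  {node} [style=filled, fillcolor={palette[c]}];")
    seen = set()
    for node, nbrs in adjacency.items():
        for n in nbrs:
            if (n, node) not in seen:           # emit each edge once
                seen.add((node, n))
                lines.append(f"  {node} -- {n};")
    lines.append("}")
    return "\n".join(lines)

# Toy adjacency graph (a 4-cycle of cells); a real one would come from
# the program's connectivity-graph output.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
coloring = greedy_coloring(adj)
dot = to_dot(adj, coloring)
print(dot)
```

Feeding the printed DOT to `dot -Tpng` would show the colored cell graph directly, skipping the paper-and-pencil step.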
When you say that you "could've just downloaded precomputed coordinates off the 'Net somewhere" does that mean that this has already been done in great detail by other programs and supercomputers?
Well, the coordinates for these polytopes are either already known by the researchers who first discovered them, or can be computed by various means, such as by iteratively closing the polytope based on known values in its vertex figure. If I were lazy, I could've just used these existing resources to get the coordinates I needed.
However, being the stubborn perfectionist that I am, I was unhappy with these existing coordinates, for several reasons:

(1) Many sources of full coordinates (i.e., files containing the entire list of coordinates) were computed using low-precision floating point, so you get only about 6-10 digits of mantissa at most---very bad for Povray, because Povray is very picky: if some polygons don't line up to within a tiny margin of error, it will either complain loudly or produce noticeable artifacts in the output.

(2) Computing coordinates from vertex figures suffers from the same problem, in that roundoff error accumulates as you iterate---and with things like the 120-cell family polytopes, you can't help but iterate a large number of times, so you get rather poor-quality coordinates that way.

(3) Besides, such coordinates have poor orientation---some sets of coordinates I've looked at have the polytope in some unknown, possibly random orientation, which is a pain to reorient manually so that the faces line up with the coordinate axes, or at least sit in some kind of symmetrical orientation.

(4) I don't know how much confidence one can have that the coordinates are accurate: if you only have 6-10 digits of mantissa, plus accumulated roundoff error from the vertex-figure method, how sure are you that the coordinates are anything close to the real thing? Plus, if a particular computation fails because of roundoff error, you have no way of fixing the problem if your source coordinates were low-precision to begin with.

(5) How do you know the vertex-figure coordinates you downloaded are accurate, if you don't know how they were derived and they come with only 6-10 digits of precision? And what if you need higher precision---then what?
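The accumulation complaint in point (2) is easy to demonstrate with a small Python experiment (my own illustration, not his code): apply the same rotation 10,000 times, once in ordinary doubles and once while chopping every intermediate value to 7 significant digits, and measure how far the point drifts off the unit circle:

```python
import math

def rot(x, y, c, s):
    """Rotate (x, y) by the angle whose cosine/sine are c, s."""
    return c*x - s*y, s*x + c*y

def chop(v, digits=7):
    """Simulate storing only ~7 significant digits at every step."""
    return float(f"{v:.{digits}g}")

c, s = math.cos(0.1), math.sin(0.1)
x, y = 1.0, 0.0          # full double precision
xl, yl = 1.0, 0.0        # low-precision twin
for _ in range(10000):
    x, y = rot(x, y, c, s)
    nx, ny = rot(xl, yl, chop(c), chop(s))
    xl, yl = chop(nx), chop(ny)

# both points should still lie on the unit circle; measure the drift
drift_double = abs(math.hypot(x, y) - 1.0)
drift_low = abs(math.hypot(xl, yl) - 1.0)
print(drift_double, drift_low)
```

The chopped version drifts off the circle by orders of magnitude more than the double version, which is exactly the failure mode you hit when iterating a vertex-figure construction in low precision.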
So the only way of being sure about coordinates is either (1) store them in algebraic form, so that you can compute them to any precision you want for your needs, or (2) write algorithms for computing them, so that, at least in theory, if you needed higher precision you just compute more accurate starting coordinates and then run the algorithms again (and preferably the starting coordinates would have a known algebraic form that you can use to derive as many digits of precision as you need).
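As a tiny illustration of option (1) (my own sketch, using Python's `decimal` module): if the algebraic form is known---say the golden ratio (1+√5)/2, ubiquitous in the 120-cell family---you can evaluate it to whatever precision the current job needs:

```python
from decimal import Decimal, getcontext

def phi(digits):
    """Evaluate the stored algebraic form (1 + sqrt(5))/2 to `digits` digits."""
    getcontext().prec = digits + 5          # a few guard digits
    return (1 + Decimal(5).sqrt()) / 2

print(phi(50))   # fifty digits on demand; the algebraic form never degrades
```

The stored artifact is the formula, not any particular decimal expansion, so "precision" becomes a rendering parameter rather than a property of the data.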
Also, recomputing these things from scratch has the advantage of testing your knowledge: do you REALLY know these 4D polytopes well enough to compute their coordinates? If you write an algorithm and compute some coordinates, and then find that they produce some strange, irregular polytope with irregular facets, then, well, you've still got more to learn.
But if you get expected results that match up with published data on the same polytope, then you can have the confidence that you have the proper understanding of it.
Another thing about recomputing these things is... finding nice coordinates that you can read without your eyes glazing over pages upon pages of seemingly arbitrarily-complex algebraic expressions. For example, the coordinates of the 120-cell that everybody uses nowadays were actually a late derivation, based on the discovery that the 600-cell can be derived from the snub 24-cell, which gives you relatively nice-looking coordinates. The original 120-cell coordinates... well, I haven't seen them myself, but can you imagine? The coordinates we have now already look complicated enough; I'm not sure I want to see the originals. Probably square roots sprinkled everywhere, filling up several pages with no discernible pattern. Well, OK, I'm not saying the current ones aren't full of square roots either, but at least those have a very direct connection with the Golden Ratio, which we know is a staple of pentagonal things, instead of complicated 5-level-nested 25-term expressions that give you a headache just to look at.
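For the curious, the "nice" coordinates alluded to here are well known: a unit-circumradius 600-cell has 120 vertices consisting of the 8 permutations of (0, 0, 0, ±1), the 16 points (±1/2, ±1/2, ±1/2, ±1/2), and 96 even permutations of (±φ, ±1, ±1/φ, 0)/2, where φ = (1+√5)/2 and 1/φ = φ−1. A quick check in Python (my own code):

```python
from itertools import permutations, product
from math import sqrt

phi = (1 + sqrt(5)) / 2

def even_permutations(seq):
    """Permutations reachable from seq by an even number of swaps."""
    out = []
    for p in permutations(range(len(seq))):
        inversions = sum(p[i] > p[j]
                         for i in range(len(p)) for j in range(i + 1, len(p)))
        if inversions % 2 == 0:
            out.append(tuple(seq[i] for i in p))
    return out

verts = set()
# 8 permutations of (0, 0, 0, +/-1)
for i in range(4):
    for s in (1.0, -1.0):
        v = [0.0] * 4
        v[i] = s
        verts.add(tuple(v))
# 16 vertices (+/-1/2, +/-1/2, +/-1/2, +/-1/2)
for signs in product((0.5, -0.5), repeat=4):
    verts.add(signs)
# 96 even permutations of (+/-phi, +/-1, +/-(phi-1), 0)/2
for signs in product((1, -1), repeat=3):
    base = (signs[0]*phi/2, signs[1]*0.5, signs[2]*(phi - 1)/2, 0.0)
    for p in even_permutations(base):
        verts.add(p)

print(len(verts))   # 120 vertices, every irrational in sight built from phi
```

Every coordinate is 0, ±1/2, ±1, ±φ/2, or ±(φ−1)/2---which is exactly the kind of pattern you can read without your eyes glazing over.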
It was due to my stubbornness in not just giving in to downloading coordinates that I discovered a truly marvellous way of getting extremely nice coordinates for the hypercube truncates in any dimension, just by reading off the Coxeter-Dynkin symbol. It's so simple, in fact, that I can rattle off coordinates off the top of my head, just by looking at the CD diagram. I'm certain that coordinates for the hypercubic truncates were already known a long time ago, but, as far as I can tell, their direct connection with the CD diagram wasn't known, or at least was never published.
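He doesn't spell out the reading rule here, so I won't guess at it, but the flavor is easy to illustrate with a known member of the family: the truncated cube (written x4x3o in one linear CD notation) has vertices at all permutations and sign changes of (1, 1+√2, 1+√2), and uniformity is easy to verify numerically (my own check):

```python
from itertools import permutations, product
from math import sqrt, isclose

a = 1 + sqrt(2)
verts = {tuple(s * c for s, c in zip(signs, p))
         for p in set(permutations((1.0, a, a)))
         for signs in product((1, -1), repeat=3)}
assert len(verts) == 24          # the truncated cube has 24 vertices

# collect all pairwise distances; the smallest should be the edge length,
# shared by every edge (that's what "uniform" buys you)
vlist = sorted(verts)
dists = sorted(sqrt(sum((x - y)**2 for x, y in zip(u, v)))
               for i, u in enumerate(vlist) for v in vlist[i + 1:])
edge = dists[0]
print(edge)                      # both edge types come out the same length
```

Both the octagon-octagon edges and the octagon-triangle edges come out at length 2, confirming that this single signed-permutation recipe really does give a uniform polyhedron.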
I also discovered a truly amazing pattern in the coordinates of the n-simplex, which is connected with the Greeks' triangular numbers of old. (Yeah, those Greeks were on to something, I tell ya!) It gives less-than-pleasant coordinates for the tetrahedron, but the tetrahedron is an exception: it just happens to be the alternation of the cube in 3D, which doesn't hold in other dimensions (including 2D, by the way). Not only that, I discovered a direct connection between the n-simplex truncates and the (n+1)-hypercubic truncates, and since the hypercubic truncates have coordinates that can be read directly off the CD symbol, I can get the n-simplex truncate coordinates directly as well---except in (n+1)-space instead of n-space. But using a particular kind of projection back into n dimensions, I found that this produces simplex truncate coordinates with a very nice orientation (for example, they are all upright, instead of lying along some strange arbitrary line!) which also happens to have nice symmetries around each coordinate axis.
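I don't know his exact triangular-number pattern, but the (n+1)-space embedding it builds on is standard: the regular n-simplex is just the permutations of (1, 0, ..., 0) in (n+1)-space, where regularity comes for free, and all the vertices lie in the hyperplane x₁+...+x_{n+1} = 1, ready to be projected back down to n dimensions. A sketch:

```python
from itertools import permutations
from math import sqrt, isclose

n = 3   # a regular 3-simplex (tetrahedron), embedded in 4-space
verts = sorted(set(permutations((1.0,) + (0.0,) * n)))
assert len(verts) == n + 1

# regularity for free: every pair of vertices is sqrt(2) apart
for i, u in enumerate(verts):
    for v in verts[i + 1:]:
        d = sqrt(sum((x - y)**2 for x, y in zip(u, v)))
        assert isclose(d, sqrt(2))

# every vertex satisfies x1+x2+x3+x4 = 1, so the whole simplex lives in a
# 3D hyperplane; subtracting the centroid centers it there
centroid = tuple(sum(c) / len(verts) for c in zip(*verts))
print(centroid)
```

The same trick extends to the simplex truncates: permutations (without sign changes) of a suitable (n+1)-vector, mirroring how the hypercubic truncates use signed permutations.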
And just very recently, while trying to find a way to compute "nice" coordinates for the omnitruncated 120-cell (well, more precisely, trying to find an algorithm that computes it from the 120-cell while guaranteeing uniformity), I kept getting irregular polytopes, or coordinates that gave the convex hull algorithm (which I didn't write, BTW; it's Komei Fukuda's cddlib code) such a hard time that it produced inconsistent results---until I discovered that the reason is related to how the ringed nodes in the CD diagram correspond to the face-expansion operation. That, in turn, led me directly to understand just why calling the great rhombicuboctahedron the "truncated cuboctahedron" is a complete misnomer, and what the proper derivations are.
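The misnomer is easy to see by computation: literally truncating a cuboctahedron vertex (the vertices are the signed permutations of (1, 1, 0)) cuts it with a quadrilateral whose two side lengths come from triangle faces and square faces respectively, and no truncation depth makes them equal---so the uniform great rhombicuboctahedron cannot arise from truncation alone. A small check (my own code; the vertex and its neighbours are one representative corner):

```python
from math import sqrt, isclose

# one vertex of the cuboctahedron and its four edge-neighbours
v = (1.0, 1.0, 0.0)
neighbours = [(1.0, 0.0, 1.0), (0.0, 1.0, 1.0),    # share a triangle with v
              (1.0, 0.0, -1.0), (0.0, 1.0, -1.0)]  # the other triangle

def cut(t, n):
    """Point a fraction t along the edge from v towards neighbour n."""
    return tuple(a + t * (b - a) for a, b in zip(v, n))

def dist(p, q):
    return sqrt(sum((a - b)**2 for a, b in zip(p, q)))

for t in (0.2, 1/3, 0.5):
    c = [cut(t, n) for n in neighbours]
    side_tri = dist(c[0], c[1])   # side bordering a triangle face: t*sqrt(2)
    side_sq = dist(c[0], c[2])    # side bordering a square face: 2*t
    print(t, side_tri, side_sq)   # a rectangle at every depth, never a square
    assert not isclose(side_tri, side_sq)
```

The cut face is always a rectangle with aspect ratio √2, independent of t---which is why the uniform solid needs the expansion operation (the ringed-node story above) rather than truncation.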
But anyway, I'm rambling again. The point is that this is really more for my own learning than anything else; I could've taken the easy way out, but then I would've missed a lot of the insights I picked up by attempting to solve the problem myself. I've learned all sorts of stuff along the way: some number theory, some interesting properties of algebraic numbers, field theory, group theory, graph theory, etc. In fact, I've discovered (well, probably rediscovered) a way of manipulating a certain class of irrational numbers using only integer arithmetic and constant storage space per number (i.e., not a full-fledged expression tree). Even though most people would laugh at me for "wasting" my time reinventing the wheel (recomputing known coordinates), I think I got a lot more out of it than if I had just used what was already there.
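I can only guess at the details, but for quadratic irrationals the idea presumably looks something like this sketch (my own code): represent numbers of the form a + b√5 as a pair of exact fractions, which is closed under +, −, ×, ÷---constant storage per number, integer arithmetic underneath, no expression tree:

```python
from fractions import Fraction as F

class Q5:
    """Numbers a + b*sqrt(5) with rational a, b: closed under +, -, *, /.
    Constant storage (two fractions), exact integer arithmetic throughout."""
    def __init__(self, a, b=0):
        self.a, self.b = F(a), F(b)
    def __add__(self, o): return Q5(self.a + o.a, self.b + o.b)
    def __sub__(self, o): return Q5(self.a - o.a, self.b - o.b)
    def __mul__(self, o):
        # (a + b*r)(c + d*r) = ac + 5bd + (ad + bc)*r, with r = sqrt(5)
        return Q5(self.a*o.a + 5*self.b*o.b, self.a*o.b + self.b*o.a)
    def __truediv__(self, o):
        # multiply by the conjugate: 1/(c + d*r) = (c - d*r)/(c^2 - 5d^2)
        n = o.a*o.a - 5*o.b*o.b
        return self * Q5(o.a/n, -o.b/n)
    def __eq__(self, o): return self.a == o.a and self.b == o.b
    def __repr__(self): return f"{self.a} + {self.b}*sqrt(5)"

phi = Q5(F(1, 2), F(1, 2))          # (1 + sqrt(5))/2, exactly
assert phi * phi == phi + Q5(1)     # the defining identity phi^2 = phi + 1
print(phi * phi)
```

No digits are ever lost, so golden-ratio-heavy coordinates can be pushed through arbitrarily long computations and still come out exact.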
And besides, I have a suspicion that some of the coordinates I have right now have never been computed before... recently I wrote a program that can recognize certain irrational numbers automatically, and I've discovered some interesting things about the coordinates of the 120-cell family polytopes. I may (and I emphasize may---it's only a remote possibility) be the first to have found closed algebraic forms for the coordinates of some of these polytopes. Armed with this combo of polytope dissector and irrational-number recognizer, I've been able to get algebraic forms of coordinates that would've been way too hard to compute by hand (and would probably take way too long in software like Matlab---with my approach, you compute the thing in floating point and then turn it back into algebraic form, as opposed to manipulating algebraic forms directly, which is very inefficient). In fact, with this number recognizer I can even take coordinates that have only 6 digits of precision, turn them back into their full algebraic form, and then recompute them to higher precision! It's the ultimate antidote against roundoff error: I don't care about roundoff error, because the number recognizer automatically "accuratizes" the numbers after the fact. Buahahahaha... oh, the irony... and I was the one who complained about existing coordinates being too inaccurate. Hahaha.
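A real recognizer is presumably built on an integer-relation algorithm like PSLQ or LLL; as a deliberately naive stand-in (my own toy, not his program), here's the core idea for quadratic irrationals: given only ~6 digits of a value, brute-force small integer coefficients of ax² + bx + c ≈ 0 to recover its minimal polynomial, from which the value can then be recomputed to any precision:

```python
def recognize_quadratic(x, max_coeff=10, tol=1e-4):
    """Naive stand-in for a real integer-relation algorithm (PSLQ/LLL):
    find small integers (a, b, c), a > 0, with a*x^2 + b*x + c ~ 0."""
    best = None
    for a in range(1, max_coeff + 1):
        for b in range(-max_coeff, max_coeff + 1):
            for c in range(-max_coeff, max_coeff + 1):
                err = abs(a*x*x + b*x + c)
                if err < tol and (best is None or err < best[1]):
                    best = ((a, b, c), err)
    return best[0] if best else None

# only six digits of the golden ratio survive, yet the minimal polynomial
# x^2 - x - 1 is recovered exactly, "accuratizing" the value after the fact
poly = recognize_quadratic(1.61803)
print(poly)
```

Once the polynomial is in hand, solving it in arbitrary precision regenerates as many correct digits as you like---which is the "antidote to roundoff" being described.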
OK, I'll stop now. I talk too much.