ac2000 wrote:quickfur wrote:But yes, I do use povray. I have a tool for computing coordinates, which generates a 4D definition file containing all the information about the vertices, edges, ridges (2D faces), and cells. This file is then fed to another program that does the computations for projecting the object into 3D based on a given viewpoint. This projector program also lets me assign textures to various elements of the polytopes, to highlight/hide certain parts of the object for clarity's sake, etc. Once the projection is done, it writes the 3D model of the result into a povray scene file that can be raytraced from any camera angle (specified externally by a .pov template file).
Aha, OK, so even three programs are needed for these images. I actually thought it would be easier, because POV-Ray is usually so versatile when dealing with all kinds of formulas and functions (at least from what I've seen of the more experienced POV-ers).
Povray is indeed very versatile in dealing with all kinds of math stuff. I'm not that well-versed with its more advanced features though; I use it mainly because it saves me the hassle of writing my own 3D->2D projector. I'm sure more experienced POV-ers would take a different approach; in fact, I know some on this forum have directly coded 4D->3D projections within povray itself.
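For readers curious what such a projection involves, here is a minimal sketch of a 4D->3D perspective projection. This is not quickfur's actual projector program (which also handles arbitrary camera orientation and hidden-surface culling); it assumes the simplest possible setup, with a hypothetical 4D camera sitting on the positive w-axis at distance `cam_w` from the origin, looking toward -w:

```cpp
#include <array>

// A 4D point and its 3D image under perspective projection.
using Vec4 = std::array<double, 4>;
using Vec3 = std::array<double, 3>;

// Project a 4D point into 3D, assuming the camera is at (0,0,0,cam_w)
// looking along -w.  Points farther from the camera along w shrink
// toward the centre of the 3D image space, just as distant objects
// shrink in an ordinary 3D->2D perspective render.
Vec3 project4to3(const Vec4& p, double cam_w) {
    double depth = cam_w - p[3];   // distance from camera along w
    double scale = cam_w / depth;  // perspective divide (assumed focal scaling)
    return { p[0] * scale, p[1] * scale, p[2] * scale };
}
```

Applied to the vertices of a tesseract with coordinates (±1, ±1, ±1, ±1) and cam_w = 3, the "near" cell at w = 1 projects to a larger cube than the "far" cell at w = -1, which is the familiar cube-within-a-cube image of the tesseract.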
I chose to write separate programs for it because povray vectors are limited to 5D (or was it 6D?), but I wanted a general approach that would work with objects of any dimension. Furthermore, I wanted to deal with things like hidden surface culling, free 4D camera placement/orientation, etc., all of which are no doubt possible in povray but slightly inconvenient, because a large part of its vector-manipulating functions assume that one is dealing with 3D vectors. In retrospect, the vector size limitation isn't really that big of a problem, since one could use arrays to represent larger vectors. But one consideration still counts against it: performance. A C++ program specifically crafted to do what I need is significantly faster than the povray input language, which is interpreted.

Performance is an important consideration when dealing with higher-dimensional objects, because of combinatorial explosion: an icosahedron, for example, has only 60-odd elements (12 vertices, 30 edges, 20 faces), but its 4D counterpart, the 600-cell, has 120 vertices, 720 edges, 1200 faces, and 600 cells. When you go up to even higher dimensions, the number of elements grows exponentially: an n-dimensional cube has 2^n vertices alone, not counting its other elements, whose numbers also increase very quickly. With such numerous elements, maximal performance is called for, lest one have to endure unnecessarily long waits.
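The growth is easy to tabulate for the n-cube using the standard formula that it has C(n,k) * 2^(n-k) faces of dimension k (for k = 0 this gives the 2^n vertices mentioned above). This is just a textbook counting formula, not code from the projector tool itself:

```cpp
#include <cstdint>

// Binomial coefficient C(n,k), computed with integer-exact
// multiply-then-divide at each step.
std::uint64_t binom(unsigned n, unsigned k) {
    std::uint64_t r = 1;
    for (unsigned i = 0; i < k; ++i)
        r = r * (n - i) / (i + 1);
    return r;
}

// Number of k-dimensional faces of the n-dimensional cube:
// choose which k coordinates vary freely, then fix the remaining
// n-k coordinates at either endpoint.
std::uint64_t ncube_faces(unsigned n, unsigned k) {
    return binom(n, k) << (n - k);   // C(n,k) * 2^(n-k)
}
```

For the familiar cube (n = 3) this gives 8 vertices, 12 edges, and 6 faces; for the tesseract (n = 4), 16 vertices, 32 edges, 24 faces, and 8 cells; and already by n = 10 one is dealing with 1024 vertices and 5120 edges, which is where the performance concern bites.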
[...] I'm glad you like the idea. Actually I made a mistake above when describing the idea as "in a way that all the voxels of a 3d retina are filling the whole field of view". What I meant was "part of the voxels/voxel array of a 3d retina are filling the whole field of view". I don't know exactly which part that would be (maybe half of the voxel array, to represent something like a 180-degree 2d viewing angle). I guess putting the whole voxel array into one image would look too messy.
But I was thinking of placing the camera inside, yes. Maybe experimenting with some fish_eye or ultra_wide_angle camera or something like that. And maybe some colour cues (i.e. one colour for the volume, with different light shading for distance within the volume; a different colour for the edges; and yet another colour (or maybe the edge colour but darker) for the vertices).
Hmm. I shall have to think more carefully about this. I'm unfamiliar with fish_eye or ultra_wide_angle, but I suppose they are just ways in which one can render a larger panorama of one's surroundings from a single viewpoint. As for shading, I shall have to consider how best to do it, because obviously the volumes cannot be rendered opaquely; perhaps one would want some kind of fog effect to give a sense of depth when looking into the volume? I'm unsure about what works best at this point. Edges and faces I will leave as-is, since our brains do still use them as guideposts in inferring the shapes of 3D volumes.