So at the end of last month, we took our game to the Utah Indie game night, which this time was hosted at UVU. We got a lot of good feedback. The most common critique was the camera (the second most common was the difficulty ramp): people were getting unnecessarily disoriented, and the track kept blocking their view of the ball.
Since then, I’ve been doing quite a bit of work on those issues. First of all, I’ve put restrictions on the camera so the horizon is always in view, since the horizon is the biggest visual cue as to where exactly “down” is. I think that, combined with redoing the World 1 background, should make orientation a whole lot clearer.
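For the curious, here’s a minimal sketch of what a horizon constraint like that can look like in Unity. This isn’t the game’s actual camera code; the pitch limits and field names are just illustrative guesses.

```csharp
using UnityEngine;

// Sketch: clamp the camera's pitch so it can never tilt far enough
// to lose sight of the horizon. "maxPitchDown"/"maxPitchUp" are
// hypothetical tuning values, not numbers from the actual game.
public class HorizonClampCamera : MonoBehaviour
{
    [SerializeField] float maxPitchDown = 35f; // degrees below horizontal
    [SerializeField] float maxPitchUp = 60f;   // degrees above horizontal

    void LateUpdate()
    {
        Vector3 euler = transform.eulerAngles;

        // Convert Unity's 0..360 pitch into a signed -180..180 range.
        float pitch = euler.x > 180f ? euler.x - 360f : euler.x;

        // In Unity, positive pitch looks down, negative looks up.
        pitch = Mathf.Clamp(pitch, -maxPitchUp, maxPitchDown);

        transform.rotation = Quaternion.Euler(pitch, euler.y, 0f);
    }
}
```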
Regarding obstructions, I wasn’t quite sure what to do about them at first because, as I had explained to other attendees, the shaders I used didn’t support transparency. I can’t believe it didn’t occur to me sooner, but the obvious solution is to change the material… duh. The end algorithm is something like this. We have a camera, a ball, a list of track materials, and a corresponding list of transparent track materials. Then, every frame:

1. Do three raycasts from the camera to different parts of the ball (we want a certain amount of overlap before transparency occurs).
2. Filter all the targets hit to keep only the ones hit by every ray.
3. Dump any hits that don’t use certain materials. This mostly filters out the rails, which caused flashing if the camera just went past them for a moment.
4. Change the material on any objects that were hit this frame but not last frame, and add objects that are no longer hit but are still transparent to a separate list.
5. Decrease the alpha on the obstructing objects and increase it on the others, then swap fully faded-in materials back to the originals.
6. Update the lists, and rinse and repeat.
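Here’s a condensed sketch of that loop in Unity C#. It assumes the opaque and transparent material arrays are matched by index and that the transparent shader respects the material’s color alpha; all the names (fadeSpeed, minAlpha, and so on) are my own illustrations, not the game’s actual code.

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Sketch of the obstruction-fade pass described above (illustrative only).
public class ObstructionFader : MonoBehaviour
{
    [SerializeField] Transform ball;
    [SerializeField] Material[] trackMaterials;        // opaque originals
    [SerializeField] Material[] transparentMaterials;  // index-matched fade versions
    [SerializeField] float fadeSpeed = 4f;
    [SerializeField] float minAlpha = 0.25f;

    // Renderers currently faded (or fading), mapped to their material index.
    readonly Dictionary<Renderer, int> faded = new Dictionary<Renderer, int>();

    void LateUpdate()
    {
        // 1-2. Cast three rays at different parts of the ball; only renderers
        // hit by all three count, so a sliver of overlap doesn't trigger a fade.
        Vector3 origin = transform.position;
        HashSet<Renderer> obstructors = null;
        foreach (float yOffset in new[] { -0.5f, 0f, 0.5f })
        {
            Vector3 target = ball.position + Vector3.up * yOffset;
            var hits = new HashSet<Renderer>(
                Physics.RaycastAll(origin, target - origin, (target - origin).magnitude)
                       .Select(h => h.collider.GetComponent<Renderer>())
                       .Where(r => r != null));
            if (obstructors == null) obstructors = hits;
            else obstructors.IntersectWith(hits);
        }

        // 3. Keep only whitelisted track materials; rails and such get
        // dropped here so they can't flash as the camera brushes past.
        obstructors.RemoveWhere(r =>
            System.Array.IndexOf(trackMaterials, r.sharedMaterial) < 0 &&
            !faded.ContainsKey(r));

        // 4. Swap newly obstructing renderers to a per-renderer copy of the
        // transparent material, so each one can fade independently.
        foreach (Renderer r in obstructors)
        {
            if (faded.ContainsKey(r)) continue;
            int i = System.Array.IndexOf(trackMaterials, r.sharedMaterial);
            faded[r] = i;
            r.material = new Material(transparentMaterials[i]);
        }

        // 5-6. Fade obstructed renderers out and everything else back in;
        // once a renderer is fully opaque again, restore its original material.
        foreach (var pair in faded.ToList())
        {
            Renderer r = pair.Key;
            Color c = r.material.color;
            bool blocked = obstructors.Contains(r);
            c.a = Mathf.MoveTowards(c.a, blocked ? minAlpha : 1f, fadeSpeed * Time.deltaTime);
            r.material.color = c;
            if (!blocked && c.a >= 1f)
            {
                r.sharedMaterial = trackMaterials[pair.Value];
                faded.Remove(r);
            }
        }
    }
}
```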
Of course, it took a while to get there, and there were certainly some weird things along the way, such as the fact that querying Renderer.sharedMaterials returns a copy of the array, not the original. Needless to say, that caused some head-scratchin’ problems.
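In case anyone else hits the same wall, here’s the gotcha in miniature. The “replacement” material is just a hypothetical stand-in:

```csharp
using UnityEngine;

public class SwapFirstMaterial : MonoBehaviour
{
    // "replacement" is a hypothetical material assigned in the Inspector.
    [SerializeField] Material replacement;

    void Start()
    {
        Renderer rend = GetComponent<Renderer>();

        // Gotcha: sharedMaterials returns a *copy* of the array,
        // so this line alone changes nothing on screen...
        Material[] mats = rend.sharedMaterials;
        mats[0] = replacement;

        // ...you have to assign the whole array back for the change to stick.
        rend.sharedMaterials = mats;
    }
}
```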