Devin Coldewey is a Seattle-based writer and photographer. He has written for the TechCrunch network since 2007. Some posts he'd like you to read: the perils of externalization of knowledge | Generation I | Surveillant society | Select two | Frame war | User manifesto | Our great sin. His personal site is coldewey.cc.
Researchers at Cornell have been hard at work on a project that sounds strange at first, but is actually a natural extension of existing 3D technology. They are building an engine that produces the sounds of colliding objects: it simulates the materials of the objects in virtual space, then calculates the forces and vibrations that would result. Academically it's complex work, but it has many practical applications.
The most obvious application of simulated sound may be 3D games, which, despite nearly photorealistic models, textures, and lighting, still rely on a limited cache of pre-recorded sounds to play when, say, a table gets knocked over. By simulating each object on the table and tracing the physics of its collisions with other objects, and the consequences of those collisions in turn, a more realistic and accurate sound could be created on the fly. Or at least that's the theory.
The researchers acknowledge two obstacles. First, the physical world is in some cases simply too rich to simulate efficiently. A ball hitting the floor is one thing, with only a few factors to calculate, but what about a stack of dishes rattling against each other on a table that gets jostled? The number of contact points has to be reduced, so that thousands or millions of separate interactions don't all have to be tracked individually. At the same time, enough must remain to produce a realistic sound. That balancing act is determined by the number and type of objects and the processing power at hand.
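A common technique in this area is modal sound synthesis, which treats an impact as exciting a set of damped vibration modes in an object. The sketch below illustrates that general idea; the mode frequencies, damping values, and gains are invented for illustration and are not taken from the Cornell work.

```python
import math

def impact_sound(modes, impulse, duration=0.5, rate=44100):
    """Synthesize an impact as a sum of damped sinusoidal modes.

    modes: list of (frequency_hz, damping_per_sec, gain) tuples,
    each representing one resonant mode of the struck object.
    """
    n = int(duration * rate)
    samples = []
    for i in range(n):
        t = i / rate
        # each mode rings at its own frequency and decays at its own rate
        s = sum(gain * impulse * math.exp(-damp * t) * math.sin(2 * math.pi * f * t)
                for f, damp, gain in modes)
        samples.append(s)
    return samples

# hypothetical modes for a small ceramic object
plate_modes = [(820.0, 9.0, 1.0), (1310.0, 14.0, 0.6), (2750.0, 22.0, 0.3)]
sound = impact_sound(plate_modes, impulse=0.8)
```

In a scene like the dish stack, every retained contact point would contribute its own impulse to a sum like this, which is exactly why the number of contacts has to be culled to stay tractable.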
Second, it seems not everything can be synthesized completely from scratch just yet. Their SIGGRAPH demo includes the stack of dishes mentioned above, but flames, apparently, aren't so easy to soundtrack. They got part of the low-frequency component, but for the rest they had to base their models on recorded sounds and then "paint" them in underneath. That said, most frequently heard sounds are predictable in the same way physical interactions are (given that they are themselves the sum of physical reactions); it's just a matter of building the tools.
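That "painting" step can be thought of as blending a physically synthesized low-frequency layer with a pre-recorded layer that supplies the detail the simulation can't yet produce, a standard hybrid trick in sound design. A minimal sketch, with function and signal names of my own invention rather than the researchers':

```python
def paint_under(synth_low, recorded, mix=0.5):
    """Blend a synthesized low-frequency signal with a recorded layer.

    mix: 0.0 keeps only the synthesized signal, 1.0 only the recording.
    """
    n = min(len(synth_low), len(recorded))
    return [(1.0 - mix) * synth_low[i] + mix * recorded[i] for i in range(n)]

# tiny illustrative signals; real audio would be thousands of samples
blended = paint_under([0.2, 0.4, 0.1], [0.05, -0.3, 0.6], mix=0.5)
```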
Parallel processing hardware (such as graphics cards or multi-core CPUs) will be needed to make these calculations in real time, though: simulating the sound of the flames took hours of computation for just a short clip. But the idea should appeal to anyone who has heard the same "breaking glass" or "ricochet" noises over and over in games, or even movies, which draw on a limited catalogue of sounds.
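One reason this kind of synthesis maps well onto parallel hardware is that each output sample depends only on its own time index, so the work splits cleanly into independent chunks. A toy sketch of that chunking, where Python threads stand in for the GPU or multi-core distribution the article mentions, and all names are illustrative:

```python
import math
from concurrent.futures import ThreadPoolExecutor

RATE = 44100

def mode_sample(t, modes):
    # one audio sample: the sum of damped sinusoids at time t
    return sum(g * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
               for f, d, g in modes)

def synth_chunk(start, count, modes):
    # samples depend only on their own time, so chunks are independent
    return [mode_sample((start + i) / RATE, modes) for i in range(count)]

def parallel_synth(modes, n_samples, workers=4):
    chunk = n_samples // workers
    starts = [w * chunk for w in range(workers)]
    counts = [chunk] * (workers - 1) + [n_samples - starts[-1]]
    with ThreadPoolExecutor(workers) as pool:
        parts = pool.map(synth_chunk, starts, counts, [modes] * workers)
    return [s for part in parts for s in part]
```

Threads are shown only for simplicity; a real-time implementation would need true parallel execution on a GPU or across processes, since the per-sample math dominates.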
For now it's still in the lab, but it's definitely the kind of thing that gets turned into a product and sold. Companies like Nvidia and Havok would love to get their hands on it. Unfortunately there aren't any videos yet, but if one becomes available once it's shown at SIGGRAPH, we'll post it here.