Howdy folks,
thanks for the answers to my previous Python questions. Now I have a question about execution speed: Python scripting versus the MB plug-in API.
I've got a model constrained by optical data; I am NOT using the MB actor->character setup. Regardless, once the model is constrained, the skeleton moves about properly as it's being driven by the mocap data. At that point we plot the rotations and translations onto the bones, save out the skeleton, and convert it to a proprietary format in Maya.
I'm writing a script that will make this process self-contained within MB. What I've noticed, however, is a tremendous amount of processing time when I try to read the data into memory after I've plotted it onto the bones. I've set up a data structure that is a list of lists, where each inner list contains the rotational (or translational) data for one bone across all frames, e.g.
[
[ Root(Tx,Ty,Tz).frame1, Root(Tx,Ty,Tz).frame2, ... ]
[ LeftLeg(Rx,Ry,Rz).frame1, LeftLeg(Rx, Ry, Rz).frame2, ...]
...
[ LeftHand(Rx,Ry,Rz).frame1, LeftHand(Rx,Ry,Rz).frame2, ... ]
]
Make sense so far?
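In plain Python, the layout I mean looks roughly like this. The bone names and the `sample` helper are just placeholders for illustration, not the real pyfbsdk calls:

```python
# Sketch of the list-of-lists layout described above.
# Each inner list holds one bone's per-frame channel values as
# (x, y, z) tuples; bone order matches the hierarchy traversal order.

def build_channel_table(bones, frame_count, sample):
    """bones: list of bone names; sample(bone, frame) -> (x, y, z).
    Returns one list per bone, each containing frame_count tuples."""
    return [[sample(bone, f) for f in range(frame_count)] for bone in bones]

# Toy stand-in for reading a channel off a bone at a given frame:
def fake_sample(bone, frame):
    return (float(frame), 0.0, 0.0)

table = build_channel_table(["Root", "LeftLeg", "LeftHand"], 4, fake_sample)
# table[0] is Root's per-frame data, table[1] is LeftLeg's, and so on.
```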
After plotting the data onto the bones, I run a piece of the script that recursively calls itself to plow through the hierarchy and store the data. It runs just fine, but the execution time is roughly 1 second per 100 frames. The problem is that this won't scale well once I load 15 takes of 9 seconds each.
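The recursive pass I'm describing is shaped roughly like this. The `Bone` class and its `rotation_at` accessor are stand-ins for the real MB objects, since the per-frame evaluation through the scripting layer is where I suspect the time goes:

```python
class Bone:
    """Minimal stand-in for an MB skeleton node."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def rotation_at(self, frame):
        # Placeholder: the real version would evaluate the bone's
        # plotted animation at this frame through the MB API.
        return (float(frame), float(frame), float(frame))

def collect(bone, frame_count, out):
    """Depth-first walk: appends one per-frame list for each bone."""
    out.append([bone.rotation_at(f) for f in range(frame_count)])
    for child in bone.children:
        collect(child, frame_count, out)
    return out

rig = Bone("Root", [Bone("LeftLeg", [Bone("LeftFoot")]), Bone("RightLeg")])
data = collect(rig, 3, [])
# One inner list per bone (4 bones here), 3 frames each.
```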
So the question is this: if I wrote a plug-in using the C++ API to do the same thing, would it run significantly faster? I'm assuming it would, since compiled C++ code avoids the overhead of feeding a script through the Python interpreter, but I'd like to know whether the savings are actually significant before committing to writing the plug-in.
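One thing that might be worth checking first is whether the cost is the Python loop itself or the per-frame evaluation calls, since only the latter would be helped much by moving to C++. A quick timing harness like this (with a stub in place of the API call) separates the two:

```python
import time

def evaluate_stub(frame):
    # Stand-in for a per-frame API evaluation; the real call's cost is
    # what determines whether a C++ plug-in would actually pay off.
    return (frame * 0.1, 0.0, 0.0)

def read_all(bone_count, frame_count):
    """Pure-Python version of the read loop, minus the API."""
    return [[evaluate_stub(f) for f in range(frame_count)]
            for _ in range(bone_count)]

start = time.perf_counter()
result = read_all(50, 1000)
elapsed = time.perf_counter() - start
# If this pure-Python loop is fast and the real script is slow, the
# bottleneck is the API evaluation, not the interpreter's loop overhead.
```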
Any thoughts?
Thanks!