Well, you would use a particle node, a particle expression, and an instancer. That would cut your node count down to about 6-8 nodes, with an evaluation time somewhere around 0.001-0.01 seconds per frame (assuming you know how to write an expression; if you drive it with setAttr instead, then yes, it would slow down to something like a 30-second update).
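A rough sketch of that setup in MEL (all node and attribute names here are illustrative, not from the thread): one particle node, one per-particle expression, one instancer, and nothing touched via setAttr at runtime.

```mel
// Create a few particles and a custom per-particle attribute on the shape.
particle -n myParticles -p 0 0 0 -p 1 0 0 -p 2 0 0;
addAttr -ln "myScalePP" -dt vectorArray myParticlesShape;

// Runtime expression: evaluated per particle by the expression engine,
// no per-frame setAttr calls from a script.
dynExpression -s "myParticles.myScalePP = <<1,1,1>> * (1 + sin(time + myParticles.particleId));" -rbd myParticlesShape;

// Instance a prototype mesh on every particle, scaled by the PP attribute.
polySphere -n proto;
particleInstancer -addObject -object proto -scale myScalePP myParticlesShape;
```

Exact flags may vary between Maya versions; the point is the topology: a handful of nodes doing the work of thousands of transforms.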
That seems like it would be incredibly heavy on the scene.
Lol, I don't expect you to see the irony in this. Why do people assume a for loop is faster than 3,000 nodes plus 6,000 connections?! Connections are about as fast as computers get; they simply represent the data structure the computer actually runs. Now, having 1,000 nodes does indeed eat a bit of memory, but that's about it, and it's still infinitesimal compared to the size of textures and medium-sized meshes. (That's still not to say you should do this; you also need to choose the right nodes!) To get faster than connections you would have to write a node that somehow evaluates the connection graph faster inside itself, which is not a minor feat, by the way, for 0-10 times the performance at 10-100 times the work.* Connections also let Maya OPTIMIZE on the fly, skipping whatever does not need to update.
Having 3,000 utility nodes may be a bit impractical though; that's why you use particle nodes to drive things. At the end of the day you have no choice: EVERYTHING in Maya is just nodes and attributes. And indeed, setAttr is about 100 times slower than a connection.
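To make the setAttr-versus-connection contrast concrete, here is a hedged MEL sketch (node names like `cube` and `sum` are made up for illustration):

```mel
// Slow: an expression that pushes the value through the command engine
// with setAttr every frame. Thousands of these per frame will crawl.
expression -s "setAttr \"cube.translateY\" (sin(time));";

// Fast: wire it once. The dependency graph pulls the value only when
// the input is dirty; nothing re-evaluates unless it has to.
createNode plusMinusAverage -n sum;
connectAttr time1.outTime sum.input1D[0];
connectAttr sum.output1D cube.translateY;
```

The connected version does the same arithmetic, but Maya's dirty-propagation decides when it runs, which is exactly the on-the-fly optimization described above.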
Now, how fast are nodes? Calculating a weight is pretty much one plusMinusAverage node coupled with two multiplyDivide nodes (assuming you don't use a constraint node, which does the same thing in a single node but with more processing). This updates considerably faster than necessary. Each node registers as 0.0000 in dgtimer, i.e. below the timer's resolution, and even with the additional burden of updating the timer attributes, the whole calculation takes about 0.05 seconds. Testing this by connecting a sphere at the end shows it's not really all that slow; I can't even notice the slowdown in Maya. With a sphere connected to every second plusMinusAverage (for 100 nodes), playback is still above 30 fps.
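For reference, the dgtimer measurement described above goes roughly like this in MEL (the node name `weightSum` is hypothetical; check your version's dgtimer flags):

```mel
// Start collecting per-node evaluation timings.
dgtimer -on;

// Play back or scrub a few frames so the nodes actually evaluate.
play -wait;

// Stop and inspect. Cheap utility nodes often read as 0.0000,
// i.e. below the timer's resolution.
dgtimer -off;
dgtimer -query -name "weightSum";
```

Note that dgtimer itself adds overhead to every timed node, which is why the measured total comes out far higher than the real cost.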
Everything under 10,000 at O(n) is workable.
* Or a hundred times faster for 3,000 times the work; either way you'd STILL be using connections, there's no way around this.
** Transforms are slow; that's why you use the instancer. But if you want 1,000 actual transforms, then no amount of other work changes that overhead!
! After all, a loop that does these calculations is equally long once unrolled; that's what a computer does. It doesn't care how LONG the code looks to you, only how long it is when unrolled.