i totally understand what you're saying, and couldn't agree more! i'm all for leaving things as open, core and bare-boned as possible - without any frilly bits that supposedly make tasks easier (which can end up being counter-productive, as you mentioned at the start). i'm _not_ saying they should write tools that magically handle the infinitude of obscure situations we dream up for their use, acting just as we dreamt they should.
i'm just questioning the basic dev pipeline, and how they test their tools before they ship. i'm talking about developing a tool against a suite of _generic examples_ it is most likely to encounter (not every human contingency!, but foreseeable cases such as this one), and ensuring the tool behaves consistently, as expected - not judged by somebody just sitting down and going "work!", but by a user who has spent enough time with the tool to have reasonable expectations of its behaviour. i think this is a basic criterion for a good tool. if the tool you're developing diverges from this, you either rethink a more robust implementation, or, if the implementation is sound but for a few quirks (nothing wrong with that - special cases are in the nature of maths), that's fine, _but you point it out to the user_ (you don't even have to explain why, but at least note it!!).
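to make concrete what i mean by a suite of generic examples: here's a toy sketch (my own made-up `resample_closed_curve`, standing in for whatever tool is under test - not any vendor's actual API). the idea is just that the foreseeable cases - a square, a circle, any closed loop - get checked for the consistency a reasonable user would expect:

```python
import math

def resample_closed_curve(points, n):
    """Resample a closed polyline to n points by arc length (toy stand-in)."""
    pts = points + [points[0]]          # close the loop explicitly
    dists = [0.0]                       # cumulative arc length
    for a, b in zip(pts, pts[1:]):
        dists.append(dists[-1] + math.dist(a, b))
    total = dists[-1]
    out = []
    for i in range(n):
        t = total * i / n               # target arc length for sample i
        j = 0
        while dists[j + 1] < t:         # find the segment containing t
            j += 1
        seg = dists[j + 1] - dists[j]
        u = 0.0 if seg == 0 else (t - dists[j]) / seg
        a, b = pts[j], pts[j + 1]
        out.append(tuple(ai + u * (bi - ai) for ai, bi in zip(a, b)))
    return out

# generic, foreseeable inputs: a square and a rough circle
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
circle = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
          for k in range(12)]

for curve in (square, circle):
    res = resample_closed_curve(curve, 16)
    assert len(res) == 16                          # predictable sample count
    assert math.dist(res[0], curve[0]) < 1e-9      # loop start is preserved
```

nothing fancy - the point is just that the expected behaviours are written down and checked before shipping, and any quirky case that fails gets documented rather than silently shipped.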
playing with soft and max, they appear to go for the 'animated sweep' approach - so they get the bulging the way you thought maya would. i'm having no problem with periodicity being preserved here (4.5).
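for what it's worth, the way i picture the 'animated sweep' idea can be sketched like this (a toy of my own, straight path only - definitely not how soft or max actually implement it): each section is the same closed profile scaled by an envelope along the path, so the bulge shows up while every ring stays a closed loop, which is why periodicity survives:

```python
import math

def animated_sweep(profile, path, scale_at):
    """Toy sweep: a scaled copy of the closed 2D profile at each path point.
    scale_at(t), t in [0, 1], is the animated envelope driving the bulge."""
    sections = []
    n = len(path) - 1
    for i, (px, py, pz) in enumerate(path):
        s = scale_at(i / n)
        ring = [(px + s * x, py + s * y, pz) for x, y in profile]
        sections.append(ring)
    return sections

# closed octagonal profile, straight path up z, fatter in the middle
profile = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8))
           for k in range(8)]
path = [(0.0, 0.0, float(z)) for z in range(5)]
bulge = lambda t: 1.0 + 0.5 * math.sin(math.pi * t)

surf = animated_sweep(profile, path, bulge)
# each ring is the same closed profile, only scaled - so the surface
# bulges without any seam opening up between ring[-1] and ring[0]
```

obviously the real tools handle path orientation, twist etc., but the closed-ring-per-section picture is the bit that matters for the periodicity question.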
i'm glad you asked for the file - it's good to make sure we're talking about the same thing (or at least get closer to it!). interested to hear what you figure out, as always!
it'd be nice to sort out what's really going on. anybody else got some ideas?
cheers.