Hello everyone,
Today I published a new UI for Blender for our beloved Bodynodes sensors. Here is the link to the GitHub repository:
Along with the repo we also made a few YouTube videos explaining and showing what can be accomplished:
Blender UI Introduction - 1: https://www.youtube.com/watch?v=stgBOEd9ngc
Blender animation #1: https://www.youtube.com/watch?v=MwjpmM8pkQM
Blender animation #2: https://www.youtube.com/watch?v=hXeTYtePf1c
So, what is this all about? After having built the sensors I wanted to use them somehow, and I decided to concentrate on animations.
Many years ago I wanted to create some cool fighting animations in Blender, and I remember that animating just a few seconds would take hours. The reason is simple: a second is generally composed of 25/30 frames, meaning that 4 seconds (let's say) is already at least 100 frames to animate. Consider that a normal human character is composed of at least 11 bodyparts to move, and all bodyparts have to change in each frame. You could skip frames and try to rough out the animation, but the result is usually very disappointing. In fact, the only way to get good results is by adjusting each frame. And to make things more difficult and time consuming, you only get a sense of the movement after 5-10 frames, meaning that you have to go back many times.
With all of these hurdles in mind I thought to myself: what if I could just track my own movement and use that!
This is where our Bodynodes sensors come into play. They capture the movement directly from the real world, and they do it fast. The captured movement is in fact always realistic. But other problems obviously pop up, and that's what the UI I made helps with.
I am not expecting the UI to be broadly used any time soon, because the only way to use it is to have the physical sensors. Right now, I am probably the only person who has them. This is not a problem for me, because I believe that hobbyists will eventually make Bodynodes sensors themselves, or buy them from me if they prove to work great. So for now it is all about making them work great at capturing animations. I will make my own animations and maybe sell something in the Unity Store; Blender armature movements can easily be translated into Unity resources.
So there it is, that's the direction the project is going in 2020: let's animate.
At this point you might be somewhat convinced that this is interesting and you are asking yourself: what is the UI? Here is a picture:
I developed different sets of functionalities to deal with data coming from outside. This is not something Blender is designed for, but I managed to work around how the tool operates. For example, data arrives asynchronously (meaning at any time, like a sort of interrupt) while Blender is basically single-threaded (meaning it has to read data at a particular point in the program). You can solve this with an intermediate data structure that holds the async data coming from the outside and lets Blender read from it whenever it wants.
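To give a rough idea of that pattern (this is a minimal sketch, not the actual Bodynodes code; it assumes the active object is the armature and that each reading is a bodypart name plus a w,x,y,z quaternion), a network thread pushes readings into a thread-safe queue and a function registered with Blender's timer API drains it on the main thread:

import queue

import bpy
from mathutils import Quaternion

# Thread-safe buffer: the network thread writes, Blender reads.
sensor_queue = queue.Queue()

def drain_sensor_queue():
    # Runs on Blender's main thread thanks to bpy.app.timers.
    while not sensor_queue.empty():
        bodypart, quat = sensor_queue.get_nowait()
        bone = bpy.context.object.pose.bones.get(bodypart)
        if bone is not None:
            bone.rotation_mode = 'QUATERNION'
            bone.rotation_quaternion = Quaternion(quat)
    return 0.02  # ask Blender to call this again in ~20 ms

bpy.app.timers.register(drain_sensor_queue)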
The data is supposed to come from the sensors connected via WiFi to the local hotspot of your PC. A local server on the PC is required to receive the data, and that's what the Start/Stop buttons of the server are for. They let you start and stop a local server.
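Just to illustrate what "local server" means here (a hypothetical sketch: the real Bodynodes protocol, port and message format may differ), a background thread can listen on a socket and feed the sensor_queue from the previous snippet, while the Start/Stop buttons start the thread and set the stop event:

import json
import socket
import threading

stop_event = threading.Event()

def server_loop(host="0.0.0.0", port=12345):
    # Hypothetical receiver: one JSON message per UDP packet,
    # e.g. {"bodypart": "lowerarm_left", "quaternion": [w, x, y, z]}
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.settimeout(0.5)
    while not stop_event.is_set():
        try:
            data, _ = sock.recvfrom(1024)
        except socket.timeout:
            continue
        msg = json.loads(data.decode("utf-8"))
        sensor_queue.put((msg["bodypart"], msg["quaternion"]))
    sock.close()

# "Start Server" starts the thread, "Stop Server" sets stop_event.
server_thread = threading.Thread(target=server_loop, daemon=True)
server_thread.start()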
There is a "Select player" button that lets you decide which character in your animation the movement data will interact with. It is a pretty natural functionality for any standard project with multiple players interacting with each other.
The whole animation section is full of different functionalities and helpers to collect the data and deal with it. You can record up to 3 different movement sessions (Take1, Take2, and Take3) and select the one you prefer.
But it is almost impossible to get the perfect take, and that's why you might want to change the animation ("Start Change" and "End Change"). Specifically, these let you capture the movement differences you apply to the bodyparts and automatically propagate them to all the following frames.
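In quaternion terms (my own sketch of how such a propagation can work, not necessarily what the script does internally), the change is the rotation difference between the edited pose and the original pose at the change frame, and it is pre-multiplied onto every later frame:

import bpy

def propagate_change(bone_name, change_frame, last_frame, original_quat):
    # original_quat is the bone's orientation at change_frame before the edit;
    # the scene is assumed to be on change_frame with the edit applied.
    obj = bpy.context.object
    bone = obj.pose.bones[bone_name]
    delta = bone.rotation_quaternion @ original_quat.inverted()
    for frame in range(change_frame + 1, last_frame + 1):
        bpy.context.scene.frame_set(frame)  # evaluates the recorded pose
        bone.rotation_quaternion = delta @ bone.rotation_quaternion
        bone.keyframe_insert(data_path="rotation_quaternion", frame=frame)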
And as a cherry on top of all this, the "Apply Walk Ref" and "Apply Walk Auto" buttons recreate the movement of the full character in the XY-plane by using the movements of the feet. The idea is to negate any movement in the reference bone by applying the same change, negated, to the main bone of the character, making it all move together.
"Apply Walk Ref" uses the bone you selected as the reference point, while "Apply Walk Auto" tries to figure out by itself which foot is the reference at any given moment. A rough sketch of the idea is below.
Then we have the Tracking section, where it is possible to Disable/Enable the tracking. It temporarily disconnects/connects the data from/to the character. It is very helpful when you are collecting the data and have your sensors on you. You want to be able to check your animations without the sensors moving the character while you are seated in front of your PC; otherwise the character would sit down with you. So after you take a recording, you disable the tracking and check the resulting animation.
There are a few buttons to Save/Load/Reset the puppet orientation. Very useful if you have to start from a specific position of the character. You pose the character and then "Save" the puppet orientation. After you have recorded and moved the character, you can get back to the initial position by clicking "Load". "Reset" sets the character to the typical T position.
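Conceptually (again just a minimal sketch of the idea, assuming the armature's rest pose is the T position), saving and loading the puppet orientation boils down to storing each pose bone's quaternion in a dictionary and writing it back later:

import bpy
from mathutils import Quaternion

saved_pose = {}

def save_puppet():
    for bone in bpy.context.object.pose.bones:
        saved_pose[bone.name] = bone.rotation_quaternion.copy()

def load_puppet():
    for name, quat in saved_pose.items():
        bpy.context.object.pose.bones[name].rotation_quaternion = quat.copy()

def reset_puppet():
    # Identity rotations bring the armature back to its rest pose
    # (the T position, if that is how the armature was rigged).
    for bone in bpy.context.object.pose.bones:
        bone.rotation_quaternion = Quaternion((1.0, 0.0, 0.0, 0.0))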
The Axis settings are very useful if you develop your own bodynodes and you don't know the orientations. They will probably be different from the ones I set up in the scripts for my sensors. With this functionality you can reassign the W, X, Y, Z quaternion values coming from the sensors to the W, X, Y, Z quaternion orientation of each bone. You then have to change the script accordingly, otherwise you will have to go through the process every time you start the script.
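The remapping itself can be thought of as a per-bone permutation plus sign flips of the incoming quaternion components. A hypothetical example (the actual mapping for your sensors is exactly what this UI helps you discover):

from mathutils import Quaternion

# Hypothetical axis configuration for one bone: which incoming component
# (index into [w, x, y, z]) goes where, and with which sign.
axis_map = {"indices": (0, 2, 1, 3), "signs": (1, -1, 1, 1)}

def remap_quaternion(sensor_quat, conf):
    # Reorder and flip the sensor's (w, x, y, z) to match the bone's axes.
    return Quaternion([conf["signs"][i] * sensor_quat[conf["indices"][i]]
                       for i in range(4)])

# e.g. remap_quaternion((0.7, 0.0, 0.7, 0.0), axis_map)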
Last but not least, there is the "Close" button. It simply closes the UI and cleans up all the local data. It also stops the animations and the server (in case they are running). It is very useful if you are a developer like me and want to change the script: you run it, find a problem, correct the problem, close the UI, and then run it again.
That's all! I hope you like what I did and maybe want to get involved somehow. Thanks for reading this far, it is very much appreciated.
See you all, Manuel