Wednesday January 24th
Main briefing for today's lesson:
This assignment focused on developing skills within Adobe After Effects and Adobe Illustrator, working towards building animated visuals for a music piece. The music was to be created by students on the music course here at Ravensbourne University. I'm very excited about this project, as I already have a lot of experience with sound design.
Wednesday January 31st
Our test for today's lesson was to animate a cube, either on a flat surface or on uneven ground, to test our understanding of ease in/outs, momentum, weight, and perhaps squash and stretch, and how they apply in a simple 2D animation like this. Our tutor for this lesson was absent, so we followed tutorials he had provided to practise what we would otherwise have been told directly.
We were first shown how to animate a cube rolling across a flat surface, practising our underlying understanding of pivot points, rotations and translations within After Effects.
We were then shown, in the next video, how the cube would roll down a hill. This second task, I feel, helps us practise more nuanced aspects such as momentum and how the cube may speed up or slow down over time.
Here are some experiments with After Effects that I have done to help get better at 2D animation for preparation for my music assignment where I will be creating animated visuals for a music piece created by the music students at Ravensbourne.
I felt that with this experimental task I successfully gave a realistic feel to an animated object.
By carefully manipulating the tangents of my splines, I created very smooth ease-in and ease-out animations.
The solid cube, I felt, gave the illusion of the heavy metal cube I imagined by having very slow ease-in and ease-out transitions.
The light cube had very realistic and airy movements: by alleviating the look of gravity and constructing the splines to be raised from the ground more often, I demonstrated more air time, as a lighter object would have in reality.
I pushed my skills further by establishing this piece in 60fps.
This requires additional care with the placement of keyframes and the angles of tangents, as well as the timing between keyframes.
I felt this choice and extra effort have resulted in a much more fluid and lifelike animation.
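The ease-in/ease-out idea above can be sketched numerically. This is a generic cubic easing curve, not After Effects' actual interpolation maths (which depends on the tangent handles you set); it just shows how spacing between samples compresses at the start and end of a move, which is what reads as weight:

```python
def ease_in_out_cubic(t: float) -> float:
    """Cubic ease-in/out: slow start, fast middle, slow stop (t in [0, 1])."""
    if t < 0.5:
        return 4 * t ** 3
    return 1 - ((-2 * t + 2) ** 3) / 2

# Sample the curve at even time steps: the gap between successive values
# shows the object accelerating, then decelerating, instead of moving linearly.
samples = [round(ease_in_out_cubic(i / 10), 3) for i in range(11)]
```

A "heavier" object would use an even steeper ease (more frames spent accelerating), while a lighter one spends more of its time near the top of its arcs.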
Wednesday February 7th
This lesson involved a new tutor, who this time showed us how to design simple visuals, shapes and forms within Adobe Illustrator, and how to manipulate shapes around pivot points to position and align them for our desired look. I noticed many similarities between the pivot points in After Effects and Illustrator, and even how they compare to the pivot point found in Maya: each has a separate location point to pivot, scale and translate around.
After querying this with my tutor, he made it very clear that pivot points in Illustrator are just for getting a final set position of the image on your page, which differs from After Effects and Maya, where the pivot is primarily used for animation around an axis.
Creating assets in Illustrator to import into After Effects is a common process, so we were made aware of cautions around how we create our layers in Illustrator. For example, if we want to be able to move each finger of a hand individually within the animation, we need to create the fingers as individual layers in Illustrator: objects exported from Illustrator often come out as unmodifiable planes that can only have simple modifiers applied, so the fingers can't move individually unless they are exported as separate layers.
We were also shown an excellent website for generating complementary or contrasting colour palettes, which is very useful for simple shape colouring in our music animation project:
Coolors.co. (n.d.). Coolors.co. [online] Available at: https://coolors.co/fce762-fffded-ffb17a-4f4789-201335 [Accessed 10, 2024].
Here is some experimentation and the requested work from the tutorials we followed:
Here, I felt I efficiently and accurately replicated the shapes we were given. The only shape that took extra time was the heart, as there are several ways to approach this simple shape: either using a square, keeping one corner as the bottom point of the heart and warping the other corners in to form the bends, or creating two semi-circles and placing them on top of a triangle. For my attempt I went with the latter, to practise joining shapes. This resulted in a few pixels not aligning with the original heart, but gave slightly slimmer and more rounded edges, which I felt fits the heart well and replicates the given image to a good standard.
I did not replicate each colour exactly as it was given in the tutorial, because I wanted to make it clear that I had remade each shape: when I cloned every colour with the eyedropper tool, it looked as if I had simply traced an outline around all the shapes with the pen tool. This re-adjustment draws attention to how accurately the shapes themselves were recreated.
In class, the tutor demonstrated how we could recreate the Ravensbourne logo easily in Illustrator and move assets over to After Effects. I decided to follow these steps by animating a simple alien running quickly across the screen. This is the second experiment I have done: it started in Illustrator to create the individual shapes, then moved over to After Effects, to see how quickly I could animate simple shapes into simple animated visuals. This very rough test project demonstrates good knowledge of efficient parenting in After Effects and of clean design in Illustrator.
While working on a simple shape in Illustrator for my PLP project, I encountered some challenges that initially made me reconsider using this software as my source of imagery. The issue primarily revolved around the on-screen appearance of what I was creating; it seemed to be a visual discrepancy rather than an actual problem with the vertices. In this instance, with the star shape I designed shown on the right, everything was meticulously aligned, and the vertices connected seamlessly at each point. However, visually, the lines didn't converge as expected. It appeared as though they were almost aligning perfectly, yet something was amiss, and I couldn't pinpoint why. Theoretically, all lines should meet at a singular central point, forming a perfect intersection, but that wasn't the case. Despite precisely connecting each line to the vertices, the points at the edges and the central cross-section didn't align perfectly, leaving the overall visual accuracy wanting.
This experience, especially with seemingly straightforward tasks, momentarily discouraged me from delving deeper into more complex designs on this platform. I recognize that many software solutions offer workarounds for such issues, and I remain open to exploring Illustrator further. I believe overcoming these initial hurdles could pave the way for a more rewarding creative process.
Wednesday February 14th
Today was a good day. I was anxious starting this project due to the vast range of genres the music students could make, but I was very pleasantly surprised with what they produced, as well as the pretty good explanations of their pieces, with words, feelings and ideas given alongside the music.
I chose to listen to the tracks before looking at the words and notes they gave, to establish my own interpretation of the music before being affected by other ideas. Trying to see what I felt from the music, I was pleasantly surprised by the number of songs that were primarily instrumental; I find I prioritise a good melody or beat over lyrics. In this case especially, it lets the animators (us) form our own views before certain words are put in our heads by the lyrics: a sad song with sad lyrics makes us confirm to ourselves that the song is sad, but without the lyrics, who knows, someone might interpret it as euphoric, boring, empty, or just too slow.
Our first musician was Alfred,
who had created some very nice elevator music and game music. When I first heard the elevator music, the words that came to mind were more "limbo music", as in a state of pause or being on hold, which I felt mirrored his interpretation and intention very closely; I can see how this type of music would fit those scenarios. I was also very fond of his video-game-style tracks: as a player of the Doom and Crash Bandicoot games as a child, I was incredibly pleased to hear his take on these types of music. Instantly, it brought me back to my childhood in Wood Green, when I was around 7 or 8, getting ready for a game of Crash Bandicoot on my PS1, sitting in front of the fireplace we had at the time, or to when I first got Doom on my first PC with its Intel Core 2 Duo (a very outdated chip). The computer I had back then had terrible speakers, and listening to aspects of this music again makes me reminisce about those earlier times. Although I appreciate the retro tracks, Alfred's new interpretations brought these feelings to another level, almost as if I could see his pieces being incorporated into higher-resolution, newer versions of these games if they needed music. His music had more thumping kicks and a much more defined tone, and after each 16 bars there were slight variations that kept the listener interested in the repetitive game music.
Our second musician was Adam,
and oh my, had he created quite a variety of different genres, or as I like to say, tempos. He had songs that stuck out to me as super polished house tracks, all the way to experimental trance; he seemed to use tempos around the 100–150 mark and had a very nice inclusion of voices that, although they had words, weren't entirely clear, so still left interpretation to the listener. One vocal house track he made took me to another place and brought me back to many clubs I had been to before. The clarity and polish on his house track called "energetic-1" showed a stunning level of production and mastering; it sounded so clean and professional.
I felt he also demonstrated huge creativity and initiative in his work, with a clever merge of Egyptian flutes and trance drums and basses. He named this trance track "Egyptian rave". Although I've never heard a combination and arrangement of sounds similar to this before, I find it really unique how he has executed it, cutting out and alternating the sounds every 16 to 32 bars, which I also find very atmospheric through the use of delays and repetition.
Very nice euphoric sounds and experimental ideas
Our third musician was Prince,
I felt he made some very nice foley and atmospheric sounds. Not only were we gifted with various animalistic and natural sounds, but we were also given tracks with incredible spatial sound and atmospheric depth. One of his tracks, titled IDEA_2, was very interesting for its heavy use of animal sounds, such as elephants and some kind of trumpet; to me this track resembled anything from Disney's The Lion King to Moana or The Jungle Book. It had a very summertime feel and also suggested ideas of rebirth, nature and the circle of life, mostly because one of the riffs sounded closely similar to the Lion King scene where the baby lion is held up on the cliff and celebrated.
Other of his tracks included very neat, funky lofi-type beats; one example is Idea_7, which gave me a very retro lofi hip-hop feel, likely due to the intentional detuning of notes and instruments within the layers of the track, and the spatial placement of sounds. Before he even talked about his pieces, I had already commented on how spacey one of his tracks sounded; he later mentioned that he spent time making the tracks pan from left to right to help with immersion and create a more surrounding sound, which I feel takes the listener right into the middle of the environment creating the sounds.
Wednesday February 28th
I really enjoyed this lesson, as I enjoy using most digital design applications, and this was a more in-depth lesson with Adobe After Effects and Adobe Illustrator.
I decided that I would not only follow the tutorial in class but also recreate one of the demo scenes given to test and learn how one would go about creating a simple 2D-style scene to help me understand the steps better if I were to create 3D assets for my music video animation.
The scene I chose to recreate was the nighttime scene with a moon.
We were asked towards the end of the lesson to create an inspiration or idea board that we might use to help us plan our music projects, such as colour, shapes, objects etc. My inspiration is shown further down this page on my Presentation.
This is where I started creating the shapes for the nighttime scene, experimenting with the shape builder tool and using triangles to cut into a bigger triangle to simulate different layers of leaves of the trees.
Here is my final recreation of the nighttime scene that was shown as a tutorial example. I enjoyed recreating this scene and it helped solidify the practice of creating scenes in Illustrator and using the tools.
Here is some initial experimentation I did in Maya, testing ideas for the kind of uplifting, euphoric, energetic vocal vibe of the track, to see which best fits this music and animation project.
Similar to my testing in Adobe Illustrator and Adobe After Effects, I am experimenting with simple shapes, colors, forms, and composition.
I have used transparent materials to represent beautiful diamond-like objects, which I think helps simulate a lush, delicate, valuable vibe, complementing the beautiful vocals and uplifting beat.
The soft light pink, shimmering white glass, and purple-coloured portals create a unique colour palette, which I feel also adds to this effect.
The unique display of cloned objects is a result of investigating the MASH nodes section within Maya thoroughly.
In my thought process, I drew on my knowledge of sound design within audio software: there are different tones and notes in the music, and software has been able to identify these for ages. So I thought, what if animation software could interact with that information? Aside from just lining up visual transitions manually in the video editor with the beats we hear, I wanted a more accurate way of making my shapes respond to my sound. It turns out MASH nodes have been part of Maya for quite a long time, and there are already ways to manipulate attributes and link them to specific audio files. Using further knowledge of how an audio wave can be broken down into highs, mids and lows, and how music has multiple track layers when being produced, I can use the separation of these aspects to help drive the flow of the animation with more accurate movements, creating a more immersive and coherent animated music piece that precisely ties the audio to the visuals to form a more biomorphic, organic, pulsating and responsive experience.
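The core idea, audio amplitude driving an animation attribute, can be sketched outside Maya. This is a minimal illustration, not Maya's MASH Audio node: it collapses raw samples into one loudness value per video frame and maps that onto a scale attribute. The "kick" stem here is synthetic, illustrative data:

```python
import math

def rms_envelope(samples, samples_per_frame):
    """Collapse raw audio samples into one RMS loudness value per video frame."""
    env = []
    for start in range(0, len(samples), samples_per_frame):
        chunk = samples[start:start + samples_per_frame]
        env.append(math.sqrt(sum(s * s for s in chunk) / len(chunk)))
    return env

# Hypothetical kick stem: a 60 Hz sine that is loud on every 4th video frame.
sr, fps = 48000, 60
spf = sr // fps  # audio samples per video frame
kick = [math.sin(2 * math.pi * 60 * t / sr) * (1.0 if (t // spf) % 4 == 0 else 0.1)
        for t in range(spf * 8)]

env = rms_envelope(kick, spf)

# Map each frame's loudness onto a scale keyframe value (1.0 at silence,
# up to 1.5 at the loudest frame) for a MASH-style driven attribute.
peak = max(env)
scale_keys = [1.0 + 0.5 * (a / peak) for a in env]
```

Doing this per stem (kick, bass, vocal) rather than on the full mix is what gives the separation of highs, mids and lows described above: each stem's envelope can drive a different attribute.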
Wednesday 28th February
This week I made my presentation:
4th March (Monday review)
This week I gave my presentation; it was on a Monday this time, and I presented it to Sanjay for my formative review. He gave some feedback about the colour choices in my examples compared to my reference images: there was more variety of colour in my references, because I had yet to incorporate the full spectrum of colours into my examples and was still in progress making them. Unfortunately, I couldn't show any updates on my scene, as I was waiting for the sound students to send me the individual layers of the song, so the examples I gave were very rough and weren't the latest tests. Still, I felt I demonstrated some complex and abstract designs that complement the audio strategically, linking not only animations but also colours and shapes to the music.
Wednesday 6th March
Today I continued work on my music project, trying to branch out beyond the simple music-visualizer designs, settings and tunnels that often make the viewer feel like they're travelling through a vortex of sound; I wanted to construct and experiment with some different layouts and compositions.
I had a thought that maybe I could add some stylish pop art lips that move in sync with the music, and maybe they would come in with a big impact when the vocal part starts, possibly having the camera travelling through the mouths in sync with the beats as they open.
After lots of experimenting, I came up with something much greater;
Utilizing a drawing of a pair of lips I did last year, shown here on the right, I planned to warp them into actual 3D lips within Maya, and then drive them with blend shapes in sync with the lyrics.
I decided to delve not only into the Blend Shapes section in Maya, but also the MASH section.
Now, these are pretty complex mechanisms for beginners, but I found myself not only learning fast but thoroughly enjoying creating a different blend shape for each vowel sound the lips make; this felt like stringing up a puppet with the right strings to compose the exact movement you want. After creating the lip blend shapes, I spent time meticulously placing and bending keyframes within the graph editor to make the lips move and position themselves exactly as the vocals start. This took quite some time, but ended up with a very fluid, slightly uncanny-valley-looking pair of lips, which I felt really adds immersion for anyone listening and makes it much more interesting.
So after travelling through portal-like vortexes, I'm going to make the animation cut or transition to the lips composition about halfway through, when the vocal segment starts.
Here I am creating the Blend Shapes for the lips shown below in Maya:
https://www.reallusion.com/iclone/help/3dxchange5/pipeline/04_Modify_Page/Face_Setup_Section/Setting_Lips_Shape_Data.htm
I found it very useful to first map out which vowels the vocals are actually singing in the music, then use the FACS facial blend shapes map list given by Reallusion to identify the right lip shape to fit with the right vocals.
The vocals in the track are singing: "is she gonna be my love"
Here's an example of how I break down the words for the blend shapes:
"is" - For this sound, I need a blend shape for the "ih" (as in "bit") phoneme and a slight "z" sound. Shown on the right
Wednesday 13th March
Today's lesson was just about carrying on with our music animation project.
During this lesson I spoke to my tutors about ways to speed up my rendering times in Maya with my MASH node and nParticle setups. My tutor Alan suggested using "baking" to set my node structures in stone, so the computer performs fewer calculations: everything is pre-computed, leaving just a map for the particles to follow, which helps with render times. One downside I encountered is the inability to reverse my steps: it is a destructive workflow past that point and would not let me edit the original attributes anymore, so this method is only useful once I have decided exactly what I want. In an assignment that is about experimenting, I didn't want to cut off my options, and therefore my paths of creativity, in areas such as repositioning the particles to make a better shape, so I held off on baking until right at the end of the project.
Quick update on how my scene is looking. Very bland in viewport mode. But very flashy and reflective when rendered.
This actually helps me organise things, because I'm looking at a very simplistic view without reflections and shadows. But it does mean that whenever I make a change, I can't really see how the final image will look until I switch back to rendered mode.
After this point, pretty much all my adjusting will need to be done with my render view open, as the work is moving towards the finer details now, such as reflections and accurate lighting, which can only be seen in rendered mode.
Another great suggestion came from my tutor Dan, who said I could render different parts or layers of the animation separately and recomposite them in After Effects. Thinking about how I would combine aspects in After Effects led me to a more efficient rendering and editing process: instead of rendering the whole animation as one big scene, I cut it into three scenes. In the first scene I keep only the objects visible at the start of the animation and delete the rest, so there is less to load; the first part only renders the first visible elements anyway, and you can't see the end parts from the start. I did the same for the other two thirds, removing the unnecessary elements from each, to get a much faster rendering time by working in batches.
Here is an image of me selecting from a range of images I rendered and naming them, to give myself a range of different artistic and visual directions to choose from. You can see I am changing settings within my animation, such as the noise levels, the light-bounce levels, and the shapes of the objects, as shown by the named images at the bottom. I went through different attribute sliders via trial and error, experimenting with how the outcomes would look in the final render. Finding a good balance of efficient render time and image quality is a very time-consuming process in itself, and has probably taken the majority of my time on this project alongside the actual rendering.
Wednesday 20th March
This week, I wanted to expand on my animation by adding smoke to the left side of the screen.
The right side seemed pretty full with the animated red petal-like particles, but the left seemed quite bland, so I thought I would try to get a fluid simulation working. In this case, the fluid container needed to hold a smoke-like gas I created to represent the cold, or the breath coming out of the lips as she talks. I wanted this breath to be cohesive with the rest of the animation, so I had to time and link up the Acceleration Emission Rate, using the Connection Editor in Maya, to the MASH Audio node's volume output; this node has the WAV file of only the vocal track, so the smoke puffs out of the mouth in sync with my hand-animated lips, giving a much more immersive and "emissive" piece overall. I also connected the opacity of the smoke to this volume attribute using the Node Editor, which means that when the lips are not talking, the smoke is practically invisible and fades away.
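The volume-to-opacity link described above behaves like an envelope follower: opacity rises quickly when the voice starts and fades out smoothly when it stops, rather than cutting off instantly. Here is a minimal standalone sketch of that behaviour (the attack/release values are illustrative, not taken from Maya's nodes):

```python
def opacity_from_volume(volumes, attack=0.6, release=0.15):
    """Follow a per-frame vocal volume envelope.

    Rises fast (attack) when the volume jumps, decays slowly (release)
    when it drops, so the smoke fades away instead of vanishing."""
    out, level = [], 0.0
    for v in volumes:
        rate = attack if v > level else release
        level += (v - level) * rate
        out.append(level)
    return out

# Silence -> singing -> silence: opacity ramps up with the vocal,
# then decays gradually after it stops.
vols = [0.0] * 5 + [1.0] * 5 + [0.0] * 5
op = opacity_from_volume(vols)
```

In the actual scene this per-frame value is what the volume output drives on the smoke shader's opacity attribute.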
Below are a few different style options I test-rendered, which I could go with or choose aspects from. I really like the infinite-space look of the background in the 4th image, but I quite like the reflections on the tunnels in the second. In the end, I decided on a mix of these experiments: one design with the background of the 4th and the reflections of the 2nd. I feel this gave the emptiness of space slightly more clutter and filled up the screen space with abstract, textured reflections quite nicely.
I had issues when getting the smoke to render in Arnold; when using GPU render mode, the smoke wouldn't show.
I spoke to my tutor, Alan, about the default fluid container box. He said it doesn't really work properly with GPU render and is bugged, which led me to worry that I wouldn't be able to render the smoke with my project in time, as CPU render mode takes an extremely long time per frame, about 30 times longer.
Below, I have done a test render to see what it looks like before adding the smoke or doing anything specific in After Effects.
Unfortunately, I didn't manage to get the smoke I created to be displayed in the final render, as Maya's built-in smoke plugin is seemingly pretty buggy and crashed a few times.
This single frame on the right, with smoke, has every render setting set to 1, so there are no reflections or bounces, and yet, running in CPU mode, the only mode Maya's built-in smoke feature works with, it took over 17 minutes to render, as you can see by the timer in the bottom left of the image.
When I set the render settings back to what I needed for the desired reflections shown in the video above, this one frame took 51 minutes to render.
Not wanting to gamble on whether this would work in time, I decided to do some editing in After Effects, such as flashes and pumps (small zoom-ins), to give it more immersion.
Here I have done my pumps and flashes in After Effects: the pumps are the small red spikes where the kick drum hits, and the fading lights are the blue lines that go up and down. I feel this has added to the atmospheric ambience and the realistic lighting, glares and effects that a real illuminating, pumping disco ball would have, alongside the vibrations emitted with the kicks, which you would definitely feel and which would move you, as if a universe was actually emitting a bass-filled song.
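The "pump" effect can be described as a per-frame zoom curve: the scale jumps on each kick and decays back to normal over a few frames. This is an illustrative sketch of that curve, not After Effects' expression language; the frame numbers and amounts are made-up example values:

```python
def pump_scale(total_frames, kick_frames, amount=0.05, decay_frames=6):
    """Per-frame zoom factor: jump to (1 + amount) on each kick,
    then decay linearly back to 1.0 over decay_frames."""
    scale = [1.0] * total_frames
    for k in kick_frames:
        for i in range(decay_frames):
            f = k + i
            if f < total_frames:
                bump = amount * (1 - i / decay_frames)
                # max() lets overlapping kicks keep the stronger bump.
                scale[f] = max(scale[f], 1.0 + bump)
    return scale

# Two kicks, one second apart at 24fps: zoom spikes at frames 0 and 12.
curve = pump_scale(24, [0, 12])
```

Applying such a curve to the comp's scale keyframes gives the small rhythmic zoom-ins without re-rendering anything in Maya.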
My finished project is on my Technical & Evaluation tab.