
Tuesday, 5 July 2011

Growth

Over the last two weeks, I have been developing the concept for the final outcome of my Masters project. This will take the form of a short video (approximately 3-4 minutes in length) and is intended to showcase some of the cell visualisation work I have completed. In addition to this, I plan on creating a second video which will be a technical showcase of my skills and abilities and also contextualise the work I have completed.

Although I don't want to give away too much, the main piece, titled Growth, will be split into two halves. My title concept can be seen below;

growth_titles

As a sneak preview, the video below shows one of the shots I am currently working on in After Effects. Using the VERL render farm, I have already generated most of the content I plan on using (in 1920x1080 lossless TIFF format) which gives me a great deal of flexibility. The original shot did not feature any depth of field and looked flat and uninteresting, whereas this updated version has much more style and visual interest;


So far, I haven't decided if I will include the dust motes/particles idea which I previously worked on. I don't have any sequence renders which I can use, so as a test I took a still frame and applied the effect. This frame test can be seen below;

growth_04_cameraMoves_1000

Moving forwards, I will continue developing the look and feel of these shots. I am also currently looking at using music to add impact to these shots, but this is an ongoing project and I plan on trying to finish work on the visuals first, so that I can get a good 'feel' for the experience I want to create.

Sunday, 3 July 2011

Stereoscopic Update

Following on from my early tests of creating stereoscopic content out of Maya (here), I have completed rendering and compositing of my full cell visualisation sequence.


The process of working with a sequence instead of a single frame was not hugely different, and I feel confident that I could apply an anaglyph effect to a scene in the future. As this is anaglyph, I have been working in grayscale so far, but moving into colour is something I would like to achieve.

I am planning on meeting with my project supervisor and programme leader, to discuss the possibility of showing stereoscopic material during the Masters show in August. I am not sure if this will be a viable solution, but it is worth looking into...

Monday, 27 June 2011

RealFlow Character Fill

Over the last couple of weeks I have been working on a shot for one of the other students on my course. Kaye is working on applying an illustration style to an animated commercial, and her blog can be found here.

After creating her animatic, Kaye's opening shot showed a 3D character filling up with liquid, and she had decided to use RealFlow's liquid simulation tools to achieve this. I already had previous experience of working with RealFlow and was able to help her in building this sequence.

To begin with, I imported her character model (built using Cinema 4D) and created emitters in each of the feet. My target was to fill the model in approximately 180 frames, as Kaye had advised. I started by filling the model using these two emitters, adjusting the emission speeds to suit the fill rate I wanted. Around halfway up the model, things became slightly more difficult: I had added two additional emitters (one in each hand) and then needed to balance the levels of the fluid, which involved manually keyframing each emitter and re-simulating the liquid several times, a time-consuming process. Upon reaching the head, a fifth emitter was added to speed up filling the volume (as the narrow neck created a fountain-like bottleneck). Finally, a polygonal mesh was added, and the radius adjusted so that the mesh was not larger than the original model.

At this point, I passed the source files back to Kaye to implement in her own project. As an additional step I imported the RealFlow files into Maya and created a simple render to show the outcome of this short project;

Monday, 20 June 2011

Show Your Working (2)

Following on from my first 'Show Your Working' post (here), I was able to use the University facilities to render my high-definition Maya sequences, which could be composited to create a final concept.

After working primarily with Photoshop to create images which developed the style I wanted to create, I built a composition using After Effects which would further the same style, although being applied to an image sequence rather than a single frame.

I also took this opportunity to add a time counter to my sequence. This posed its own problems, as I did not want a timer that would count real-time frames or seconds/minutes. I wanted a counter which would refer to the appropriate timestep of the cell development process being shown (in this example, 1 frame was equivalent to 5 minutes of simulation, so 1 second was the same as 2 hours and 5 minutes).

After reading various tutorials online, I found an expression which could be used in After Effects to achieve the outcome I wanted. I modified the expression to suit my own composition and applied it to a new text layer. There was some trial and error involved, as the expression's time units did not obviously correspond to our time system, so I increased the timesteps gradually until the counter responded as intended. I also added a second, static text layer explaining the units being shown on screen (for clarity).
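The actual counter lives in an After Effects expression, but the underlying frame-to-simulated-time conversion can be sketched in Python. The label format, the 25 fps assumption and the function name here are mine, for illustration only;

```python
def sim_time_label(frame, minutes_per_frame=5):
    """Convert a composition frame number into a simulated-time label.

    Assumes 1 rendered frame = 5 minutes of cell development,
    so at 25 fps, 1 second of video = 2 hours 5 minutes simulated.
    """
    total_minutes = frame * minutes_per_frame
    days, rem = divmod(total_minutes, 24 * 60)   # whole simulated days
    hours, minutes = divmod(rem, 60)             # remaining hours/minutes
    return f"{days:02d}d {hours:02d}h {minutes:02d}m"

# 25 frames (1 second of video at 25 fps) -> "00d 02h 05m"
```

The same arithmetic, rewritten against After Effects' `time` variable, is what the text layer's expression evaluates every frame.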

The completed concept has now been built in both Photoshop and After Effects, and can be adapted to suit different render passes, or a longer scene, as necessary.

Saturday, 18 June 2011

Going Stereoscopic

Throughout this week I have been working through some Digital Tutors content, titled "Stereoscopic 3D in Maya" (more information here).

This course has been designed around generating material which can be used to create 3D images and videos (in my case, using the anaglyph method, which is viewable using red/cyan glasses). There was a lot of instruction regarding "safe" 3D which follows an accepted set of rules, and is designed to ensure that the output will not be uncomfortable to view.

Fortunately Maya has had time to develop its stereoscopic toolset before I started using it, making life significantly easier. Some helpful features included the ability to adjust interaxial separation and the zero parallax value (which can be visualised using a coloured plane relative to the stereoscopic camera), as well as showing the 'safe' area for objects to be placed within. There is also a preview mode which allows you to view/playblast anaglyph material before you commit to rendering. The rendering process is also relatively straightforward (and doesn't differ much from normal rendering), as Maya can batch render multiple cameras (centre, left and right).

The Digital Tutors content also gave a good overview of how to combine the left and right images, and colour them appropriately using both Photoshop and After Effects - ensuring that I could apply these techniques to my own work.
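The red/cyan combine step itself is simple channel surgery; a minimal Python/NumPy sketch of the idea is below (the actual work was done in Photoshop and After Effects, so the arrays here just stand in for rendered frames);

```python
import numpy as np

def combine_anaglyph(left, right):
    """Combine left/right eye renders into a red/cyan anaglyph.

    left, right: (H, W, 3) RGB arrays of the same shape and dtype.
    The left eye supplies the red channel; the right eye supplies
    green and blue (the cyan half seen through the other lens).
    """
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]    # red from the left eye
    out[..., 1] = right[..., 1]   # green from the right eye
    out[..., 2] = right[..., 2]   # blue from the right eye
    return out
```

Viewed through red/cyan glasses, each eye then sees (approximately) only its own render, which is what produces the depth effect.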

After completing the Digital Tutors course, I wanted to experiment with implementing these stereoscopic techniques into my workflow, so I started off with a basic test - a static scene with 5 cubes, randomly rotated and placed at different depths from the camera. The left and right eye renders were composited in Photoshop and can be seen below (don't forget your 3D glasses!);

cellVis_anaglyphImageTest

The next step was to test an animated sequence in After Effects. I created a new scene, with a cube rotating on multiple axes, a sphere moving forwards and backwards (along the Z-axis) and a pyramid rotating on the Y-axis. I chose these shapes and types of movements as it would allow me to see how well each of the different types of motion would work when finished. The completed video can be seen below;


I then chose to add a stereoscopic camera to one of my existing cell visualisation scenes. Unfortunately, when I first rendered this, I realised the cell material was almost black and therefore lost most of its colour (and with it, most of the sense of depth). I modified the shader to use 50% gray, which worked significantly better. The following two images show a still frame taken from my cell visualisation project (the first image is stereoscopic, the second is 'flat' for comparison);

cellVis_mayaAnaglyphTest

cellVis_mayaAnaglyphTest_flat

So far, I have found creating anaglyph images mostly straightforward (thanks to Maya's built-in tools, which make it much easier). I have learned a huge amount about the different types of stereoscopic 3D, and the rules that should be adhered to. Moving forwards I would definitely like to apply these techniques to an animated version of the cells growing, although this will require significantly more render time to test... fortunately the render farm is working and I can take advantage of this again!

Wednesday, 8 June 2011

Creating Dust Effects

Continuing on from my recent work involving compositing in After Effects (here), I decided to explore the different methods of adding dust and/or bokeh effects - either in 3D or 2D based software. My goal was to find the most efficient method of adding the effects that I wanted, without having to learn a large amount of technical skills - allowing me to maintain a focus on the process of visual experimentation.


Initially, I had started by creating dust motes using a particle system in Maya (above). This gives the most flexible result, but is time-consuming. It does factor in object depth however, creating a 3D particle system which objects can be placed inside of.

As a different approach, I tried using the Particular plug-in for After Effects (first used here). This is much simpler to use, and I created a scene with dusty clouds (which could also be used as a simple bokeh effect in the background). This created an entirely different result to the Maya particles, as it is not truly 3D and cannot 'surround' my cell structures, but it is much easier to get a nice-looking result quickly.

The result of my Particular testing can be seen below. It is important to note that I was only testing motion and style, rather than colours, so all of these videos are purposely greyscale;


Finally, out of interest, I used After Effects to composite the two 'passes' together, and decided that although each approach has its own advantages and disadvantages, they work well together because they do different things. The result of this can be seen below;


Although these are relatively short and simple tests, each method can be easily adapted and scaled to fit a larger composition, or longer timeline - something which will be important in the next stages of my projects.

Monday, 6 June 2011

Getting Particular

Between semesters, I decided to briefly sidestep my main programme of study, and play around with After Effects, using Video Co-Pilot as inspiration.

My aim was to take a break from all the technical development I had been working on - avoid scripting - and get back to visual development, so that I could continue down this route when returning to my programme of study at the start of semester 3.

Using an After Effects plug-in, called Particular (by Trapcode - more information here) I set out to create interesting particle-based sequences. To help in this process, I followed through a couple of video tutorials, and then adapted and experimented further to get something really nice looking.

At the moment, I am not sure if these techniques are something I will use moving forwards, but I am still exploring different options, especially when considering how I will present my work at the Masters Show later this year!

Below are two videos which show some of the experimentation that took place;


Sunday, 8 May 2011

Compositing Goodness...

Following on from my most recent post (here), I continued to develop the scene I had been working on as a 2D image in Photoshop (using 3D renders from Maya).

I then added additional layers/objects to the Maya scene file and organised the appropriate render layers. Image sequences were rendered, and imported into After Effects, where a 10-second sequence was constructed - using the same 'style' as the prototype image.

The completed test sequence can be seen below;


Moving forwards, I would like to continue this visual development, perhaps with the addition of camera movement and depth-of-field techniques.

Thursday, 5 May 2011

Having Fun!

In terms of technical development, my knowledge of 3D software, scripting skills and problem solving abilities have surpassed those that I need to be able to complete the projects I am currently working on.

This has given me the time and opportunity to focus on visual experimentation, bringing a bit more fun back into my work, and making it more interesting than searching through pages of MEL commands!

I have been experimenting with using the skills gained in the Going Live module, to enhance the output and presentation of my previous cell visualisation work - using my skills as a digital artist.

Starting with a previous data-set, I adapted one of my scripts to create locators instead of spheres. I then created a simple particle system and used a modified version of a script provided by the external examiner to 'attach' the particles to the locators. This meant that I could use Maya's own 'metaball' system - not strictly metaballs, as it is a particle render type called "Blobby Surfaces", but it gives a similar effect. The image below shows a beauty render of the blobby surfaces;

cellVis_data1_cells_locators_original

Once this model had been created, I started to experiment with shaders. After reading some articles in this month's 3D Artist and 3D World magazines, I created an MIA mental ray shader, added a mental ray fast skin shader (normally used for subsurface scattering) and adjusted the colours and attributes to create a suitable look.

I added lighting in the form of two area lights, which used the mental ray area light options to transform from squares into cylinders, 'wrapping' around my geometry. Decay was set to quadratic (to create more accurate lighting) and the intensity of the lights was increased significantly (around 4500 each).

The next stage was to incorporate dust motes floating around. This is something I could picture in my head, but was not sure how to implement. I looked at adding it in post-production, but although that could be quicker, it did not provide enough control (or use all three dimensions). Instead, I created a new scene file and used a particle emitter to create a particle 'explosion'; the forces were then zeroed out, so that I had a static particle cloud. I added my own gravity and turbulence fields, and tweaked these until I had the movement I liked.

Finally, I set up render layers to output the passes I wanted: a MIA shader pass, a second MIA with an outline-style shader, and a separate pass for the dust motes. After rendering a single frame, I moved into Photoshop and started experimenting with compositing these passes together to create the look I wanted. I also added some fake bokeh effects in the background, coupled with some randomly generated cloud textures. The final image can be seen below, looking entirely different to how it first started (above);

cellVis_data1_cells_locators_comp

At this stage, I wanted to make sure that I could recreate this look with image sequences, so I started work in After Effects. Fortunately I was able to mirror this image in video form, and can swap in the rendered image sequences when finished. By working in AE, I realised that I would need to add a matte pass for the cell geometry. Below, a short video shows the breakdown of how this shot was constructed, and although static, shows how a final video could look;


I have thoroughly enjoyed this experimentation, and I have created something I am really happy with - something very different to the first attempts (which can be seen in an earlier post here). Although I don't yet see this as a finished piece, I can already see ideas developing, and it is good to try new techniques and methods of presenting the same mathematical data... more importantly it is good to get back to being an artist, something that I did not realise I missed until now!


Sunday, 17 April 2011

Visible Progress!

Over the last couple of weeks, my role in the Going Live project increased significantly, and then stopped completely. All of the animated shots had lighting added, and then I added the render layers/passes and started feeding completed shots through the render farm (which was considerably faster than I expected it to be). Sound effects and music were then added by the sound team, creating our finished advert.

It took a long time to get there, and there were problems along the way, but I learned a lot (particularly about rendering and compositing) and I am glad we all got there in the end! Next week, we are due to meet with the company in their London studio and present our finished project - hopefully the feedback will be good!

As for my cell visualisation work, this has been making good progress since my role in Going Live has lessened.

The first data-set I was working with, which represented cells and fibres in 2D space, now has fully working scripts, which are streamlined to work efficiently (or to actually work at all!). I am currently awaiting feedback on the video outcome of this work, so that I can decide where to take this next.

The other data-sets (involving cells, blood vessels, and oxygen density maps) have made even better progress. Again, after optimising my MEL scripts, the amount of data (several million lines of information) has become manageable, although time-consuming to process. I am currently part of the way through 'translating' this data into Maya's 3D environment.
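The 'translation' step is essentially streaming a very large text file into positions and attributes that a MEL or Maya-Python script can turn into geometry. A hedged sketch of that reader is below; the field layout (x y z type, whitespace-separated) is a hypothetical stand-in, as the real files use whatever layout the mathematicians exported;

```python
def parse_cell_lines(lines):
    """Stream cell data into (x, y, z, cell_type) tuples.

    Hypothetical layout: whitespace-separated 'x y z type' per line.
    Yielding one record at a time keeps memory use flat even for
    files containing several million lines.
    """
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blank lines and comments
        x, y, z, cell_type = line.split()
        yield float(x), float(y), float(z), int(cell_type)
```

In practice each yielded record would drive a `sphere`/`setKeyframe` call (or its Python equivalent) inside Maya, which is where most of the processing time actually goes.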

An example render from the cells file can be seen below. This example frame is approximately two thirds of the way through the cell data, and incorporates some 'noise' on the cell surfaces to break up the uniformity (an idea suggested by the mathematician who provided the data);

cellVis_g_testPasses

As for the oxygen density, I decided to continue using a single polygonal plane for this, with grid points in the data having a matching vertex on the 3D geometry. The data then lifts/lowers each grid point/vertex between 0 and 1, where 1 is the most dense area of the oxygen 'clouds'.

The 'look' of these clouds is then controlled using one of two shaders.

Shader 1 ("Clouds") is coloured white, and uses a vertically-aligned ramp shader for its transparency value, where 0 is fully transparent and 1 is fully visible. This means that as points on the vertex grid are moved in the Y-axis, their transparency also changes (as they are moved higher, they become more visible).

Shader 2 ("Bands") expands upon this idea, and uses a second ramp for the colour (from blue to red, low to high). The transparency ramp is also 'sliced' into bands which are evenly spaced vertically - this means that only the narrow bands are visible, giving us slices of colour (where the colour is defined by where the slice falls on the colour ramp, rather than a fixed colour). This gives a result similar to the high/low pressure bands which weather presenters often use, but with colour added.
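The logic of the two ramps can be sketched as simple height-to-opacity functions. These are Python approximations of what the Maya ramp shaders do, and the band count and band width values are illustrative, not the actual shader settings;

```python
def cloud_alpha(height):
    """Shader 1 ('Clouds'): opacity tracks the 0-1 oxygen density
    directly, so higher (denser) grid points are more visible."""
    return max(0.0, min(1.0, height))

def band_alpha(height, bands=10, band_width=0.2):
    """Shader 2 ('Bands'): only narrow, evenly spaced slices of the
    0-1 height range are visible, like contour lines on a weather
    map. band_width is the visible fraction of each slice."""
    position = (height * bands) % 1.0  # position within the current slice
    return 1.0 if position < band_width else 0.0
```

In the real shader the band's colour is then looked up from the blue-to-red ramp at the same height, which is what gives each slice its own colour.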

I have included a video below, which better explains these shaders - the white 'cloud' is shader 1, and the coloured 'bands' are shader 2;


Although this video shows a top-down view of the scene, it is important to remember that these effects are generated in 3D - moving forwards, I could include moving camera or changing points of view to highlight particular events.

Also, the oxygen density visuals are considered another 'layer' which I can add to the cells and blood vessels, creating a more complete final output.

I am not sure as to how this final output will look at the moment, as I am still developing the visual elements of each of the data-sets, but progress is good and things are at least working now...

Tuesday, 22 February 2011

Going To The Cinema

This morning, I stumbled across a fantastic website, called Molecular Movies. It describes itself as a 'portal to cell and molecular animation' and aims to provide scientists with tutorials on developing 3D visualisation skills.

Alongside tutorials, the website has a showcase section, with examples of 3D being used to visualise complex biological scenes - although these are interesting to watch, the scientific content is a bit beyond my own knowledge. Fortunately (and more importantly), it is useful to have a 'database' of the kind of work that is going on in my field.

Returning to the learning resources, I realised I had already covered the majority of the content, as it was designed for those new to using 3D. That is, until I found an interesting article on using metaballs to create molecular surfaces in Cinema 4D. Although I was not familiar with C4D, I decided it was worth looking at, as it was going to be less technically challenging than learning Houdini (which is also reliant on Python), and should give a similar result.

Creating metaballs in C4D was fairly straightforward, and gave great results with very little input. It took a bit of time to figure out how to animate objects, and then render a scene, but after using the software, I would definitely be confident using it again. The geometry was also significantly 'cheaper' than my testing in RealFlow, and could be used on a large scale. A video example of metaballs in action can be seen below;


The next step was to take my cell data-set and use metaballs to create a single organic structure - this proved substantially more difficult. Realising that I couldn't simply 'read' my data, as C4D also relies on Python, I opted to export both OBJ and FBX files from Maya, hoping that at least one would work.

These imported easily, and a metaball surface could be applied, but it did not work properly (simply creating one large sphere). After vast amounts of experimentation, I scaled the imported objects... and success! A simple fix for what seemed like a complicated problem. This breakthrough meant that I could now take objects into Cinema 4D, use metaballs to create a surface, and either render there, or export to Maya for rendering (as I am more familiar with that software package).

The only difficulty now is that this workflow only works on single frames, and can't be applied to my animated data-set... this is the next problem to solve! A render of the results so far can be seen below (showing a before/after comparison);

cellVis_v106_meshTest_c4d

After my previous post, I felt like I had reached a brick wall, but after discovering the Molecular Movies website, it has helped me hurdle these difficulties... only to find a new hurdle waiting on the other side!

Monday, 21 February 2011

Reaching Limits...

This week has proven difficult in terms of my ideas being restricted in their implementation, primarily by the software/technology I have available.

Scripting has made a reasonable amount of progress. I have added colour changes to my first data-set, as it was not clear when cells 'appeared'. New cells begin red, and fade to their regular colour over 25 frames (1 second), which is more informative and better communicates the data visually. It is important to point out that this change was added to the MEL version of my script. A render taken from the Maya scene can be seen below;
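The fade itself is just a linear blend between red and the cell's regular colour over its first 25 frames. A Python sketch of the idea is below; the real work is done by keyframes set from my MEL script, and the grey base colour here is an illustrative stand-in for the actual cell shade;

```python
def cell_colour(age_frames, base_colour=(0.8, 0.8, 0.8), fade_frames=25):
    """Blend a new cell's colour from red to its regular colour.

    age_frames: frames since the cell appeared.
    fade_frames: 25 frames = 1 second at 25 fps.
    base_colour: illustrative RGB stand-in for the real cell shade.
    """
    t = min(age_frames / fade_frames, 1.0)  # 0 = just appeared, 1 = fully faded
    red = (1.0, 0.0, 0.0)
    return tuple(r + (b - r) * t for r, b in zip(red, base_colour))
```

In Maya this corresponds to keyframing the shader's colour attribute at frame 0 (red) and frame 25 (regular colour) for each newly created cell.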

cellVis_v106_colourTest

I started developing my scripting skills in Python, as Houdini relies on it for data input; Python is also supported by Maya, making it a shared 'language' between the two packages. I began by finding the common links between MEL and Python, which accelerated my understanding of how to code effectively in Python and make things happen. The Maya-Python version of the script is now equivalent to the MEL script.

However, the Houdini-Python script currently only reads data, but does not yet generate or animate geometry - I am having difficulties finding useful information on the Houdini specific Python module (commands), so this part of scripting is on hold for now.

Due to these difficulties, I returned to Maya, and started work on trying to create an organic-looking cell surface - that is, a single surface which is created from all of the cells, and moves and acts as one (instead of 1067 individual cell 'spheres'). Initially, I wanted to generate a particle field in place of spheres, but, unfortunately, Maya does not allow transformation of individual particles within a particle object. There is no way around this, except by creating a separate particle object per cell - by doing this however, the particles can no longer merge together as one.

I started to consider other options to achieve this organic look, with RealFlow being top of my list. I generated a 'particle sphere' in Maya and exported this directly into RealFlow. After developing a complicated workflow of imports and exports, and a lot of experimentation, I was able to use RealFlow to mesh this particle field. However, for one cell to be meshed fully, I required around 37,000 particles, generating approximately 110,000 faces for the mesh. With a scene containing 1067 cells, I realised that although this gave a nice result, it was certainly not a viable option. A render of the style of cell-split can be seen below;


Beyond using RealFlow, my next efforts will involve using particles or metaballs in Houdini. Metaballs work very nicely (tested using 2-3 objects manually), and are 'clever' spheres which merge together when they are close to each other (without any configuration in Houdini, metaballs give a great looking result). Particles in Houdini will hopefully offer more flexibility than Maya, allowing me another technique to create the look that I want to achieve.

Although I have found this week frustrating, finding limitations in the software I am using, I am confident that I will be able to find a solution, allowing me to realise the full potential of the creative ideas I want to unleash...

Monday, 14 February 2011

Visual Experimentation

After figuring out MEL scripting, and using it to create an entire Maya scene for me, I chose to begin experimenting with compositing further.

This involved setting up render layers in Maya and testing how they work - creating different layers for specular highlights, shadows, depth passes, etc. After a bit of tweaking, I am confident that I will use this moving forwards.

These render layers were then taken directly into Nuke as image sequences, where I practised reading and writing files, merging nodes, and also colour correction and grading - all of which will be relevant to my own work, and a crucial stage in presenting work to a professional level.

Simulating depth of field formed an important part of my testing, as this would be used alongside my cell visualisation work - giving the impression of viewing tiny microscopic objects. A short render of a composited file in Nuke can be seen below;


After ensuring I could work with render layers and depth of field, I started to look at possible textures to use with some of my cell visualisation work. Examples of these can be seen in the image below;

cellvis_v106_texturetest1

Although there is still a lot of work to do, my visual style is developing, and I am hoping to continue this experimentation, with a focus on adding colour. More importantly however, is how I use these attributes to present my work to a specific audience - this is something I will have to discuss with the mathematicians, as I will need to define their audience before I can design for it.

In addition to continuing my cell visualisation work, I have carried on developing my skills in using RealFlow. Below is an example of the type of work I have been undertaking;


I am still planning on learning how to use Houdini - primarily because my scripting has come to a halt in Maya. Within Maya, individual particles (as part of a particle field) cannot be manipulated as they have no Transform node. This means that I cannot use my MEL script to automatically generate and keyframe a particle field. Although I do not know much about Houdini, it seems more technically advanced, and I hope that it will help me take my script to the next level.

Saturday, 27 November 2010

Visualisation Techniques : Cells (Continued)

Continuing my experimentation with how my cells could look (first post here), here are three more examples which all use a spherical soft-body as a starting point. These examples also show a change in colour - something relevant to the mathematical data I am working with.


The first example combines previous render techniques, and uses a Cloud shader applied to the particles. Unfortunately this gave the cell a glowing appearance, with no distinct shape or outline.


The second example instead uses particles which are invisible, using a Blinn shader applied to the soft-body surface directly (the particles are used solely to drive the animation of the cell). A 2D fractal was used as a bump-map, ensuring that the surface was not too smooth and plain.


The third and final example builds on the second, using the same Blinn shader, also applied to the surface. The difference is that the surface material is created using a Layered Shader, which uses the original Blinn (made almost transparent) and a second copy which uses a Ramp Shader to adjust the transparency based on the object's facing ratio (making the shader less transparent towards the object edges). Although there are two shaders layered here, it gives a more interesting look - a transparent looking cell with a clearly defined outline.
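The facing-ratio trick in that layered shader reduces to a simple function of how directly a surface point faces the camera. The sketch below is a Python approximation of what Maya's ramp is doing, and the specific alpha values are illustrative, not the original shader settings;

```python
def facing_ratio_alpha(normal, view=(0.0, 0.0, 1.0),
                       edge_alpha=0.9, centre_alpha=0.1):
    """Opacity driven by facing ratio, as in a ramp mapped to it.

    Surfaces facing the camera stay nearly transparent, while
    glancing edges become more opaque - giving a see-through cell
    with a clearly defined outline.
    normal, view: unit vectors; alphas are illustrative values.
    """
    # facing = 1 where the normal points at the camera, 0 at the silhouette
    facing = abs(sum(n * v for n, v in zip(normal, view)))
    return centre_alpha + (edge_alpha - centre_alpha) * (1.0 - facing)
```

Layering this over an almost-transparent copy of the same Blinn is what produces the glassy body with a solid rim.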

Monday, 22 November 2010

Playing With Fire

Adding more special effects to my arsenal, Studio Projects Dynamics moves onto creating and controlling fire (using Maya fluids). Using fire is something which will be particularly relevant to a side-project I am currently involved in, and one of the tutorials involves burning down a 3D model of a house, which hopefully will be useful.

Starting off slowly, the first example introduces using fuel and heat within a fluid container. As this is not particularly exciting by itself, there is not much to show, so below we can see the second example - a simple flame created entirely in CG;


After working with a simple flame, a preset was created and then imported into a new scene featuring a simple house model (provided on the tutorial disc). The flames were scaled up and tweaked to suit the new size. The house structure was already broken into segments, so these were converted to nCloth components, ready for destruction. Randomness was applied to the building 'burning down' by using keyframed ramp shaders with noise applied. The finished render can be seen below;


This chapter covered one of the most important topics so far (and one very relevant to the side-project), allowing me to transfer the skills learned during this tutorial and apply them in a more creative manner.

Sunday, 21 November 2010

Inspiration 1 : Cell Visualisation

In collaboration with the University mathematics division, I am working on cell visualisation - starting off with mathematically generated data, I am importing this into Maya and defining the aesthetics of the scene, making the data more accessible and visually exciting.

Alongside my own work, I have found some examples of cell visualisation that I am particularly interested in. The first of these is a clip called "The Inner Life of the Cell", created in 2006 for Harvard biology students by a company called BioVisions. Although this animated sequence looks dated by today's standards, the content (and its importance) is still just as relevant today. A tremendous amount of effort was put into this project, and those working on it were constantly aware of the relationship between the quality of the visuals and the accuracy of the data. One criticism I would make is that the scenes are often very 'busy', featuring lots of moving items and lots of different colours. Although this means there is more to look at, it can also make the shots somewhat confusing, as there is no clear focus. "The Inner Life of the Cell" can be seen below;



BioVisions have also continued working on molecular animations, with their latest video titled "Powering the Cell: Mitochondria" (a clip can be viewed here). This video is a significant update to the one above, primarily thanks to the improvements in technology over the last four years. Although the concept is the same, the video has been output in high-definition, and this is definitely a noticeable improvement. The visual style has also 'quietened down' somewhat, and is much more pleasing to the eye, as can be seen in the image below;


Moving away from this type of visualisation, I am particularly fond of "Nature by Numbers", created by Cristobal Vila. This is an expertly crafted piece of work, and focuses on how nature is driven, at its core, by mathematics. The content is of excellent quality, and there are segments where it appears that some sort of dynamics system has been used to drive the animation - something I am currently developing skills in. The overall look of the video has a very polished feel, something I would certainly hope to achieve by the end of my MSc programme! The video can be seen, in all its high-definition glory, below;



After looking at other examples of work out there, it is clear that a great deal of importance is placed on both the quality/accuracy of the data and the appeal of the visuals. Finding this balance, however, can prove difficult, and it is important for an artist to find an individual style which suits them. As mentioned in a previous post, this reinforces the importance of experimentation - practice makes perfect.

Saturday, 20 November 2010

Visualisation Techniques : Cells

Experimentation is often the key to success, and computer graphics are no exception...

Recently, I have been manually building a 3D scene from some mathematical data (due to problems using MEL to import this automatically), so I needed to start work on how the data could be represented.

This first data-set features cells which multiply over time, and also change colour (which represents their type, or stage). Previously I had already tested changing particle colours (early results of this can be found here) so it was time to experiment with how these spherical 'cells' might look.

Working separately from the data setup earlier, I started off creating a polygonal sphere and using a surface emitter to create the particles. I began by keyframing the emission rate, but then decided to output the required number of spheres and set the initial state (so that we did not see the creation of the particles). I created a simple three-point lighting setup, and animated a camera moving through a 90-degree arc - the scene was now ready for aesthetic testing.


This first example shows the particles rendered using the Blobby Surface render type. The Radius and Threshold were then keyframed and oscillated to create a moving, pulsing surface. Ideally, I wanted each particle to pulse individually, but I had difficulties in doing this. Using a Blobby Surface created a simple effect which required very little computational time at render - something which might outweigh the 'awkward' pulse effect when hundreds of cells are required (and can pulse at different intervals to each other).
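For reference, a minimal MEL sketch of this Blobby Surface setup might look like the following. The node names are illustrative, and the Radius keyframes are just an example of the oscillation described above, not the exact values used;

```mel
// Sphere to emit from, with a surface emitter feeding a particle object
polySphere -r 5 -name "cellSurface";
emitter -type surface -rate 200 -name "cellEmitter";
particle -name "cellParticles";
connectDynamic -em "cellEmitter" "cellParticles";

// Render the particles as a Blobby Surface (particleRenderType 7; Cloud is 8)
setAttr "cellParticlesShape.particleRenderType" 7;

// Radius/Threshold are per-render-type attributes, normally added via
// 'Add Attributes For Current Render Type' - added here via addAttr
addAttr -ln "radius" -at double -dv 0.8 "cellParticlesShape";
addAttr -ln "threshold" -at double -dv 0.5 "cellParticlesShape";

// Oscillate the surface by keyframing the radius
setKeyframe -t 1 -v 0.6 "cellParticlesShape.radius";
setKeyframe -t 12 -v 0.9 "cellParticlesShape.radius";
```

Because Radius and Threshold live on the particle shape rather than on individual particles, every blob pulses in unison - which is exactly the per-particle limitation mentioned above.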


This second example builds on the first, but uses the Cloud render type. This was combined with a Lambert surface, and used the same animated Radius and Threshold. The cell was animated to rotate on the XYZ axes, giving more visual variance. I preferred the effect created here, as it seemed more random, but there was not enough definition in the shadows or highlights, making the cell appear flatter than it actually is. Also, several 'holes' appeared in the surface, which was an unwanted effect.


This third and final example is a development of the second, and uses a Ramp shader instead of a Lambert shader combined with a Particle Cloud node. The Ramp shader used the 'glass' preset, and was recoloured to be more neutral. I found that this video looked the best, and gave almost a glass-like look to the cell, with visible shadows and strong specular highlights. When the cell turns green, the glass outer-casing becomes more apparent, something which contributed to the overall style of the cell.

Although great progress has been made here, and I particularly like the third example, it took considerably longer to render. Also, the cell still 'pulsed' in an unnatural fashion - something I would like to correct moving forwards.

Although more work still needs to be done, I am happy with the results so far, and it is always good to see progress being made.

Wednesday, 17 November 2010

Context & Review (Atmosphere/Mood/Design) : The Nightmare Before Christmas

Working individually this time, I was tasked with choosing a short film or clip that showed Atmosphere/Mood/Design in an interesting way.

After much deliberation, I decided to go with the 1993 stop-motion classic - "Tim Burton's The Nightmare Before Christmas". I decided to use the film's introduction, which featured the song "This Is Halloween", sung by many of the characters 'starring' in the movie. Unfortunately, the HD clip could not be embedded, so the SD version can be seen below;


Alternatively, if you absolutely must see the HD version, it can be found here.

Starting with the atmosphere of the clip, the story begins in an abandoned forest, with a narrator introducing the story (setting the scene). This use of narration immediately gives the film a fairytale atmosphere, and (almost) prepares us for the visual diversity and creative setting we are about to enter.

As we travel through the graveyard, one of the most prominent visual features is the carefully controlled lighting. There are lots of shadows and areas of darkness, which add to the anxiety of the scene, playing on the human fear of the unknown - there could quite easily be monsters hiding amongst the shadows. This effect is continued with silhouetted characters appearing on gravestones - we know they are monsters, but without details, our minds create stronger monsters than are probably there.

After this graveyard scene, we are then introduced to monsters we can see, although these 'conform' to familiar stereotypes - such as the monster hiding under the bed, vampires or werewolves. The careful use of lighting continues here, and spotlights are used to draw focus and lead us through the scene. Coloured lighting is also used to highlight particular characters or props (such as the well, which glows green).

All of these elements contribute to enhancing the atmosphere of the world we are entering, which is clearly a dark, gloomy, monster-infested graveyard.

However, the mood of the clip is very different, and this is primarily thanks to the cabaret-like style of the song and music. Although the lyrics describe the atmosphere, and introduce the monsters, they do so in a light-hearted way. Bearing in mind the likelihood of a younger audience, this ensures that the night-time graveyard does not become too scary, and encourages viewers to continue watching.

The design of the characters and locations is exquisite. Although a huge number of characters are introduced in such a short space of time, they are easily identifiable thanks to the great diversity in the character design. Also, all of the buildings are skewed and out of alignment, adding to the visual interest already created.

Most important, however, is the careful use of exaggeration. Using Jack Skellington as an example: he is a skeleton character, and although not proportioned as a skeleton should be, he is 'extra-lanky', which makes him more interesting to look at and certainly more memorable (he is definitely a well-remembered, iconic character in today's culture). Halloween Town's Mayor is another great example - he has different sides to his personality, represented by a head which has a face on each side, and rotates to a different expression as his emotions change.

One important point to consider here is that even though we have this extremely colourful musical piece, during which we meet our two main characters (Jack and Sally), neither of them sings. This contrast signals their importance, by showing that they are different from the rest of the characters.

Overall, I chose The Nightmare Before Christmas for many reasons, and without even realising what many of them were. After looking closer, it has only made me want to watch the rest of the film even more...

Sunday, 14 November 2010

Snow Is Falling...

After realising Gnomon's 'Dynamics' series wasn't going to be a good source of learning for me, I decided to give Digital Tutors a try. After a quick look through their material, I started on their 'Introduction to Dynamics in Maya' lessons (more information here).

Looking at the individual tutorials within this lesson, there was bound to be some overlap. However, I felt that it would provide a good opportunity to consolidate the learning I already have, and fill in any gaps in my knowledge (I find Digital Tutors to be extremely thorough in explaining features).

After completing the first 5-6 lessons, I decided to experiment with some colour techniques which will prove useful for my project with the University mathematics division. Part of this project will feature cells which change type, and each type is associated with a different colour - I needed to find a way to animate an object between colours effectively. More importantly, I needed to find a way to control the colour changes in particles, as these are more likely to be used moving forwards.


The first example shows some simple 'particle rain/snow' which has keyframed colour changes, and works very well. Although the particle effects are not what I'm looking for, I was testing colour here, and it has worked as hoped. Since I already have Lambert shaders set up with the colour changes, I also needed to find a way to use these, as particles use Particle Cloud shaders. Fortunately, I can simply plug the coloured Lambert into the colour input of the Particle Cloud shader, providing an additional level of control.
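As a quick MEL sketch of that connection (node names are hypothetical, and the red-to-green keyframes are just for illustration);

```mel
// Lambert with a keyframed colour change (red at frame 1, green by frame 50)
shadingNode -asShader lambert -name "cellColour";
setKeyframe -t 1 -v 1 "cellColour.colorR";
setKeyframe -t 1 -v 0 "cellColour.colorG";
setKeyframe -t 50 -v 0 "cellColour.colorR";
setKeyframe -t 50 -v 1 "cellColour.colorG";

// Plug the Lambert's output into the Particle Cloud's colour input
shadingNode -asShader particleCloud -name "cellCloud";
connectAttr -f "cellColour.outColor" "cellCloud.color";
```

The nice thing about this approach is that the animation lives on the Lambert, so the same keyframed shader can drive both surfaces and particle clouds without duplicating the keys.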


The second example shows something a bit more fun. After my experimentation in the first video, I realised I could make some decent-looking snow. After modelling a basic landscape, with a hill and some simple trees, I created a snow particle effect, with a small amount of randomness applied to it. After creating this short clip, it really made me wish there was snow outside!