
Monday, 25 July 2011

Growth Compositing

Over the last few weeks, I have been trying to finish compositing "Growth".

As mentioned in a previous post, I have been working in Nuke to put all of the render layers together and adjust all of the visual attributes. Growth uses a variety of passes (rendered out of Maya), and these are shown below;

growth_f350_bgLayer

growth_f350_baseLayer

growth_f350_xrayLayer

growth_f350_glowLayer

growth_f350_matteLayer

growth_f350_dustMatteLayer

growth_f350_depthLayer

I was initially having problems adding depth-of-field (my frames were acquiring 'stepped' edges) in Nuke, but as part of our masterclass with Hugo Guerra, I was able to get Hugo's help in fixing this (well, it couldn't be fixed, but we found a way to work around the problem!).

Beyond this, I was able to get some great feedback from two of my classmates on how to improve the final output of my work.

Mark Haldane suggested that I 'grade' the cells, as they could do with a bit more contrast/definition.

Matt Cameron suggested that I should try adding motion blur and chromatic aberration.

At this stage, I had already created Nuke scripts for each of the shots, which would combine all of the render passes, add a background and calculate the depth-of-field.

I then created a second Nuke script, which would take this first rendered sequence and add the chromatic aberration, motion blur, and film grain.

growth_nukeScript
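The chromatic aberration step boils down to offsetting the colour channels against each other. As a minimal sketch in plain Python on nested RGB tuples (an illustration of the idea, not the actual Nuke node graph):

```python
def chromatic_aberration(pixels, shift=1):
    """Offset the red channel one way and the blue channel the other by
    `shift` pixels, leaving green in place - a crude approximation of
    the colour fringing a real lens produces."""
    height = len(pixels)
    width = len(pixels[0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            # Clamp sample positions at the frame borders.
            rx = min(max(x + shift, 0), width - 1)
            bx = min(max(x - shift, 0), width - 1)
            r = pixels[y][rx][0]
            g = pixels[y][x][1]
            b = pixels[y][bx][2]
            row.append((r, g, b))
        out.append(row)
    return out
```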

The image below shows one composited frame, after it's been through all of the stages outlined above;

growth_f350_comp

Compositing has definitely transformed the outcome of my project. Although my main abilities are in 3D, I also realise the importance of compositing and how being able to use these additional skills can improve the presentation of my 3D work.

In the case of "Growth", the advice I have received has definitely been good, and is helping me to take my 3D work to the next level!

Wednesday, 20 July 2011

Growth In Stereoscopic?

Growth in stereoscopic? Wouldn't that be interesting...

After creating simple stereoscopic/anaglyph sequences previously (here and here), I decided that the dynamic flowing camera moves that I created for Growth would really take advantage of stereoscopic 3D. These shots are designed to view the cell growth and development from unusual close-up angles, and are animated to 'fly' close to the cell surface.

After speaking to my classmate Mark Haldane (who has previously created anaglyph material) and getting some feedback on previous stereoscopic efforts, I decided I wanted to test how it could look, so started work on setting up a still image.

I tried to enhance the amount of depth created by the stereoscopic camera setup, as there was not enough previously. Because some of my shots had cells moving off the edges of the frame, I wasn't sure how well they would work, but Mark suggested a way I could work around this problem.

Below are two images; the first showing the anaglyph effect applied to the original cell style I was using, and the second showing the effect applied to the new cell style (as a comparison).

growth_anaglyph_before

growth_anaglyph_after

With the Masters show opening just over a month away, I have started thinking about what content will form part of my show. Stereoscopic material is something I would really like to exhibit, but this comes secondary to completing Growth and its 'Making Of' film.

Depending on my time, I will continue to experiment with anaglyph renders, and hopefully have something worth showing!

Sunday, 17 July 2011

Growth Visuals

Now that the timing and content are close enough to being finalised, I have spent some more time polishing the visual style of the cells.

Feedback has shown that the visuals are a bit 'murky' at the moment, and there needs to be more definition - something to make the cells 'pop' and stand out. The cells move in a really unique and interesting manner, so it is important that this is the focus of the viewer (so that all my hard work can be seen!).

My project supervisor suggested I take a look at a DVD in the library by Jeremy Vickery, titled "Practical Light and Colour" (more information here). This DVD discusses the fundamentals of light and colour, and how a better understanding can improve practical work. If anything, it serves as a fresh source of ideas and inspiration, which will hopefully develop my cell aesthetics and drive towards the best outcome possible.

After watching this DVD, I started experimenting with colours and light in Maya, trying to develop something which was much more interesting, less murky, and had more focus on what I wanted the viewer to see.

Below are two renders; the first showing a still from Growth before this DVD/experimentation, and the second showing a new version, which uses entirely new lighting, colours and Maya shaders to give a different style completely.

growth_visuals_before

growth_visuals_after

I have yet to test this new style on an animated sequence, but I definitely prefer the clarity that this updated version offers. The cells have significantly more contrast (against the background) and are less flat - by adding an edge glow, they really draw the viewer's attention.

Thursday, 14 July 2011

Growth Progress Update

Since my previous post, I have continued working almost exclusively on 'Growth'.

All of the cell visualisation stages in Maya are completed (e.g. scripting, dynamics and render setup), and all the camera work has been (pretty much) finalised. Most of the rendering has now taken place, and I have spent a large amount of my time compositing the different shots and passes, using After Effects.

This uncovered several problems in the 3D work, such as dust particles floating in front of the camera, or 'through' the cells (which looks like they are being absorbed). These issues have taken longer to fix than expected, as it has meant rotoscoping out certain areas of some image sequences (primarily the depth passes). Although this is a time-consuming fix method, it is certainly much faster than recreating the particles and their movement in Maya, and then re-rendering several layers to be composited again.

One of the main issues I am currently experiencing is the use of depth of field in my shots. I want a really shallow depth of field, giving the correct impression of (macro) scale. After Effects has a feature called Lens Blur, which allows me to composite using Maya's depth pass. Unfortunately, this feature is quite limited in terms of the amount of effect that can be applied, and isn't enough for what I am trying to achieve.

Because of this, I have started setting up one of the shots in Nuke. I tested this by using After Effects to render out a composited sequence, then applied the depth using a ZBlur node in Nuke. I was much happier with the results Nuke gave (and the 'focal-plane setup' feature), so I have decided to rebuild the shots properly, using only Nuke.
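The focal-plane idea these depth-pass tools work from can be sketched as a per-pixel blur radius driven by the depth value - a simplified model of depth of field, not Nuke's actual ZBlur implementation (the `falloff` and `max_radius` values are illustrative):

```python
def blur_radius(depth, focal_distance, max_radius=20.0, falloff=0.05):
    """Return a blur radius (in pixels) for one sample of the depth
    pass: zero at the focal plane, growing as the sample moves away
    from it, clamped to max_radius. Larger `falloff` values make the
    depth of field feel shallower."""
    radius = abs(depth - focal_distance) * falloff
    return min(radius, max_radius)
```

Samples sitting exactly on the focal plane stay sharp, while everything nearer or further blurs progressively.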

Since most of the composition is blocked out and timed in After Effects, I will render each shot entirely out of Nuke, and take the completed shot sequences back into After Effects (which is the most straightforward way of working for me). The previous alternative would have been to create the shots in After Effects, render out composites for Nuke, apply the depth and tweak the shots, then bring them back into After Effects - a much more complicated process. I still want to complete the final project in After Effects, as I am much more comfortable laying this out on a timeline (which is already done anyway).

As part of my testing, I have generated three versions of the same frame to show the visual differences. The first is the original composited shot, rendered out of After Effects. The second image shows the outcome of using After Effects' Lens Blur to add depth of field. The third image shows the outcome when using the ZBlur node in Nuke.

growth_dofTest_original

growth_dofTest_ae

growth_dofTest_nuke

One thing that became apparent using Nuke is that, as well as letting me apply depth of field in the way I wanted, it created less banding in the rendered TIFF files. Banding is something I have struggled with so far (because of the subtle variances in colour), so this is another advantage of using Nuke!

Moving forwards, I plan to continue compositing the shots initially in Nuke, and then bring everything together in After Effects. This should be the best way of achieving the look I want, and avoiding the banding that was present in After Effects.

The next step beyond this is starting work on a 'making of' video (for the degree show), which will contextualise the content of Growth, and provide more information on the methods used during its creation.

Tuesday, 5 July 2011

Growth

Over the last two weeks, I have been developing the concept for the final outcome of my Masters project. This will take the form of a short video (approximately 3-4 minutes in length) and is intended to showcase some of the cell visualisation work I have completed. In addition to this, I plan on creating a second video which will be a technical showcase of my skills and abilities and also contextualise the work I have completed.

Although I don't want to give away too much, the main piece will be split into two halves, titled Growth. My title concept can be seen below;

growth_titles

As a sneak preview, the video below shows one of the shots I am currently working on in After Effects. Using the VERL render farm, I have already generated most of the content I plan on using (in 1920x1080 lossless TIFF format) which gives me a great deal of flexibility. The original shot did not feature any depth of field and looked flat and uninteresting, whereas this updated version has much more style and visual interest;


So far, I haven't decided if I will include the dust motes/particles idea which I previously worked on. Although I have considered this option, I don't have any sequence renders which I can use, so I took a still frame and applied the effect, as a test. This frame test can be seen below;

growth_04_cameraMoves_1000

Moving forwards, I will continue developing the look and feel of these shots. I am also currently looking at using music to add impact to these shots, but this is an ongoing project and I plan on trying to finish work on the visuals first, so that I can get a good 'feel' for the experience I want to create.

Wednesday, 29 June 2011

Composited Sequence

As part of my cell visualisation project, I have been working on a second set of data, which also represents the growth and development of cancer cells. This dataset, however, focuses on the stage each individual cell is in, with the intention of identifying the most effective time for treatment to take place.

With thanks to one of the mathematicians working in this area, I was provided with a large quantity of data (around 33 million lines of numerical information) which had been generated, and could be used to drive my 3D visualisation. I created a MEL script which could 'read' each section of this data and then generate/animate the appropriate objects in 3D space.
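The original script was written in MEL, but the read-and-group stage can be sketched in Python. The data layout used here (one line per cell per frame: `frame cell_id x y z radius`) is a hypothetical example for illustration, not the actual format of the research data:

```python
def parse_cell_data(lines):
    """Group raw data lines into per-cell keyframe lists, ready to be
    turned into animated objects (in Maya, each entry would drive an
    object's position/scale keyframes on the given frame)."""
    cells = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines in the data file
        frame, cell_id, x, y, z, radius = line.split()
        cells.setdefault(int(cell_id), []).append({
            "frame": int(frame),
            "position": (float(x), float(y), float(z)),
            "radius": float(radius),
        })
    return cells
```

Grouping by cell first means each object only needs to be created once, then keyframed per entry - which matters when the input runs to millions of lines.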

After completing this stage, I then set up the scenes for rendering - adding lighting, placing cameras, adjusting render settings and creating render layers. Using the VERL render farm, I generated approximately 33,000 renders which could be composited together to create a completed 3000 frame sequence (depicting 600 hours of cell growth and development).

Using After Effects, I combined all of the image sequences and created several individual compositions which would be layered to create the final output. Because almost all of the work had been completed in Maya, there was very little that had to be done in After Effects, except combine the appropriate layers. Rather than generate a QuickTime MOV, I rendered a master image sequence (after speaking to one of my classmates) which could then be used to create suitable video files as and when needed.

Because the source of my output comes from unpublished mathematical models, I am unable to post any of the video online (it would be unfair to the ongoing research still being carried out). Instead, I have opted to include an image of a single frame, which can be seen below;

g_perspTop_2292

Over the next few weeks, I will be meeting with the mathematician that I created this for, as there are several upcoming maths conferences and events where my work could be used to illustrate the research they are carrying out.

Saturday, 18 June 2011

Going Stereoscopic

Throughout this week I have been working through some Digital Tutors content, titled "Stereoscopic 3D in Maya" (more information here).

This course has been designed around generating material which can be used to create 3D images and videos (in my case, using the anaglyph method, which is viewable using red/cyan glasses). There was a lot of instruction regarding "safe" 3D which follows an accepted set of rules, and is designed to ensure that the output will not be uncomfortable to view.

Fortunately, Maya's stereoscopic toolset had time to mature before I started using it, making life significantly easier - some helpful features included the ability to adjust interaxial separation and the zero parallax value (which can be visualised using a coloured plane relative to the stereoscopic camera), as well as showing the 'safe' area for objects to be placed within. There is also a preview mode which allows you to view/playblast anaglyph material before you commit to rendering. The rendering process is also relatively straightforward (and doesn't differ much from normal rendering), as Maya can batch render multiple cameras (centre, left and right).
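Maya handles this internally, but the relationship between interaxial separation, the zero parallax distance and perceived depth can be sketched with a standard off-axis approximation (a simplification for intuition, not Maya's exact maths):

```python
def screen_parallax(depth, interaxial, zero_parallax):
    """Approximate on-screen parallax for an off-axis stereo camera
    pair: zero at the zero-parallax plane, positive (appearing behind
    the screen) beyond it, negative (in front of the screen) closer
    than it."""
    return interaxial * (1.0 - zero_parallax / depth)
```

This is why objects drifting too far in front of the zero parallax plane become uncomfortable to view: the negative parallax grows quickly as depth shrinks.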

The Digital Tutors content also gave a good overview of how to combine the left and right images, and colour them appropriately using both Photoshop and After Effects - ensuring that I could apply these techniques to my own work.
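The basic red/cyan combination can be sketched as a per-pixel channel merge, assuming matching left/right eye renders as nested RGB tuples (real anaglyph recipes often also remap or desaturate channels to reduce ghosting):

```python
def combine_anaglyph(left, right):
    """Build a red/cyan anaglyph from matching left/right eye renders:
    the red channel comes from the left eye, the green and blue
    channels from the right eye."""
    out = []
    for left_row, right_row in zip(left, right):
        row = []
        for (lr, _lg, _lb), (_rr, rg, rb) in zip(left_row, right_row):
            row.append((lr, rg, rb))
        out.append(row)
    return out
```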

After completing the Digital Tutors course, I wanted to experiment with implementing these stereoscopic techniques into my workflow, so I started off with a basic test - a static scene with 5 cubes, randomly rotated and placed at different depths from the camera. The left and right eye renders were composited in Photoshop and can be seen below (don't forget your 3D glasses!);

cellVis_anaglyphImageTest

The next step was to test an animated sequence in After Effects. I created a new scene, with a cube rotating on multiple axes, a sphere moving forwards and backwards (along the Z-axis) and a pyramid rotating on the Y-axis. I chose these shapes and types of movements as it would allow me to see how well each of the different types of motion would work when finished. The completed video can be seen below;


I then chose to add a stereoscopic camera to one of my existing cell visualisation scenes. Unfortunately, when I first rendered this, I realised the cell material was almost black and therefore lost most of its colour (and therefore depth). I modified the shader to use 50% grey, which worked significantly better. The following two images show a still frame taken from my cell visualisation project (the first image is stereoscopic, the second is 'flat' for comparison);

cellVis_mayaAnaglyphTest

cellVis_mayaAnaglyphTest_flat

So far, I have found creating anaglyph images mostly straightforward (thanks to Maya's built-in tools, which make it much easier). I have learned a huge amount about the different types of stereoscopic 3D, and the rules that should be adhered to. Moving forwards, I would definitely like to try and apply these techniques to an animated version of the cells growing, although this will require significantly more render time to test... fortunately the render farm is working and I can take advantage of this again!

Thursday, 16 June 2011

Show Your Working (1)

In this post, I wanted to show some of the steps that I undertake when developing my visual ideas.

The following images detail the different stages of progress when developing a new style for my cell visualisation project. Although I only discuss the key stages here, each stage featured multiple iterations (some had between 10 and 20 versions), with subtle differences (particularly when adjusting the 3D elements).

cellVis_shotDev_01

Image 01 shows the initial idea created (quickly) in Photoshop. I was working on something else at the time, and this idea had to be 'written down' so that I could develop it fully later. I created a simple version in Photoshop using a radial colour gradient and basic spheres (to later be replaced with my cells). A simple white highlight was painted onto the spheres, blurred, and then lightened to suit.

cellVis_shotDev_02

Image 02 shows a developed version of this 'sketch'. The background has been enhanced in Photoshop, by adding a cloud effect to break the uniformity. The handpainted spheres have been replaced by 3 spheres created and rendered in 3D, using a customised/tweaked Blinn shader.

cellVis_shotDev_03

Image 03 is almost the same as the previous version, although with the addition of an HDRI environment sphere (to create more interesting reflections). By adding the HDRI, both the scene and lighting become more believable, and less 'flat'. A subtle vignette was also added to the image, to help direct the focus of the audience.

cellVis_shotDev_04

Image 04 applies the new visual style to the 3D cells, whilst using the same background used in image 3 (as this required no further development).

cellVis_shotDev_05

Finally, Image 05 adds a whiter outline to the cells for clarity (by adding an 'xray' render pass over the top during compositing). Pre-rendered particle dust has also been added, although I am unsure if this will be included in the final outcome.

The last stage I would normally complete would be to transform this image into an animated version. Although I have built the final scene using After Effects, I have been unable to batch render the Maya files, as the render-farm has not been working. Once I am able to render, I will post the animated outcome of this concept's development.

Tuesday, 14 June 2011

Alternative Styles (2)

Continuing with my development of the visuals that will be used as part of my cell visualisation work, I created another example of how the cell growth could look.

This time, however, the example is intended to be used solely as a still image - a large, high-resolution print. By removing the animation, it creates an entirely different outcome, and allows me to present my work using different techniques and media.

This particular approach also continued the idea of negative space, with the actual cells not being the main focus (as first mentioned here).

The image was created primarily using a matte pass out of Maya (to create an outline/silhouette version of the cells). The 'eclipse' effect was added using both inner and outer glow in Photoshop, with a second silhouette on top, and then using a black-to-white gradient mask to make the eclipse stronger on one side.

A lens flare was added (for dramatic effect) and all layers were merged and desaturated. To break up the 'perfect-ness', a filter-generated cloud texture was added on top of everything (using Overlay as a blending mode), and then colour was added using a filled Colour layer.

Finally, my "xray" pass out of Maya was added on top (very faintly) to bring out some detail of the cell surface. The final result can be seen below;

cellVis_silhouetteFlareTest_1900

Although this version has been created at a resolution of 1920 x 1080, my aim would be to create a significantly higher-resolution version for print (if I choose to go down this route). However, I have had problems opening large images rendered out of Maya - previously, I had a 12000 x 8000 pixel version, but Photoshop could not open this file. I currently have a TIFF render image which is 6000 x 4000 pixels, which I plan to start working with soon.

Friday, 10 June 2011

Alternative Styles (1)

As part of the visual development of my project, I have started to explore different styles which could be used to present the outcomes of my technical development.

Using the final frame (1900) of one data-set, Adobe Photoshop was used to create several concepts, each of which present the data in slightly different ways. To highlight the differences, this first image shows the style of a very early render, followed by a more recent version (achieved mainly through compositing - more information can be found in a previous post here);

cellVis_v106_meta_hou

cellVis_data1_cells_locators_comp

The first of these newer examples (below) tries to maintain simplicity. Although similar to the previous examples, there is nothing going on except the cells growing and expanding, forcing the viewer to focus on this and only this.

I have also chosen to remove the blue colour from the cells, and instead apply a saturated grey colour - this colour choice is much darker and aligns itself more with the fact that it represents the growth of cancer, a potentially fatal condition. This new colour also suits the new background better, whilst still being clearly visible (due to the whiter edges on the cell surface).

From a technical perspective, the gradient background was removed, as this caused problems with banding once compressed. By adding a 'cloudy' background with a small amount of animation (as a final sequence could have) it would minimise visual defects. A subtle vignette was also added, helping to maintain the viewer's focus in the middle of the frame;

cellVis_compColourTest_1900

The next example uses the same cell appearance, though with a different background. The background style was inspired by the images here, although it has been modified slightly to suit my own tastes. This contrast between the cells and the background makes the content stand out more - also, the background could potentially be animated (the 'light bursts' could move and/or change colour over time);

cellVis_silhouetteBurstTest_1900

Finally, I wanted to toy with the idea of using negative space (particularly as cancer is something which cannot be 'seen' naturally). So far, the styles created have concentrated on what we can see - instead, I wanted to create an image with a focus on what we can't see (similar to using a silhouette cut-out). The idea behind this third attempt is that the background bokeh-style effects could be animated and moving around slowly, while the cell growth would still be happening at the front of the frame. I have also previously conducted experiments which could enable this background to gain motion (found here).

cellVis_silhouetteBokehTest_1900

These ideas have been developed as still images, although they could also be created as animated sequences. By creating images first, it acts as a 'prototype' and lets me see how the final result could look. Moving forwards, I plan on creating more images first and fully exploring the different looks that could be achieved. Once a final style (or more than one) has been chosen, I can then create that as an animated sequence.

Thursday, 5 May 2011

Having Fun!

In terms of technical development, my knowledge of 3D software, scripting skills and problem solving abilities have surpassed those that I need to be able to complete the projects I am currently working on.

This has given me the time and opportunity to focus on visual experimentation, bringing a bit more fun back into my work, and making it more interesting than searching through pages of MEL commands!

I have been experimenting with using the skills gained in the Going Live module, to enhance the output and presentation of my previous cell visualisation work - using my skills as a digital artist.

Starting with a previous data-set, I adapted one of my scripts to create locators instead of spheres. I then created a simple particle system and used a modified version of a script provided by the external examiner to 'attach' the particles to the locators. This meant that I could use Maya's own 'metaball' system - not strictly metaballs, as it is a particle render type called "Blobby Surfaces", but it gives a similar effect. The image below shows a beauty render of the blobby surfaces;

cellVis_data1_cells_locators_original

Once this model had been created, I started to experiment with shaders. After reading some articles in this month's 3D Artist and 3D World magazines, I created an MIA mental ray shader, added a mental ray fast skin shader (normally used for subsurface scattering), and adjusted the colours and attributes to create a suitable look.

I added lighting in the form of two area lights, which used the mental ray area light options to transform from squares into cylinders, 'wrapping' around my geometry. Decay was set to quadratic (to create more accurate lighting) and the intensity of the lights was increased significantly (around 4500 each).

The next stage was to incorporate dust motes floating around. This is something I could imagine in my head, but was not sure how to implement. I looked at adding this in post-production, but although this could have been quicker, it did not provide enough control (or use three dimensions). I created a new scene file and, using a particle emitter, created a particle 'explosion' - the forces were then zeroed out, so that I had a static particle cloud. I added my own gravity and turbulence fields, and tweaked these until I had the movement that I liked.
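The per-frame behaviour of that setup can be sketched as a simple update step: start from a static cloud, then apply a weak gravity pull plus random turbulence each frame (the field strengths here are illustrative, not the values used in Maya):

```python
import random

def step_dust_motes(motes, gravity=-0.001, turbulence=0.01, seed=None):
    """Advance a static particle cloud one frame: a weak downward
    gravity plus per-axis random turbulence, mirroring a zeroed-out
    explosion with gravity and turbulence fields added."""
    rng = random.Random(seed)
    out = []
    for x, y, z in motes:
        out.append((
            x + rng.uniform(-turbulence, turbulence),
            y + gravity + rng.uniform(-turbulence, turbulence),
            z + rng.uniform(-turbulence, turbulence),
        ))
    return out
```

Because the turbulence is small relative to the cloud's spread, the motes drift gently rather than scattering, which matches the slow floating look described above.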

Finally, I set up render layers to output the passes I wanted - an MIA shader pass, a second MIA pass with an outline-style shader, and a separate pass for dust motes. After rendering a single frame, I moved into Photoshop and started experimenting with compositing these passes together to create the look I wanted. I also added some fake bokeh effects in the background, coupled with some randomly generated cloud textures. The final image can be seen below, looking entirely different to how it first started (above);

cellVis_data1_cells_locators_comp

At this stage, I wanted to make sure that I could recreate this look with image sequences, so I started work in After Effects. Fortunately I was able to mirror this image in video form, and can swap in the rendered image sequences when finished. By working in AE, I realised that I would need to add a matte pass for the cell geometry. Below, a short video shows the breakdown of how this shot was constructed, and although static, shows how a final video could look;


I have thoroughly enjoyed this experimentation, and I have created something I am really happy with - something very different to the first attempts (which can be seen in an earlier post here). Although I don't yet see this as a finished piece, I can already see ideas developing, and it is good to try new techniques and methods of presenting the same mathematical data... more importantly it is good to get back to being an artist, something that I did not realise I missed until now!


Sunday, 17 April 2011

Visible Progress!

Over the last couple of weeks, my role in the Going Live project increased significantly, and then stopped completely. All of the animated shots had lighting added, and then I added the render layers/passes and started feeding completed shots through the render farm (which was considerably faster than I expected it to be). Sound effects and music were then added by the sound team, creating our finished advert.

It took a long time to get there, and there were problems along the way, but I learned a lot (particularly about rendering and compositing) and I am glad we all got there in the end! Next week, we are due to meet with the company in their London studio and present our finished project - hopefully the feedback will be good!

As for my cell visualisation work, this has been making good progress since my role in Going Live has lessened.

The first data-set I was working with, which represented cells and fibres in 2D space, now has fully working scripts, which are streamlined to work efficiently (or to actually work at all!). I am currently awaiting feedback on the video outcome of this work, so that I can decide where to take this next.

The other data-sets (involving cells, blood vessels, and oxygen density maps) have made even better progress. Again, after optimising my MEL scripts, the amount of data (several million lines of information) has become manageable, although time-consuming to process. I am currently part of the way through 'translating' this data into Maya's 3D environment.

An example render from the cells file can be seen below. This example frame is approximately two thirds of the way through the cell data, and incorporates some 'noise' on the cell surfaces to break up the uniformity (an idea suggested by the mathematician who provided the data);

cellVis_g_testPasses

As for the oxygen density, I decided to continue using a single polygonal plane for this, with grid points in the data having a matching vertex on the 3D geometry. The data then lifts/lowers each grid point/vertex between 0 and 1, where 1 is the most dense area of the oxygen 'clouds'.

The 'look' of these clouds is then controlled using one of two shaders.

Shader 1 ("Clouds") is coloured white, and uses a vertically-aligned ramp shader for its transparency value, where 0 is fully transparent and 1 is fully visible. This means that as points on the vertex grid are moved in the Y-axis, their transparency also changes (as they are moved higher, they become more visible).

Shader 2 ("Bands") expands upon this idea, and uses a second ramp for the colour (from blue to red, low to high). The transparency ramp is also 'sliced' into bands which are evenly spaced vertically - this means that only the narrow bands are visible, giving us slices of colour (where the colour is defined by where the slice falls on the colour ramp, rather than a fixed colour). This gives a result similar to the high/low pressure bands which weather presenters often use, but with colour added.
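The two transparency behaviours can be sketched as functions of vertex height (the band count and thickness here are illustrative, not the actual ramp settings):

```python
def cloud_visibility(height):
    """Shader 1 ('Clouds'): visibility ramps linearly with vertex
    height, so fully lifted points (1.0) are fully visible and flat
    points (0.0) are fully transparent."""
    return max(0.0, min(1.0, height))

def band_visibility(height, bands=10, thickness=0.2):
    """Shader 2 ('Bands'): only narrow, evenly spaced horizontal
    slices are visible, like contour lines through the density field.
    Returns True when the height falls inside a band."""
    position = (height * bands) % 1.0
    return position < thickness
```

In the actual shader, a point inside a band would also sample its colour from the blue-to-red ramp at that height, giving the coloured contour effect.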

I have included a video below, which better explains these shaders - the white 'cloud' is shader 1, and the coloured 'bands' are shader 2;


Although this video shows a top-down view of the scene, it is important to remember that these effects are generated in 3D - moving forwards, I could include a moving camera or changing points of view to highlight particular events.

Also, the oxygen density visuals are considered another 'layer' which I can add to the cells and blood vessels, creating a more complete final output.

I am not sure as to how this final output will look at the moment, as I am still developing the visual elements of each of the data-sets, but progress is good and things are at least working now...

Tuesday, 1 March 2011

Snake In The Grass

...more specifically, a Python.

Continuing on from my Cinema 4D experimentation with metaballs, I abandoned that line of testing, as it wasn't going to be a viable option for the large number of cells I was dealing with. Alongside Digital Tutors, I returned to Python scripting, trying to 'translate' my MEL script into something usable inside of Houdini.

After almost a week's worth of scripting, fixing things, and re-scripting, the Python/Houdini version of my data-loading script works, creating all of the necessary nodes in Houdini and keyframing all of the animation. This allowed me to natively create metaballs, rather than try to convert existing scene information. Houdini handles metaballs exceptionally well, adjusting the viewport geometry on the fly (so it doesn't crash regularly like Cinema 4D). Using Mantra, I was able to render out the full 1900-frame image sequence, with frame 1900 shown below;

cellVis_v106_meta_hou

Although I am still learning to use Houdini, I have made breakthrough progress in using the software alongside Python, opening up a whole range of new opportunities. I have started trying to figure out shaders and lighting in Houdini (which seems more difficult than expected), and I created some example 'looks' for my meta-surface, shown below;

cellVis_v106_metaTextures_hou

I am due to meet with the mathematics department this week, where I will present all of my research, ideas, and generated media - bearing in mind that they have not seen any of the results so far. I am hoping that this meeting will help inform my next steps, and direct my technical understanding towards a new visual solution.

Tuesday, 22 February 2011

Going To The Cinema

This morning, I stumbled across a fantastic website, called Molecular Movies. It describes itself as a 'portal to cell and molecular animation' and aims to provide scientists with tutorials on developing 3D visualisation skills.

Alongside tutorials, the website has a showcase section, with examples of 3D being used to visualise complex biological scenes - although these are interesting to watch, the scientific content is a bit beyond my own knowledge. Fortunately (and more importantly), it is useful to have a 'database' of the kind of work that is going on in my field.

Returning to the learning resources, I realised I had already covered the majority of the content, as it was designed for those new to 3D. That is, until I found an interesting article on using metaballs to create molecular surfaces in Cinema 4D. Although I was not familiar with C4D, I decided it was worth looking at, as it was going to be less technically challenging than learning Houdini (which also relies on Python), and should give a similar result.

Creating metaballs in C4D was fairly straightforward, and gave great results with very little input. It took a bit of time to figure out how to animate objects and then render a scene, but after using the software, I would definitely be confident using it again. The geometry was also significantly 'cheaper' than in my RealFlow testing, and could be used on a large scale. A video example of metaballs in action can be seen below;


The next step was to take my cell data-set and use metaballs to create a single organic structure - this proved substantially more difficult. Realising that I couldn't simply 'read' my data, as C4D also relies on Python, I opted to export both OBJ and FBX files from Maya, hoping that at least one would work.

These imported easily, and a metaball surface could be applied, but did not work properly (simply creating one large sphere). After vast amounts of experimentation, I scaled the imported objects... and success! A simple fix for what seemed like a complicated problem. This breakthrough meant that I could now take objects into Cinema 4D, use metaballs to create a surface, and either render, or export to Maya for render (as I am more familiar with the software package).

The only difficulty now is that this workflow currently only works on single frames, and can't be applied to my animated data-set... this is the next problem to solve! A render of the results so far can be seen below (showing a before/after comparison);

cellVis_v106_meshTest_c4d

After my previous post, I felt like I had hit a brick wall, but discovering the Molecular Movies website helped me to hurdle these difficulties... only to find a new hurdle waiting on the other side!

Monday, 14 February 2011

Visual Experimentation

After figuring out MEL scripting, and using it to create an entire Maya scene for me, I chose to begin experimenting with compositing further.

This involved setting up render layers in Maya and testing how they work - creating different layers for specular highlights, shadows, depth passes, etc. After a bit of tweaking, I am confident that I will use this moving forwards.

These render layers were then taken directly into Nuke as image sequences, where I practised reading and writing files, merging nodes, and colour correction and grading - all of which will be relevant to my own work, and a crucial stage in presenting it to a professional level.

Simulating depth of field formed an important part of my testing, as this would be used alongside my cell visualisation work - giving the impression of viewing tiny microscopic objects. A short render of a composited file in Nuke can be seen below;


After ensuring I could work with render layers and depth of field, I started to look at possible textures to use with some of my cell visualisation work. Examples of these can be seen in the image below;

cellvis_v106_texturetest1

Although there is still a lot of work to do, my visual style is developing, and I am hoping to continue this experimentation, with a focus on adding colour. More important, however, is how I use these attributes to present my work to a specific audience - this is something I will have to discuss with the mathematicians, as I will need to define their audience before I can design for it.

In addition to continuing my cell visualisation work, I have carried on developing my skills in using RealFlow. Below is an example of the type of work I have been undertaking;


I am still planning on learning how to use Houdini - primarily because my scripting has come to a halt in Maya. Within Maya, individual particles (as part of a particle field) cannot be manipulated as they have no Transform node. This means that I cannot use my MEL script to automatically generate and keyframe a particle field. Although I do not know much about Houdini, it seems more technically advanced, and I hope that it will help me take my script to the next level.

Wednesday, 2 February 2011

Biomedical Visualisation

With a new year, comes new ideas and refreshed inspiration!

Over the last couple of weeks, I have been developing my MEL scripting abilities - something necessary if I want to work with importing numerical data into Maya. Although daunting at first, things have gradually started to make sense, showing the logical development of the stages involved in trying to achieve my goal.

Last week, I had a major breakthrough in using MEL, and was able to create a script which would read one of the mathematical data-sets. The script works on a line-by-line basis, reading comma-separated values and placing those into individual variables, which are then used to create objects and keyframe animation. Upon running the script, all actions are automated and require no input from the user - this simplifies the process and greatly speeds up the creation of a Maya scene. It also ensures that no mistakes are made, as long as the script is correct and the data is formatted consistently.

An example of the type of data being used can be seen below;

ExampleData
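The parsing logic at the heart of the script can be sketched as follows. My actual script is MEL, so this Python version is purely an illustration, and the column layout (frame, cell ID, position, radius) is an assumed format, not the real data-set;

```python
# Illustrative Python sketch of the line-by-line data-loading logic.
# The original script is MEL; the column layout (frame, cellID, x, y, z,
# radius) is an assumption for illustration, not the actual data format.

def parse_cell_data(lines):
    """Read comma-separated values line by line into per-cell keyframe lists."""
    cells = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        frame, cell_id, x, y, z, radius = line.split(',')
        cells.setdefault(int(cell_id), []).append({
            'frame': int(frame),
            'position': (float(x), float(y), float(z)),
            'radius': float(radius),
        })
        # In the MEL version, each row drives scene-building commands
        # (creating a sphere per cell, then keyframing its attributes).
    return cells

sample = [
    "1,0,0.0,0.0,0.0,1.0",
    "2,0,0.1,0.0,0.0,1.2",
    "1,1,5.0,2.0,0.0,0.8",
]
cells = parse_cell_data(sample)
print(len(cells))  # one entry per cell ID
```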

The purpose of this script is to convert raw numerical data into something visual, built in three dimensions. Currently, the script uses simple polygonal spheres to represent cells, although in the future this could be changed to use particles, or something different entirely. An example render of the script's output can be seen below - the results are dramatically different to looking at thousands of lines of numbers;

ExampleRender

At this stage, I was confident in my MEL-writing abilities, so I created two more (similar) scripts which would work with the other mathematician's data-set. This data is entirely different, however, as the cells are placed in two dimensions and, instead of changing size/radius, they change colour based on a numerical value. This posed its own problems, as I would need a new shader for every cell if I wanted them to change colour individually. The scripts for this data work, but are still at an early stage - no renders have been produced as of yet.
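The colour-mapping part of this is simple enough to sketch: each cell's numerical value is turned into an RGB colour, which its own shader is then keyed to. The sketch below assumes a 0-1 value range and a blue-to-red ramp, both of which are illustrative choices rather than anything fixed in the actual data;

```python
# Sketch of mapping a per-cell numerical value to an RGB colour -
# the reason each cell needs its own shader is so this colour can be
# keyframed individually. The blue-to-red ramp and the 0-1 value range
# are assumptions for illustration.

def value_to_rgb(value, low=(0.0, 0.0, 1.0), high=(1.0, 0.0, 0.0)):
    """Linearly interpolate between two RGB colours by a 0-1 value."""
    t = max(0.0, min(1.0, value))  # clamp out-of-range values
    return tuple(l + (h - l) * t for l, h in zip(low, high))

# In Maya, the script would assign one shader per cell and keyframe
# that shader's colour attribute from values like these.
print(value_to_rgb(0.0))  # pure blue
print(value_to_rgb(1.0))  # pure red
print(value_to_rgb(0.5))  # halfway between the two
```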

Beyond being able to get data into Maya, I was then free to experiment with the visual aspects of representing the data. I tested several techniques which would be useful later on, including using render layers (for alpha channels and depth passes), adjusting camera settings in Maya (to create depth of field) and compositing render layers using both Nuke and After Effects (using short image sequences). Although familiar with compositing techniques in Photoshop, I wanted to familiarise myself with these methods when working with videos and image sequences.

As the final part of my experimentation, I started working with the first data-set in 3D, and added a camera and some basic lighting. I rendered 3 passes - 'beauty', alpha and depth - and composited each of these layers using Photoshop. After some experimentation, I realised that I was happy with the result, and would be confident in replicating the visual style using finished image sequences. The completed composite can be seen below;

cellRender

Although I am only 2 weeks into this semester, I have made tremendous breakthroughs in my own programme of study, particularly with using MEL to import numerical data. I hope to continue this progress throughout the semester!

Moving forwards, I would also like to continue developing my technical skills and abilities - this will ensure that I have the best opportunities to create high quality work, with no restrictions on the software I can use to achieve this. I hope to continue working with RealFlow, and hope to have some visual examples soon. I also plan on developing skills in using Houdini, another 3D package, with a more technical focus.

Sunday, 5 December 2010

Finally, Scripting...

After avoiding MEL scripting for far too long, I started working through "Introduction to MEL" by Digital Tutors, in the hopes that I could finally make sense of the more complicated material presented in In Silico.

Full information on the lessons can be found here, as they covered a wide variety of topics. This included a look at MEL syntax, creating primitive objects, editing attributes, using WHILE/FOR/IF commands and then combining all of the taught content into a single project, which resulted in the ability to create a textured flower with randomised attributes, simply by clicking a single button. The image below shows some of these random flowers;

Flower Script Render

After completing these lessons, I feel much more confident in using simple MEL expressions, and I intend on continuing this development, which will hopefully allow me to create a more complicated script which will import large amounts of numerical data and automate the creation of objects and animation.

For those who may be interested (and I realise it may not be many!), here is the complete script used to create the flowers...

Wednesday, 24 November 2010

Research Skills & Methods : Research Poster

The third and final assignment of this module required me to create a poster which would communicate my research using visual methods. The poster built on work completed in the first two tasks (here and here), and is shown below;

Visualisation Poster

Sunday, 21 November 2010

Inspiration 1 : Cell Visualisation

In collaboration with the University mathematics division, I am working on cell visualisation - starting off with mathematically generated data, I am importing this into Maya and defining the aesthetics of the scene, making the data more accessible and visually exciting.

Alongside my own work, I have found some examples of cell visualisation that I am particularly interested in. The first of these is a clip called "The Inner Life of the Cell", created in 2006 for Harvard biology students by a company called BioVisions. Although this animated sequence looks dated by today's standards, the content (and its importance) is still just as relevant today. A tremendous amount of effort was put into this project, and those working on it were constantly aware of the relationship between the quality of the visuals and the accuracy of the data. One criticism I would make is that the scenes are often very 'busy', featuring lots of moving items and lots of different colours. Although this means there is more to look at, it can also make the shots somewhat confusing, as there is no clear focus. "The Inner Life of the Cell" can be seen below;



BioVisions have also continued working on molecular animations, with their latest video titled "Powering the Cell: Mitochondria" (a clip can be viewed here). This video is a significant update to the earlier one, primarily thanks to improvements in technology over the last four years. Although the concept is the same, the video has been output in high definition, and this is definitely a noticeable improvement. The visual style has also 'quietened down' somewhat, and is much more pleasing to the eye, as can be seen in the image below;


Moving away from this type of visualisation, I am particularly fond of "Nature by Numbers", created by Cristobal Vila. This is an expertly created piece of work, and focuses on how nature is driven, at its core, by mathematics. The content is of excellent quality, and there are segments where it appears that some sort of dynamics system has been used to drive the animation - something I am currently developing skills in. The overall look of the video has a very polished feel, something I would certainly hope to achieve by the end of my MSc programme! The video can be seen, in all its high-definition glory, below;



After looking at other examples of work out there, it is clear that a great deal of importance is placed on both the quality/accuracy of the data and the appeal of the visuals. Finding this balance, however, can prove difficult, and it is important for an artist to find an individual style which suits them. As mentioned in a previous post, this reinforces the importance of experimentation - practice makes perfect.