21.5.11

Blog Moved Now That I've Decided on a Project Title

Like I said in my first post for this Master of Architecture studio subject, I was only using this blog URL as a placeholder until I decided on a title for my final project. Now that I've picked it - evolvedDesignInformant - I've copied all the relevant posts to a new blog at evolveddesigninformant.blogspot.com and will only be posting Master of Architecture material to that blog in the future.

19.5.11

Experimenting with Rhinoceros Grasshopper Video 2 Uploaded

I decided to squeeze in a little more "fiddling time" in transit to and from uni yesterday. One of the first things I said I wanted to know how to make in Grasshopper was a smooth, undulating surface. Thus, the aim of this model was to make an undulating surface, with a few extra things thrown into the mix to expand the number of things I'd need to figure out to get a finished model. If I recall correctly, this model took about 90 minutes.





There weren't many hiccups through this video, since I'm getting more used to knowing where to look for the kinds of functions I want. The main sticking point this time was when I tried to give the undulating 2D surface thickness. My initial thought was to duplicate the surface upwards with a "move", and then try the "cap holes" function. However, that didn't work, since "cap holes" only fills holes bounded by planar curves; it doesn't intelligently join vertices with edges and then cap the resulting closed polygons with surfaces, like I'd hoped. I ended up realising an "extrude" was all I needed.

After that, the only confusing thing was figuring out how the solid functions work. At first, I couldn't figure out why two cylinders wouldn't work in a solid union when I was trying to intersect them, but then I noticed that Grasshopper's cylinder primitive isn't a closed solid, so trying to perform a solid union on them while their ends were still open would fail. After using cap holes (which works here, since the cylinders' open ends are planar), that problem was fixed.

Another thing: I just took a look at PK's blog and saw he'd posted a video using Grasshopper's "Galapagos" capsule. I remember Russell mentioning it in a previous studio class, but I'd forgotten to check it out. Looking at some videos and fiddling briefly with it in Grasshopper to evolve solutions to various simple equations, it seems to be exactly the sort of thing I'd be interested in using to evolve different aspects of my final design!

I'll definitely be using it to see what I can do with it.

17.5.11

Experimenting with Rhinoceros Grasshopper Video Uploaded

The video finished rendering and uploading, so here it is!


Experimenting with Rhinoceros Grasshopper

I'm currently rendering a video made using Chronolapse while I toyed around with Grasshopper. Shortly into the video, I decided to give myself something to try to achieve by the end of it - a twisted rectangular prism shape with rectangles along it representing floors. Looking back over it and playing a little more with Grasshopper, I noticed there are a few tools I could possibly have used to make the final form much faster.

However, the main point of this stage of my computer modelling experiment is to get myself familiar with Grasshopper, rather than efficiently producing an end result in one go. As with any experiment, the criterion of success is what is learnt.

The main sticking point was combining two lists of transforms - one list of rotations, one list of translations - so that I'd get one list of transforms, each representing a rotation and a translation. I found out how to combine an entire list of transforms into a single transform, but that was the closest I could get. So, in a bit of desperation, and having never before programmed in VB.NET, I fiddled wildly in a script until I got my head around how Grasshopper lets you handle its parameters and "output" - which is really just any variables passed by reference rather than by value.
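
For the curious, the core of what I was trying to do boils down to pairwise matrix multiplication. Here's a minimal JavaScript sketch of the concept - not my actual VB.NET component, and the function names and flat 16-number matrix representation are just for illustration:

// Multiply two 4x4 transform matrices (flat arrays of 16 numbers, row-major).
function multiplyTransforms(a, b) {
  var result = new Array(16);
  for (var row = 0; row < 4; row++) {
    for (var col = 0; col < 4; col++) {
      var sum = 0;
      for (var k = 0; k < 4; k++) {
        sum += a[row * 4 + k] * b[k * 4 + col];
      }
      result[row * 4 + col] = sum;
    }
  }
  return result;
}

// Combine two equal-length lists of transforms element by element, so that
// combined[i] applies rotations[i] first, then translations[i].
function combineTransformLists(rotations, translations) {
  var combined = [];
  for (var i = 0; i < rotations.length; i++) {
    combined.push(multiplyTransforms(translations[i], rotations[i]));
  }
  return combined;
}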

The next sticking point was that I couldn't find an object reference that would drastically speed up the rate at which I could throw together the script. I noticed the IDE had an autocomplete list that would pop up as you wrote your script, but it wasn't really comprehensive enough for me to know what each variable or method in the list was for. After Googling for a while and finding nothing, I resorted to looking in the plugin folder for Grasshopper, and found RhinoCommon.xml under Plug-ins/Grasshopper/rh_common/RhinoCommon.xml (relative to the Rhino install folder). You'll probably see a few flashes of it opened in Chrome in the video once I upload it. It was somewhat helpful because I could easily Ctrl+F and enter the name of the type I wanted, and hit Ctrl+G until I found it. Admittedly, that took quite a few Ctrl+Gs at times, but on the way, I was finding a few useful tidbits here and there.

After I got past that point - and a few sundry errors to do with invalid typecasting of Transform to Vector3 - it was fairly smooth sailing.

I intend for my next experiment in Grasshopper to involve generating a form that's a little more complex, but I'm not quite sure what would be appropriate yet.

I'll put the Chronolapse video up in my next post when it finishes rendering.

16.5.11

Experimenting with Evolution Programming 2

After today's effort, I've made some significant steps with the evolution program you see milling away in the background whenever you move your mouse in Firefox or Chrome. For a simple description of how to interact with it, wave the mouse over empty space in the background to spawn new "creatures" (squares). Over time, the creatures consume energy and so start to shrink (like a dying flower). To feed a creature to keep it alive, wave the mouse off it and back on it again. If a creature stays alive until maturity (currently 0.3 seconds), it gives birth to anywhere between 1 and 8 children into the adjacent blocks around it. Birthing happens in a reliable manner (eg, most successful births make a child in the position directly below the parent), which you'll probably notice as the creatures appearing to fall downwards like Tetris blocks. Creatures come in two colours, determined by how implicitly nervous they are - blue creatures shiver more the more nervous they are, whereas orange creatures aren't implicitly nervous enough to shiver.

Currently, I have a system that emulates:
  • Creatures (each square is a potential creature, until one is either born there, or spawned there)
  • Genes (three of them: metabolic rate, aggressiveness, and nervousness),
  • Inheritance of genes,
  • Replication (because of the above two),
  • Energy transfer (in the form of children taking energy from parents at birth, and children absorbing the energy of the elderly if they're occupying the same space when born, causing the elderly to die and be replaced),
  • Thermodynamic behaviour for the most part (eg, ignoring energy given by the mouse through feeding and spawning creatures, energy isn't created out of nowhere, and is only "lost" as useless energy).
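
To make the mechanics above concrete, here's a stripped-down JavaScript sketch of the kind of per-creature update the system runs each frame. This is not my actual code - the constants are illustrative, and grid and inheritGenes stand in for helpers I haven't shown:

var MATURITY_AGE = 0.3; // seconds a creature must survive before giving birth
var MAX_CHILDREN = 8;

// One update tick for a single creature; dt is seconds since the last frame.
function updateCreature(creature, grid, dt) {
  // Metabolism: creatures constantly burn energy, which is why they shrink.
  creature.energy -= creature.genes.metabolicRate * dt;
  creature.age += dt;

  if (creature.energy <= 0) {
    grid.remove(creature); // starved
    return;
  }

  // Maturity: survivors give birth into adjacent blocks, splitting their
  // energy among the children so no energy is created out of nowhere.
  if (creature.age >= MATURITY_AGE) {
    var spots = grid.emptyNeighboursOf(creature);
    var childCount = Math.min(spots.length, MAX_CHILDREN);
    for (var i = 0; i < childCount; i++) {
      grid.place({
        energy: creature.energy / (childCount + 1), // energy transfer at birth
        age: 0,
        genes: inheritGenes(creature.genes)         // inheritance
      }, spots[i]);
    }
    creature.energy /= childCount + 1; // the parent keeps an equal share
  }
}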

Random variation could be added with some careful effort, but I suspect its results wouldn't really manifest in any way noticeably different from what you currently see.

I'm still working on getting a reliable "feeding" method added to the mix, beyond children eating the elderly. What I was trying for a while - and you can still see commented out in my JS draw() function - was a feeding method that let creatures eat adjacent creatures. This was working well, but it dominated the living patterns of the creatures. With only a few creatures near one another, a checkerboard pattern would almost immediately form, with babies being born into the "holes" and getting immediately eaten by an adjacent adult with more energy.
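
Roughly, the rule I've commented out looks like this (a simplified JavaScript sketch, not the exact code; grid.occupiedNeighboursOf is a stand-in helper). You can see why it produces a checkerboard: a newborn always has less energy than the adult next to it, so it gets eaten immediately.

// Feeding rule (simplified): a creature eats an adjacent creature with less
// energy than itself, absorbing the victim's energy rather than creating any.
function tryToFeed(creature, grid) {
  var neighbours = grid.occupiedNeighboursOf(creature);
  for (var i = 0; i < neighbours.length; i++) {
    var other = neighbours[i];
    if (creature.energy > other.energy) {
      creature.energy += other.energy;
      grid.remove(other);
      return; // one meal per tick
    }
  }
}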

In an attempt to get feeding working in a more complex way, I added a randomised variable that let lower-energy creatures occasionally eat higher-energy ones, but it only either slightly dampened the checkerboard effect, or resulted in flickering graphical madness.

Other than that, I encountered a few interesting phenomena. The funniest was creating what is best described as a race of immortal zombies, because creatures, when dead, were still able to feed off the living creatures near them. Another thing that happened was an explosion in the creature population when I accidentally made it possible for a parent to give birth to more children than it had the energy for.

I should mention that this idea came from a Java applet that I found. It didn't model evolution, but it did model thermodynamics. Here it is, linked from its author's website, Repeat While True.



About CellShades
CellShades is derived from the concept of cellular automata, showing how complex behaviour of organic appearance can emerge from a simple set of rules. Using the mouse, the user spills liquid onto a virtual petri-dish. If the amount of liquid on any position on the grid remains above a certain level for a prolonged time, cells will emerge there. These cells will move and consume liquid to harvest energy according to a set of parameters which you may change and toy around with.

The intensity of the liquid on the grid is visualized by a color gradient from orange to purple.

Interestingly, the Applet was made in Processing, which I've seen is installed on the FBE's computers. I don't have immediate plans to try it out since there's so much software I'm already planning to get my head around as part of these experiments, but it's definitely piqued my interest.

15.5.11

Experimenting with Evolution Programming

As part of looking into evolution in programming, I wanted to modify this blog to be a little better-themed to my design matrix. What I figured could prove useful is a system that adapts its form to the presence of other objects. To integrate that idea with this blog, I wanted to make a background that adapts its form to the location of the mouse. Depending on how useful it proves, this could be reapplied later in 3D for form generation in my final project.

Currently, I've got a background that reacts to the mouse and uses the HTML5 canvas element. This was a test I threw together to see if it'd have any problems working with Blogger or Firefox / Chrome. There are two canvases being used at the moment, but I'll roll back a version tomorrow so I can instead have one large canvas that will let the evolving block units interact more easily.
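
For anyone wanting to try something similar, the basic pattern is tiny. Here's a minimal sketch of it - a full-window canvas sitting behind the page content, with a mousemove listener spawning squares (the real version differs in the details):

// Minimal full-window canvas background that reacts to the mouse.
var canvas = document.createElement('canvas');
canvas.style.position = 'fixed'; // sit behind the blog content
canvas.style.top = '0';
canvas.style.left = '0';
canvas.style.zIndex = '-1';
document.body.appendChild(canvas);

var ctx = canvas.getContext('2d');

function resize() {
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;
}
window.addEventListener('resize', resize);
resize();

// Spawn a square wherever the mouse moves.
document.addEventListener('mousemove', function (e) {
  ctx.fillStyle = 'orange';
  ctx.fillRect(e.clientX - 5, e.clientY - 5, 10, 10);
});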

9.5.11

Attempt to Upload my Design Matrix Hosted in an Interactive 3D Environment

An idea I had for presenting my design matrix is to place it in a surreal 3D environment that reacts to the user's presence, and allows them to discover the design matrix in a format that's a little more interesting than static paper.

The method of representation could tolerate being more complex, giving the user more implicit incentive to search for the parts of the matrix. In a later version, I think it would make sense to let the user keep track of the parts of the matrix they've found, since they act like torn scraps of a complete document floating through the ethereal landscape - almost as though they're the recorded thoughts of a kooky, solitary scientist, torn up and strewn across the landscape. A little modification - appropriate sounds, slightly adjusted aesthetics, and extending the spaces defined by the ghostly building - would serve this interactive 3D environment well, in my opinion. I might also modify the cubes constituting the building so that some of them fly into position automatically once the user is close enough, rather than holding their distance depending on how close the player is to their final position.

This post also serves as a testing ground for getting the Unity Webplayer (hopefully!) running on my blog. Failing that, I'll upload it to Kongregate.com under the guise of being a game, and point there instead.


Controls
Use W, A, S, and D or the arrow keys to move. Look around by moving the mouse. Jump by hitting the spacebar. If the framerate is choppy, right click on the game and click "fullscreen". Hit Esc to exit fullscreen mode.

Created with Unity.


Awesome, it works!

17.4.11

Some references for computing applied to architecture

A site that might prove useful for mathematical theory and programming theory is arxiv.org, which provides "Open access to 670,431 e-prints in Physics, Mathematics, Computer Science, Quantitative Biology, Quantitative Finance and Statistics", as described on its main page.

The site also mentions it has an RSS feed (amongst other things) for robots that automatically parse the archives, so that could prove useful after filtering the information through Yahoo Pipes.

Slashdot.org might also prove interesting to look at from time to time for technology-related news and popular, cutting-edge information.

Computing applied in architecture

The reason for my lack of a post so far about the interaction of the disciplinary views of computing and architecture is that I have had trouble narrowing down exactly what might prove to be a useful (or at least promising) interaction to pursue.

After some thought, I've worked out some areas of research for this part of my research matrix, selected for personal interest and their future utility:

  • Researching the Theoretical Aspects of Computing: in particular, the mathematical concepts that form its base, or that helped develop computing into what it is today. The ideas and concepts in this discipline could be abstracted and applied to architecture - or perhaps applied directly, without abstraction, since maths itself is already abstract.
  • 3D Visualisation and Interaction: I would like to devote some time in this course to working on 3D visualisations (renders, point tracking videos, and augmented reality) and interaction in a 3D environment (Unity3D, Crysis; to simulate a building's structurally feasible, intelligent response to occupants)
  • Research into and Use of Programming: Given my experience in the area, it would be a shame to not use it when it comes to the interaction of the disciplinary views of computing and architecture. In the past, concepts from programming have proven useful for me in generating good architecture, so I'm confident they will help once more. Furthermore, concepts from programming relate this point directly to the first point of researching the theoretical aspects of computing. If I broaden my knowledge in that area, I will have even more concepts to aid my design process.

14.4.11

Evolution applied in architecture

As one of the three disciplines I have chosen, I will now show some examples and attempt to eloquently express some thoughts about how I think the disciplinary views of architecture and evolution can interact.

To start with, my contention is that architectural design would benefit from a practical application of evolution. Plenty of building designs have been generated with a very conscious incorporation of automated evolution, but I have yet to find genuinely useful examples rather than something that's just coincidentally resulted in a pretty form. What is generally lacking in the explanations of such "evolved" buildings is what the criteria for survival were, what the varied genes were, and how many generations deep the evolution was carried out.

For a long time, I've been interested in the application of evolution in fields other than biology, used with a particular view to solve various complex problems in incredibly simple - though almost unintelligible - ways. For instance, one of the non-fiction chapters of the first "Science of the Discworld" novel (by Terry Pratchett, Ian Stewart, and Jack Cohen) discusses an exploration of evolution through a genetic algorithm approach to making an electronic circuit able to distinguish between two tones. The circuit's logic gates were the genes that were randomly varied and inherited from generation to generation, and the survivors were selected on their capacity to give a different output - 1 or 0 - to each of the two different tones, not caring which tone was given which value as long as there was a difference.

Early on, the circuit's capacity to tell the difference was non-existent or negligible. However, after sufficiently many generations, reliable differences began to arise. After only 4000 generations, the circuit would get the tones wrong barely 1 in 1000 times, and at 8000 generations, no errors in tone distinction were encountered. The resulting circuit, however, was very complicated and hard to understand - for example, a portion of the circuit was found not to be connected to anything else, yet removing it always caused the circuit to stop working.

In my opinion, though, the most important part of the experiment was the efficiency and elegance of the solutions that result from appropriately constrained and defined evolution. The evolved circuit was far smaller (ie, had far fewer logic gates) than other circuits previously made to tell the difference between two tones.
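
The general loop behind experiments like that is surprisingly small. Here's a generic JavaScript sketch of a genetic algorithm - nothing to do with the circuit itself, just the bare select-breed-mutate cycle, with a toy fitness function (count the 1-bits in a bit string):

var GENOME_LENGTH = 32;
var POPULATION_SIZE = 100;
var MUTATION_RATE = 0.01;

function randomGenome() {
  var g = [];
  for (var i = 0; i < GENOME_LENGTH; i++) g.push(Math.random() < 0.5 ? 0 : 1);
  return g;
}

// Fitness: in the circuit experiment this was "how reliably do the two tones
// produce different outputs"; here it's simply the number of 1-bits.
function fitness(genome) {
  var sum = 0;
  for (var i = 0; i < genome.length; i++) sum += genome[i];
  return sum;
}

function breed(parentA, parentB) {
  var child = [];
  for (var i = 0; i < GENOME_LENGTH; i++) {
    var gene = Math.random() < 0.5 ? parentA[i] : parentB[i]; // inheritance
    if (Math.random() < MUTATION_RATE) gene = 1 - gene;       // random variation
    child.push(gene);
  }
  return child;
}

var population = [];
for (var i = 0; i < POPULATION_SIZE; i++) population.push(randomGenome());

for (var generation = 0; generation < 1000; generation++) {
  // Selection: the fitter half survives...
  population.sort(function (a, b) { return fitness(b) - fitness(a); });
  var survivors = population.slice(0, POPULATION_SIZE / 2);
  // ...and the rest are replaced by children of random survivors.
  population = survivors.slice();
  while (population.length < POPULATION_SIZE) {
    var a = survivors[Math.floor(Math.random() * survivors.length)];
    var b = survivors[Math.floor(Math.random() * survivors.length)];
    population.push(breed(a, b));
  }
}

population.sort(function (a, b) { return fitness(b) - fitness(a); });
console.log('best fitness:', fitness(population[0]));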

After doing some Googling, I've found a few interesting sources that I'll look into over the holidays. They are:

  • An ecomorphic theatre as a case study for embodied design (paper located here): mentions some interesting historical precedent to generative design of architecture.
  • An Evolutionary Architecture (version of book released online located here): Covers some interesting concepts to do with the kinds of forms that can be generated, and some ways of using the internet to expose a 3D model to genetic variation.
  • Autotechtonica.org (link located here): Is currently under construction, but seems to offer a few simple existing neologisms and their definitions, which might be handy to glance over if you're trying to learn about the topic like I am.
  • Morphogenesis of Spatial Configurations (link located here): Talks about evolution when selecting forms based on building performance criteria such as structure and accessibility.

From the second source, I found an example of what I was talking about at the start of my post when I said "I have yet to find genuinely useful examples rather than something that's just coincidentally resulted in a pretty form".


An animated example of co-operative evolution by a network of computers. Pretty, but lacks useful information to explain what it is. (from http://www.aaschool.ac.uk/publications/ea/intro.html)


What I would like to create using the process of evolution for this master's studio is something of utility. I want to produce an intelligible analysis that can be clearly and specifically used to inform an architectural design. Ideally, the evolution will be applied to an area that doesn't already have its own simple solutions. I think it would be more exciting if it were applied to, for example, the problem of space organisation and linkage, which I have observed is often a point of unfounded contention between designers - eg, arguments about one space not being suited to be connected to another, and so forth.

Furthermore, it seems that evolution of useful aspects of a building would best be used as a design informant rather than a means of producing the end design in itself, since there are philosophical aspects of design that haven't yet been accurately encapsulated in formal systems (such as those used to found computer science and information technology). To clarify: I trust that sufficiently many generations of rigorously managed evolution would produce an effectively failsafe product, but if and only if the appropriate conditions of selection and the appropriate genes were known and formally encoded in the process of evolution. However, culture and many philosophies have yet to be formally encoded in such a comprehensive manner. Thus, I would use evolution to inform my design process for those conditions of selection and genes which are known, but I would want to refine the design personally to ensure it suits the formally undefined constituents of philosophy and culture.

Something that seems a little more promising than the above animation is the paper on Morphogenesis of Spatial Configurations. Referring to the below image of a 3D model generated from Lindenmayer Systems (aka L-Systems) and genetic programming, it certainly seems to produce what looks like a far more sensible form, though I am unsure of what the original L-System's configuration was.



I'm really liking how many freely available online resources are turning up for this topic. There'll be a lot of reading to do, but I suspect I'll learn a lot in the process, which will hopefully save time when it comes to producing my own evolved design informant.

Building my learning machine, and the building as a learning machine

This idea spawned somewhat spontaneously while I was thinking back over a quote that's stuck with me for the better part of three years. It was delivered with such colloquial, intelligent profundity that I couldn't help but wholly absorb it, as well as the rest of the information delivered with the speech it came from (shown below).


"If you look at the interactions of a human brain, as we heard yesterday from a number of presentations, intelligence is wonderfully interactive. The brain isn't divided into compartments. In fact, creativity -- which I define as the process of having original ideas that have value -- more often than not comes about through the interaction of different disciplinary ways of seeing things." (Ken Robinson)



There are two aspects of this quote that stand out to me with respect to this final year studio and architecture. Firstly, that an original idea that has value "more often than not comes about through the interaction of different disciplinary ways of seeing things", and that this would be a useful way to build my learning machine (ie, the processes I will follow for research and development) for this studio. Secondly, that "intelligence is wonderfully interactive", and that a brain "isn't divided into compartments", and that these points can be used to conceptualise a building as a learning machine.



The Building of My Learning Machine

The first aspect is how I want to work through this master's year studio - the interaction of different disciplinary ways of seeing things to generate original ideas that have value. I will select three disciplinary ways of seeing things which seem to have promising potential when interacting with the disciplinary way of seeing things that is architectural design. Currently, the three disciplinary ways of seeing things I have chosen are:

  1. Biology,
  2. Evolution, and
  3. Computing.


The Building as a Learning Machine

This train of thought fits best under the discipline of biology, but still has some strong relationships with evolution and computing.

The idea of an intelligent, interactive building is an exciting one, and seems to be cropping up more and more these days (specific examples to be searched up later and added retrospectively so I don't break my train of thought). Most of the time, no one part of the building's operation and usage is wholly distinct and unaffected by the other parts. Thus, according to the definition Ken Robinson uses, it could be said that a building is like a brain. Historically, humans have been creating braindead buildings. Sometimes beautiful buildings, but braindead nonetheless. They operate, they breathe, and everything like that - but they're vegetables. They cannot respond to us. Or if they do, it's only in obvious terms, through wholly controlled interactions. Recently, however, technologically-minded interventions have introduced the capacity for reactions in the built form. It's as though some buildings and building elements are now recovering from a coma, and are starting to be able to autonomously respond in complex ways to interaction.

Though Ken's speech was explicitly about human learning and education, it is my contention that interactive, responsive architecture - occasionally stylistically classified as "high technology" - would benefit from such a conceptualisation. The building as a learning machine is an interesting, exciting idea. Due specifically to technological advances, built forms are capable of being dynamic, and adapting to their purpose. Through analysis of occupational usage - which in the case of this studio would likely be emulated through data obtained via social website analysis - buildings can be programmatically designed to modify themselves to suit occupational use.

The conceptualisation of the building as a learning machine can extend beyond this specific example. More generally, the building as a learning machine is capable of responding to its environment and, moreover, learning the trends. Conceptualised as a finite state machine, the learning machine's state would change in response to the input from its environment, with subsequent states depending on the previous states as a form of trend-learning. In a way, having the building as a learning machine relates somewhat to forming a practical application of phenomenalism.
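
As a minimal illustration of what I mean (with made-up states and inputs - this isn't modelling any real building system yet), a JavaScript sketch might look like:

// Tiny finite state machine for a "learning" building element.
// The next state depends on both the current state and the input.
var transitions = {
  'louvres-open':   { 'sun-high': 'louvres-shaded', 'sun-low': 'louvres-open' },
  'louvres-shaded': { 'sun-high': 'louvres-shaded', 'sun-low': 'louvres-open' }
};

var state = 'louvres-open';
var history = []; // remembering past states is the hook for trend-learning

function respond(input) {
  history.push(state);
  state = transitions[state][input] || state;
}

respond('sun-high'); // state is now 'louvres-shaded'
respond('sun-low');  // state is now 'louvres-open'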

There are many directions this idea could go, so to keep the project feasible, I will need to restrict myself to researching and developing one or a few specific examples.

You can find other speeches and presentations from TEDtalks here, or through the TEDtalks YouTube channel.

Note about blog usage

This won't be my final year blog. I will create my final blog once I have finalised - with certainty - the title of my final year project. Once I've done that, I'll create a blog with the same title - or similar if the URL's already taken - and re-post everything relevant from this blog into that.

Part of the reason I've used this blog for now is that it helps to express a continuum of ideas, linking on from the last subject I recently used this blog for - Augmented Reality, during the late summer term this year. I have had a long-standing affinity for technology, and its multifarious utilities as a tool in design and in making human life more productive, comfortable, and / or stimulating.

Part of what makes technology so interesting is just that: it's stimulating. It's that stimulation which engages the mind - or, rather, the senses - and from there, the mind and the technology (if it's good technology) are putty in each other's hands. More recently, it can even be argued that each learns from the other, through the rise of "machine learning algorithms" as a branch of artificial intelligence.

I think it's a pretty exciting thing to look into. Objects which are self-modifying and responsive to their environment seem to be attracting interest in areas other than computing; other forms of technology are catching up with this idea. A simple example is a responsive facade like the CH2 Building in Melbourne, which opens and adjusts the angles of its louvres in response to the climate inside and outside the building. The integration of responsive technology into our daily lives is becoming more comprehensive, and is - as just mentioned - extending into architecture.


The CH2 building again, at a different time of day. (image from http://inhabitat.com/ch2-australias-greenest-building/)

This is just one aspect of architectural design that I think is worthwhile investigating during this research studio - I'll be elaborating on other ideas I think are promising in later posts.

Also, I should mention that I'll be splitting the main points of my ideas into different posts to try and establish a thought continuum with useful landmarks, rather than a huge wall of text. On to the next post!

26.1.11

Problems Uploading Assignment 3 to Emustore, So I Used Mediafire (link here)

Click here to download the file for my assignment 3 submission from Mediafire, since the Emustore folder appears to be having the same problems as before.

Assignment 3 Final Submission

First Note: In the end, I had to go with my "failsafe" project idea. However, when I get back from Japan, I fully intend to continue working with AR to get the other project working. If I were able to spend time up to and including the 7th of February working on this assignment like everyone else, I'm certain I'd be able to get my original "wouldn't it be cool if..." project working.

Second Note: For the submission to Emustore, I provided two scenes for BuildAR. These scenes differ mostly in the number of markers they have, since in poor lighting, BuildAR was occasionally recognising one marker as another. If this problem occurs during the exhibition, try a scene with fewer markers in it.



THE FAILSAFE PROJECT IDEA DESCRIPTION:
A problem with purchasing furniture and ornaments is that the potential buyer doesn't get the chance to really know what they'll look like in their home next to everything else. Using Augmented Reality, this problem can be solved. The potential buyer needs only a printer to make a marker, and an Augmented Reality program making use of a camera on hand. In this case, BuildAR will be used to let the user place ornaments as they please, locating them with markers in the real world.



IMAGES REQUIREMENT:
Screenshots of the scene I set up to test the idea in my bedroom:






Some more markers:








VIDEO WITH SOUND REQUIREMENT:




REFERENCES REQUIREMENT:
Located in this post.

Video for Interim Submission

Here's a quick video captured with Fraps for the interim submission video requirement:

References for Assignment 3

Here are the references for the models used for the Augmented Reality in assignment 3, some of which I modified the textures of or split into different files (but looking at the assignment's scene, it should be fairly obvious which file is from which reference because everything is fairly distinctive):

http://www.mr-cad.com/Model4-p-850.html
http://www.mr-cad.com/mrcent03-p-872.html
http://www.mr-cad.com/Baby-Crib-Furniture-011-p-1160.html
http://www.mr-cad.com/Bar-Chair-003-p-1145.html
http://www.mr-cad.com/Sofa-00021-p-1169.html
http://samus.turbosquid.com/3d-Models/3ds/max/xsi/c4d/obj




For the posters, I simply grabbed some pictures from Google images and applied them as textures to very flat rectangular prisms in 3DSMax. The references for the pictures for the posters are:

http://www.sintel.org/wp-content/uploads/2010/06/08.2l_comp_000465.jpg
http://www.licuadorastudio.com/blog/wp-content/uploads/2010/07/sintel_5.3b_0129_compara.jpg
http://1.bp.blogspot.com/_XwjmBjJAtPI/TKalkHol_EI/AAAAAAAAAtU/_OeJCw0brzQ/s1600/Captura-Sintel-1.png
http://www.chinahighlights.com/image/news/water-cube.jpg




Note that some of the pictures are from the free-to-download short movie called Sintel, produced by the Durian team in Blender. You can find download links of it here.

Images for Assignment 3

Here are some progress images for the interim submission of my assignment 3 project. They show the markers I'm creating to let people place lifesize objects in a real-life context with augmented reality.

Printable marker for the first barstool:


Printable marker for the second barstool:


Printable marker for the brown couch:


Printable marker for a Sintel poster:


Printable marker for another Sintel poster:


Printable marker for a small table:


Printable marker for a smaller table of the same design as the one above:

24.1.11

Failsafe Project Completed, Just Needs Markers Printed and 3D Objects Scaled

The title speaks for itself. I'll definitely be grabbing some sleep now, since I've got to get up and drive in... three and a half hours. Wow.

Anyway, here's an example of the kind of marker the failsafe project will use:





A rendered image will replace the "render goes here" text, and the markers will all (hopefully!) be distinct enough to tell apart from one another. I think I'll recommend they be printed so the markers are 16 × 16 cm, since there's a good chance they'll be pretty far from the camera.

To make the project a bit more of a comprehensive example of the capabilities of BuildAR with respect to the specific matter of a potential buyer wanting to see how something would look in reality before buying it, I decided to do the following things:
- make some models come in slightly different styles, so the user can compare them in realtime (two different bar stools)
- make some models come in the same form, but with different materials (three kinds of couches, two with flat suede-like colours, one with a brown leather texture)
- make several posters available, since people can easily stick them up on walls and move them around
- to get the attention of people coming by for the presentation, I added a 1:1 model of Samus from the Metroid series. I'd imagine there'll be nerdy people taking photos of the monitor while posing next to it, or something. I guess that'll happen with quite a few of the displays, though

All models were found online, but needed to be handled and modified in 3DSMax so they would show up properly in BuildAR. I'll be providing a list of references for them with the assignment submission.

I'll be spending however much free time I have next completing an assignment for another subject and getting the last bits sorted out for my overseas trip, and then, if there's time, I'll take a crack at quickly whipping up that additive modelling Flash application.

Assignment 3 Proposal

I've been so focused the last few days on trying to get FLARManager properly importing 3D files that I forgot to post my proposal. Since I'm currently tossing up between two proposals - one easier "failsafe" one, one harder "wouldn't it be cool if..." one - I'll post them both.



The "wouldn't it be cool if..." proposal:

A problem with many Augmented Reality viewers is that they're very static, loading some given model or animation and either showing an unmoving model, or playing just one animation. My aim is to make something more capable of change over time: an Augmented Reality application that allows the user to interact with the model they're viewing in a dynamic way, modifying it and creating something new.



The "failsafe" proposal:

A problem with purchasing furniture and ornaments is that the potential buyer doesn't get the chance to really know what they'll look like in their home next to everything else. Using Augmented Reality, this problem can be solved. The potential buyer needs only a printer to make a marker, and an Augmented Reality program making use of a camera on hand. In this case, BuildAR will be used to let the user place ornaments as they please, locating them with markers in the real world.



The failsafe proposal is pretty much finished the moment I get enough models of ornaments, since it's simply a scene that contains markers associated with those models. Then all I'd need to do on the presentation day (or rather, the 20th when I get back from overseas) is print the markers out at the right scale, and place a sufficiently high-res camera such that it looks at a patch of floor and wall that lets the users move markers around. My idea so far involves making a few posters, and letting the user arrange things like couches, bar stools, and tables at 1:1 scale in the scene. To help the user more easily understand what they're doing, the printed markers should have an image of the 3D object next to the marker to indicate what they're moving.



As for the other proposal, I think I'll try working with the idea I had for making a very simple additive modelling application, since I've hit a brick wall with reliably importing well-formed geometry and textures in FLARManager.

For very basic functionality, I'd simply need to have two markers. One marker "carries" a cube to let you position it as you like. The other, when covered, places the cube, and when uncovered, creates a new cube for the first marker to carry. (Also, when the carrier marker isn't visible, no checks will be performed to place a cube.)

Furthermore, a third marker could be used so that, when covered, all cubes are removed from the scene to start a fresh model.

To help the user understand how to use this system, the markers would be labelled with simple names: "Cube Carrier", "Place Cube", and "Delete All Cubes". To help even more, some text would be needed, saying "To choose where to place a cube, move the Cube Carrier marker around where the camera can see its pattern. To place a cube where the Cube Carrier is, hide the Place Cube marker from the camera. To remove all cubes and start again, hide the Delete All Cubes marker from the camera."
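
In pseudocode terms - sketched here as JavaScript rather than ActionScript, and with entirely made-up helper names (none of this is FLARManager's actual API) - the per-frame logic would be something like:

var placeCubeWasVisible = true;

function onFrame() {
  var carrierVisible = markerVisible('cube-carrier'); // hypothetical helpers
  var placeVisible = markerVisible('place-cube');

  if (carrierVisible) {
    // The carried cube follows the Cube Carrier marker around.
    moveCarriedCubeTo(markerTransform('cube-carrier'));

    // Covering the Place Cube marker drops the carried cube in place;
    // uncovering it hands the carrier a fresh cube.
    if (placeCubeWasVisible && !placeVisible) {
      dropCarriedCube();
    } else if (!placeCubeWasVisible && placeVisible) {
      createNewCarriedCube();
    }
  }
  // (When the carrier isn't visible, no placement checks are performed.)

  // Covering the Delete All Cubes marker wipes the model.
  if (!markerVisible('delete-all-cubes')) {
    clearAllCubes();
  }

  placeCubeWasVisible = placeVisible;
}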



I'm about 90% done with the "failsafe" project. I'll get cracking with the "wouldn't it be cool if..." project, and make a Chronolapse recording of myself doing it. Hopefully I'll avoid any more model-related issues by strictly relying on the in-house cube-making functions of FLARManager!

23.1.11

So Many Time-Consuming Issues with FLARManager, I'm Stopping Work on It

Since I've spent so long trying to get FLARManager and its various components working (including, now, its other file parsers, such as Max3DS), and everything I've managed to get appearing as a model has had some pretty major issues (even the example Scout model), I'm going to stop work for now.

My plan of action is now to come up with something achievable with BuildAR, which should be a breeze compared to what I've been struggling with. I'll use it as a backup assignment that I can submit if I don't get this FLARManager thing working in time before I go overseas on the 28th. The backup idea is a solution to what's frequently a bothersome problem: knowing what decor will look like in your home before you purchase it. Given a set of furnishings as 3D models, my idea is to let the user select which marker they want to print out, so they can place it where they want and see how the piece would look.

One issue with this is that lighting would obviously vary compared to BuildAR's rather constant lighting. It'd be nice to see an app developed similar to BuildAR where you can control the lighting conditions, or even place point lights with markers.

Anyway, after I've gotten that BuildAR scene thrown together, I'll get back to this project and try to get it working once more... but the rendering issues are quite persistent. Beyond files simply not loading, backface culling for some models doesn't seem to work properly, as though the object is being rendered from one direction with the camera 90 degrees off to the side. Also, when two objects intersect in the scene, some intersection does occur, but it's frequently messed up by tris rendering on top of each other when they shouldn't.

I've got other uni work to do for a different subject, too, so that will have to be done before I can afford to spend any more time on this project.

Currently Watching a How-to Collada Video for 3DSMax

Hopefully this video will let me create textured, animated (or at least easily controllable) Collada files.



I've got to admit, I gave a spiteful laugh when he mentioned "I know there's a lot of frustrations out there with the Collada format...".

I've got hardly any time left over before I have to go overseas. I don't think I've even had time to check that I've got everything I need.

Try, Try Again

After a bit of looking into how to create textured Collada models, I found a blog here whose author seems to be having a ton of problems as well. It'd be much better if there were support for good 3D animations and textures, and not just a ton of competing, hackneyed, half-working exporters. But that might just be the frustration of having been at this for four days, doing nothing else and making extremely little progress because exporting DAE files is a terrible experience.

That blog seems to indicate that 3DSMax is the way to go, so I suppose I'll persevere with it now. Now to figure out how, oh how, to get textures exported with a DAE. I followed one tutorial that said all you need to do is assign a standard material to the object in question, then apply a bitmap texture to its diffuse input. But no dice when it was rendered in PV3D.

Nightmares with DAE Files

I'm having a lot of trouble producing a DAE model that will show up well in FLARManager. I have tried using Sketchup's DAE exporter, 3DSMax's built-in DAE exporter, an OPENCollada exporter plugin for 3DSMax, and have tried lots of different settings. OPENCollada will export animations, but I have no idea how to get it exporting textures that will be rendered by FLARManager.

What adds to my confusion is that when I exported a test cube with a simple texture from Sketchup, it worked. I then drew a rectangle on it and pulled the face to make a cup shape, and exported that. That worked fine. Then I drew a rectangle on it again and pulled that to make a wall inside the cup shape, and exported that. The model now doesn't load properly in FLARManager, even after Ctrl+Z-ing to the state that it worked in. I've done this several times, and it appears to consistently occur, so... what gives?

I've had to constantly scale back what I'm trying to achieve because of random little finicky things like this that just crop up out of nowhere.

I even tried using Blender, but its materials apparently also don't get rendered in PV3D. How the heck do you get a DAE working in this program? There's an example animation of a low-poly model that is also textured, and it loads just fine with no issues whatsoever. Checking its file in Notepad++, I saw that it was made using 3DSMax 8, using an exporter called Feeling ColladaMax v3.04C. Unfortunately, that exporter appears to have evaporated from the internet.

So, I'll just keep battling ahead and trying to get any description of a half-decent model showing up. All I wanted was to associate two models of different levels in a building with one marker, so you could hold your hand over a "button" marker and have each level fly out towards you for inspection, but it seems like a ton of stuff is working against that actually happening. It's been about four solid days of trying to get this working, and I'm constantly getting knocked back to square one. Frustrating!

More Thinking about FLARManager

I've noticed that in the example I'm working with, when a marker is removed, its associated object is deleted from the scene. Specifically, with some irrelevant stuff removed, the code is:


var container:DisplayObject3D = this.containersByMarker[marker]; // look up the container (model) for this marker
if (container) {
    this.scene3D.removeChild(container); // stop rendering the model
}
delete this.containersByMarker[marker]; // remove the marker's entry from the lookup dictionary


So, the container (a model) is removed from scene3D (what I currently believe is what contains a 3D scene to be rendered each frame), and then... a container is deleted?

I tried commenting these lines out to keep a 3D model in the scene while its marker disappears, but what actually happens when the marker comes back is another instance of the model is created, leaving the old one sitting where it was. It gave me an interesting idea that AR could be used for quick, relatively intuitive "additive" 3D modelling. This is because whenever a new model is added to the scene, it can intersect other models already in it. So, for example, you could define a set of useful primitives associated with given markers. You could then have a "button" marker that the user can cover to trigger a primitive's marker to drop its model in the location it's currently at.

Rotating a scene set up like that, however, would currently be a mystery to me unless I can figure out how to associate models with other markers, and lock their orientation relative to that marker (otherwise they'd be automatically aligned with it).

In any case, that's just an interesting aside. My next immediate task is to find out how to permanently associate markers with given models, so when you cover a marker, its model stays in that position until the marker is found again.
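
The shape of the fix I have in mind (JavaScript-style pseudocode with made-up names - I still need to find the real FLARManager hooks for this) is to keep the container around when its marker disappears, and reuse it when the marker comes back instead of building a new one:

var containersByMarker = {};

function onMarkerAdded(marker) {
  var container = containersByMarker[marker.id];
  if (!container) {
    container = createModelContainer(marker); // first sighting: build the model
    containersByMarker[marker.id] = container;
    scene.addChild(container);
  }
  container.followMarker = true; // resume tracking the marker
}

function onMarkerRemoved(marker) {
  var container = containersByMarker[marker.id];
  if (container) {
    // Don't removeChild or delete the entry: the model stays put, frozen at
    // its last known transform, until the marker is found again.
    container.followMarker = false;
  }
}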

22.1.11

Further Progress, Now with FLARManager

For anyone reading who doesn't know, FLARManager is what's called a "wrapper" for FLARToolkit's API. That means it basically makes it nice and easy to work with FLARToolkit, giving you a bunch of pre-made things. Sort of like if you were given all the parts that can make a car work (FLARToolkit), and then someone decided to put most of it together for you (FLARManager) so you can focus on learning to drive and pimping it out with extra stuff.

I was told a lot of useful stuff about FLARManager by Jacky Yuen, which has given me a good foothold of knowledge to work with. At first, I couldn't even get the basic FLARManager examples compiling, due to Flash Player errors (I wasn't using a debugging version of the player in Google Chrome). To get it working in Google Chrome, I followed the tech notes Adobe has available here. In case that link changes, an outline of what to do is this:

To install and use an alternate version of Flash Player:

  1. Download and install the appropriate system plug-in. This plug-in could be a debugger, pre-release, or release version of Flash Player. (Archived release versions of the system plug-in are found here.)
  2. Type “about:plugins” (without quotation marks) into the address bar at the top of a Chrome browser window.
  3. Click “Details” at the upper-right corner of the page.
  4. Find the “Flash” (or “Shockwave Flash”) listing for the integrated plug-in on the page and click the corresponding “Disable” button. To identify the integrated plug-in, see the table of plug-in filenames above.
  5. Find the “Flash” (or “Shockwave Flash”) listing for the system plug-in on the page and click the corresponding “Enable” button. To identify the system plug-in, see the table of plug-in filenames above.
  6. Close all Chrome windows and restart the browser.

To re-enable the Flash player once you're done, you just need to go back to the "about:plugins" page, look for the Flash / Shockwave Flash plugins you disabled, then click "enable" for each of them.



After a bit of fiddling, I managed to figure out how to get some cubes showing up on various markers. Currently, I'm testing things using Flash Builder, since I finally got that working. Now the projects compile much more quickly than before: Flash CS5 would take the better part of a minute or two before letting me see what I'd made, whereas Flash Builder takes only a few seconds, and lets you preview the Flash application in your browser.

The stage I'm currently up to is getting multiple (identical) .dae files animating over different markers. I've found the relevant OnMarkerAdded() and OnMarkerRemoved() functions, and I successfully modified two scripts to change from showing multiple cubes to showing simplified models of Team Fortress 2's Scout (which came with FLARManager).

Left to do:
- change models depending on what markers are being hidden / shown, so those markers effectively act as buttons being "clicked" by a user when they cover them up
- animate a 3D section of a building, and a few 3D plans
- see if I can detect whether or not a given animation is finished (so I can have transition animations between different states; ideally, I'd have a "neutral" state for the building model, then removing a marker would animate it back to a "neutral" state, after which it would animate to the state indicated by the removed marker)
- add cool animations to be shown over the tops of markers to act as buttons (hopefully semi-transparent textures are supported by PV3D's renderer); things like "Show Section" and "Show First Floor". I'd like to make it look like an interactive 3D menu is popping up over the marker when the user touches it, so hopefully that will be achievable within the timeframe I have left (3 days...).



The main issue left is deciding what kinds of options to provide the user when interacting with the application, taking into consideration how easy it is to model given things to avoid choking up what should be a simple, quick proof-of-concept process.

Online .pat Generator for Markers Made by Hand

I think one of the tutors mentioned this already, but I only just found the resource: there's a Flash application online (found here) that gets your webcam input, recognises anything it thinks is contained in a marker square, and lets you pick them to export as .pat files. The resolution of the files is pretty low, though.

However, the benefit of it is that you can simply print out a few empty markers, then scribble in them and then use this application to generate the .pat files for you.

I was using this for a while during the day, but when the sun set and I had to use artificial lighting rather than daylight, I noticed the markers stopped being picked up because the .pat files weren't accurate to them. Rather than lowering the confidence needed to detect a marker (it was already at 50%), I decided to try something a little tricky. BuildAR can generate .patt files from .bmps and .gifs. I took a guess that BuildAR was developed from the same base as FLARToolkit, used BuildAR to create a marker, then used the resulting pattern file in FLARToolkit. Lucky for me, it worked! Now I've got much higher-detail pattern files to work with.

I should also mention here that I had a lot of trouble getting FLARToolkit up and running. This wasn't due to FLARToolkit, but to the fact that I was using Adobe Flash CS3 to try to compile the example files. After downloading Flash Builder, FLEX 1.4, and Flash Develop to try to get things working with CS3, I finally gave in and downloaded a trial of Adobe Flash CS5, which now works with no issues whatsoever; luckily, I was able to install it without uninstalling any of CS3. I'm fairly sure that only CS5 can compile the examples properly, though I wouldn't be surprised if CS4 could handle them, too.

Thanks to a lot of help from Josh and Jacky, I've gotten my head around how the Flash IDE works. Tasks currently still ahead of me are:

- get a 3D model other than a generic cube showing up (I thought I was having trouble with it, but now I'm supposing it's due to lighting issues)
- get an animated 3D model showing up and animating
- get a model on marker A changing state (eg, what model is being displayed) when I cover / uncover marker B, thus turning markers into real-world buttons

21.1.11

Overcoming Some Issues with FLARToolkit Being in Japanese

I was trying to find out how to make markers for FLARToolkit by looking through its files, and I found something in .odt format, which I realised is an OpenOffice format. I opened it, and saw everything was in Japanese. Given that everything was formatted nicely with tables and images, it would have been an arduous task of copy+pasting chunks of text over and over into an online translator if I wanted to know which text went with which image. However, I came up with a better solution: Save As .html, then open the result in Google Chrome, which automatically detects foreign-language text on the page and offers to translate it. That way, the page keeps its formatting (in html), and I can save the page from Google Chrome in its translated form.

I think I'll be able to do a similar thing with some of the ActionScript files, too. This should help tremendously!

18.1.11

More FLARToolkit Stuff I Found

How to create a "box head" that opens and closes its mouth as you talk:



The video goes through what you need to download to do what they're doing, as well as covering some basic code they used. I suspect that Flash itself supports retrieving mic input, whereas FLARToolkit's libraries are used to render the box heads. With a few more really helpful search results like these, assignment 3 should hopefully get completed without any hitches.

17.1.11

Figuring Out Assignment 3

For assignment 3, I want to get into the more technical side of augmented reality by trying out something called FLARToolkit to make an interactive model. The idea I have is to have a sheet of paper with several markers on it. One main marker will position the model to be animated. The other markers will be smaller, arranged in a row, and will let the user control the state of the animation by covering or uncovering them.

Josh was a lot of help getting me started with FLARToolkit. Though I've got a lot of experience with programming (Basic, Python, C++, Javascript, etc), I'm new to ActionScript. Hopefully my familiarity with programming generally will help speed the learning process. Also, FLARToolkit has a lot of high-level functionality that seems to handle what I want to do. The main thing for me would be to attempt simplifying the process for myself, and perhaps generalising it to make the resulting Flash program able to have different models loaded into it while maintaining the same functionality.

As I find / figure out useful scripts, I'll post them on this blog.

Some links I was given by Josh to look at are:

3D Objects Interacting with each other (Google Groups)


Question on Multiple markers as a trigger...



Also, yay, API documentation for FLARToolkit.

Issues Uploading to Emustore

For some reason, I couldn't upload my assignment submission to Emustore (all attempts ended up being 0 bytes in size), so as an alternative, I'm uploading it to mediafire.com and am posting the links here.

The version with many extra files in it (intended to give a clearer idea of what I was trying to do with the assignment until I had to scale it down due to BuildAR crashing)
The version with the bare essentials in it (should work just fine, but doesn't show as much of the high degree of structure I was hoping)

Assignment 2 Video

Note: The video is currently being uploaded to Youtube. It would have been up two hours ago, but I encountered a lot of issues and crashes while trying to get BuildAR running with 18 medium- to high-poly models loaded. In the end, I dropped the number of models to something like 6 or 7, which still produces a significant drop in framerate but at least doesn't crash BuildAR.

Edit 1: Video finished uploading.



Edit 2: I made a simpler model and recorded again so I'd have a video up without a horrible framerate.

Design Idea in Progress

My design idea in progress:

For my submission, the requirement of having multiple markers is met through having one marker that can be modified to allow the user to view what they perceive to be the same model in different ways.

The workflow resulting in the final marker was as follows:

A normal marker looks like this, and represents only one model in BuildAR:


For this project, I had the idea of being able to break up a marker into pieces that can be replaced to give the appearance of change in the viewed model:




The idea was refined to a larger marker (double the usual size) to help with handling its pieces. Note the holes in the back of the marker that let me poke the pieces out, instead of potentially damaging them by picking at them with my nail until they come loose.





However, the first try had too many fiddly details for BuildAR to easily distinguish from one another, so I simplified it to this:




The idea was refined somewhat by considering the marker interacting with BuildAR as a "finite state automaton" controlled by the user. The idea works by setting the "state" of the marker, and letting BuildAR react accordingly to that state. For each state you want BuildAR to react to, you need a different model associated with a marker image in that given state. Unfortunately, since this means you need several different models loaded into BuildAR, this idea can very quickly start to take up a lot of RAM.

15.1.11

Assignment 1 Post

(Note: This is a post made using bits of previous posts to fit the requested layout.)

Title of Project: Control Freak

Description:
I'm an avid gamer, so controllers rule my life. It all started with a NES, so I wanted to pay homage by using new technology to capture and replicate its old technology. In a technophile's version of an archaeologist creating a plaster cast of a dinosaur's bones, I want to make a 3D printed 'cast' of a NES controller. While working on it, as with gaming, I'm a perfectionist and completionist. As such, I'm incorporating the words 'Control Freak' in the model, and will subtract an accurately modelled NES controller. The result is intended to be a replacement controller body.


Comparison of Real object and 3D reconstruction 1


Comparison of Real object and 3D reconstruction 2


Render of my reconstructed object


Render of my reconstructed object


Render of my reconstructed object



Supporting assignment material:

Timelapse video of me making the model to be 3D printed:

14.1.11

Mesh Unfolding Plugins for Sketchup

I'm just posting these links here so I remember to give them to Josh this Tuesday.

An automated mesh-unfolding plugin (adds tabs to the mesh so it can be printed, cut out, and glued together); comes as a 15-day trial version or a paid one-year-license version:
Waybe Plugin

A free unfolding plugin, but you have to manually select faces:
Unfold Tool Plugin

Timelapse Recording of Me Making the Model to be 3D Printed

I finally got around to compiling and editing the timelapse images I captured with Chronolapse. I set the video to some music called "One Girl in All the World" by djpretzel; you can find a link to it in the video description on Youtube.

13.1.11

Yes, Yes, and More Yes.

Augmented Reality with Unity3D and SSTT:



I've been making a game in Unity3D for a competition lately, so seeing it combined with augmented reality is a really cool idea. I wonder if it's ok to submit a game that uses augmented reality... It'd be pretty damn cool, I'd say.

Also, here's a video I captured of me toying around with BuildAR, using part of my soon-to-be-3D-printed NES controller model:

11.1.11

Augmented Reality Videos

Here are some videos of augmented reality I found on Youtube that caught my imagination.

Augmented Reality Game for the iPhone:


Augmented Reality Game using the NVidia Target Dev Kit:


Interactive Augmented Reality (Touch Parts of Real World to Trigger Bone Animations):

9.1.11

Inspiration Images / Sources

I should mention where my idea of making a new case and buttons for my NES controller came from. In the past, I've seen several instances of people modding their old consoles to have new parts, or so that they serve new functions. The first one that stood out to me was someone who modded their old NES into a properly functioning PC, with a disc drive in the cartridge slot and the other ports swapped out for the typical ports you'd find on a PC.

Some mods I'd heard of before looking into it further for this subject were a SNES toaster (the Super Nintoaster), a NES controller stripped out and used as an iPod Shuffle, and sundry case mods. A little online research turned up a ton of sources of people modding their old console hardware. To narrow down the results, I focused on Nintendo-related modding that caught my eye.

Here are a few images from the searching, including the URLs they were retrieved from (listed before each set of images taken from that URL).

An iPod Shuffle Mod of a NES Controller (http://everyjoe.com/technology/nifty-nes-controller-ipod-shuffle-mod-130/)


The Super Nintoaster (http://walyou.com/super-nintendo/)


NES Modded as a PC (http://www.geekologie.com/2009/06/im_on_to_you_snes_actu_a_pc.php)


Entire NES System Fitted into NES Cartridge, Ports and All (http://gizmodo.com/382452/nes-cartridge-modded-into-nes-system-space+time-remains-intact)




NES Cartridge Modded to be a Handheld System (http://walyou.com/portable-nes-cartridge-console-mod/)




NES Modded to be an Alarm Clock (http://www.infendo.com/nintendo-spotting-pointless-nes-mod-edition/)


I love the idea of revamping old technology to take on new functions, and of reinterpreting old technology in new ways to get fresh enjoyment out of it. That's really the main reason I made my project the way I did.

3D Model Finished-ish

I've got a 3D model completed that should be just fine in the 3D printer. I had a few confusing issues with holes appearing in my models after performing Pro Boolean operations in 3DSMax, but the "Cap Holes" modifier fixed them in most cases. The one instance it didn't fix was when one solid had a double face, which I eventually realised I could simply delete (along with a resulting unconnected edge).
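For anyone wondering what "holes" and "double faces" actually mean at the mesh level, here's a minimal sketch in plain Python (not 3DSMax's scripting language; the tiny example mesh is made up). An edge on the boundary of a hole belongs to exactly one face, while a double face is the same vertex triple listed twice, which also hides its boundary edges from a hole detector; that may well be why Cap Holes couldn't fix that case for me.

# Minimal sketch (plain Python, example mesh made up): detect hole
# boundaries and double faces in a triangle mesh.
from collections import Counter

def mesh_problems(faces):
    # faces: list of (v0, v1, v2) vertex-index triples
    edge_count = Counter()
    for a, b, c in faces:
        for edge in ((a, b), (b, c), (c, a)):
            edge_count[tuple(sorted(edge))] += 1
    # An edge used by exactly one face lies on an open boundary (a hole).
    boundary_edges = [e for e, n in edge_count.items() if n == 1]
    # A double face is the same vertex triple appearing more than once.
    face_count = Counter(tuple(sorted(f)) for f in faces)
    double_faces = [f for f, n in face_count.items() if n > 1]
    return boundary_edges, double_faces

# Two triangles sharing an edge, with one of them duplicated:
holes, doubles = mesh_problems([(0, 1, 2), (0, 2, 3), (0, 2, 3)])
print(holes)    # the patch's open edges, e.g. (0, 1) and (1, 2)
print(doubles)  # [(0, 2, 3)], the double face that has to be deleted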

My main hope is that the buttons are in the right positions, and that they will sit at the right heights to be able to press the rubber contacts below them. It was a bit of a battle to get all the measurements done accurately for the different relevant parts of the NES controller's circuitboard and rubber contacts, but I eventually figured it out.

Here are some 3D "renders" of the model...

An angled perspective view of the controller with the buttons and D-pad placed in the top half:


A top view of the controller with the buttons and D-pad placed adjacent to the top half:


An angled perspective view of the buttons and D-pad on their own:



One thing I didn't know how to show in the renders is that just underneath the lettering on the buttons, I subtracted the same lettering again, since that's something an average designer couldn't achieve to the same quality in any medium other than 3D printing. I'm hoping it will show up when the controller is looked at under bright light.

Now to crunch some numbers. Using Jeremy's approximation that printing a 10cm cube costs about $100, noting that a 10cm cube is 1,000,000 mm^3, and noting that my model fills a rectangular prism of about 300,500 mm^3, the cost for my model should be (300,500 / 1,000,000) * $100 ≈ $30.
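As a quick sanity check of that arithmetic (same numbers, just in code):

# Jeremy's rule of thumb: a 10cm cube costs about $100 to print.
cube_volume_mm3 = 100 * 100 * 100   # 10cm cube = 1,000,000 mm^3
cube_cost_dollars = 100.0
model_volume_mm3 = 300500           # my model's bounding prism
cost = model_volume_mm3 * cube_cost_dollars / cube_volume_mm3
print(cost)  # 30.05, i.e. about $30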

That's a pretty good price for a funky new NES controller case! :D



Currently, I'm wondering if I could make use of the remaining space in the aforementioned rectangular prism. The case is mostly hollow, so something like a simple chain could fit inside it, meaning I could well make a necklace or bracelet or something. If I can't come up with anything reasonably soon, I'll just submit the model I have and leave it at that.

8.1.11

Thinking Through What to Model

I was originally intending to create the 3D model so it could have screws put into it and also have the circuitry sit in place easily. However, looking into the feasibility of doing that, there are a lot of places where a small error would result in the entire thing not fitting together. The tolerances for the screw holes, for example, are probably finer than I can measure by hand, quite possibly resulting in a hole that is too tight or too loose. A tight hole could be tediously sanded down until the screw fits, but that would be rather time-consuming, and could still result in a poor fit.

For an idea of the complexity of what would need to be modelled, here's a photo of the inside of the case:



So, instead of having screws and such, I think it would be easier for me to do the following:
- tape the back of the cord to the circuitboard (it would normally be secured by weaving between some parts that poke out near the cord's exit hole),
- make the inside of the case intentionally deeper than the circuitboard needs to sit with the buttons on top; this way, I'll be able to pad it with, say, folded paper until it is precisely in position.

This should save me a few hours of modelling non-aesthetic details, and reduce the probability of the model printing incorrectly.

Photogrammetric Comparison

Here are some comparison images showing the accuracy of the point cloud generated by the Photosynth I picked for the final 3D model. It's a little hard to see, but the buttons had some trouble standing out in the surface reconstruction. Inspecting the point cloud, however, they looked to have been captured with appreciable accuracy. The issue, then, is that the mesh generated by the surface reconstruction was slightly smoothed out rather than staying jagged. The corners of the point cloud are positioned extremely accurately, which is very clear when looking at the Photosynth.
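If I wanted to actually put a number on that smoothing rather than eyeballing it, one way (just a sketch; it assumes the point cloud and the reconstructed mesh's vertices have been exported to plain-text XYZ files, which isn't something Photosynth hands you directly) would be to measure each cloud point's distance to the nearest mesh vertex:

# Sketch only: assumes hypothetical exports of the point cloud and
# mesh vertices as N-by-3 whitespace-separated XYZ text files.
import numpy as np
from scipy.spatial import cKDTree

cloud = np.loadtxt("point_cloud.xyz")
mesh_verts = np.loadtxt("mesh_vertices.xyz")

# Distance from each cloud point to its nearest mesh vertex. This
# uses vertices rather than the surface itself, so it slightly
# overestimates, but large distances still flag smoothed-away detail.
distances, _ = cKDTree(mesh_verts).query(cloud)
print("mean error:", distances.mean())
print("max error: ", distances.max())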