Hi all,

I wasn’t sure exactly what we decided to do next Wednesday, but if we want to continue the last discussion we had, I recommend we read the next section in the Terranova book (on network time). Al suggested we could also discuss my paper on Simulation and Post-Panopticism (uploaded in a previous blog post), and I’m happy to do this too.

See everyone next week.

Bill

Hi all,

Here is the Terranova book via WordPress! Please read the Intro and the 1st section.
Terranova – Network Culture.pdf

I think I uploaded the wrong thing yesterday (still thinking of Al’s topology talk last week). I actually wanted to provide a link to a paper I wrote on surveillance for the Routledge International Handbook of Surveillance Studies. The title is “Simulation and Post-Panopticism.” It’s amid a couple of other files in the folder below if you’re interested. I published a book on surveillance almost 14 years ago, so the readings for tomorrow were very interesting.

https://public.me.com/bogard

Apropos of tomorrow’s meeting with Justin: the Jonah panel of the Sistine Chapel, a remarkable expression of pictorial depth. Painted on the curvature of the ceiling, Jonah appears to lean back in space even though the actual ceiling curves forward (down), and his legs appear to jut forward even though the ceiling moves back (up). Michelangelo’s understanding of how depth is a surface effect was amazing.

 

http://www.backtoclassics.com/gallery/michelangelo/sistinechapeljonah/

Today we talk about space from a visual perspective, and one computational challenge is how to design virtual space so that it has depth, so that one can move around in it. Haptic space involves the problem of how to give the feeling of movement and interaction in virtual space, e.g., how to supply virtual objects with texture, weight, pliancy, etc., and also how to convey kinesthetic and proprioceptive feelings in the act of maneuvering through virtual space… Here’s a brief video of an early experimental technology.

An example of how the passage of “pure time” is captured for the first time in cinema: a clip from Ozu’s “Late Spring,” in which the filmmaker captures a “becoming,” though the form of becoming remains the same.

The Second Life topic for today made me want to examine again what it might mean to take the “machine’s point of view” when it comes to the evolution of virtual worlds. Does it make any sense to ask how humans are seen from the point of view of their avatars? What kinds of “machinic” intelligence, perception, and affection do avatars have? Can we imagine a world where the avatars view us as so many catalysts to their evolution and reproduction? At what thresholds do avatars discard their human supports and develop means to program and think for themselves? Is such a scenario impossible, or is it happening already in some form (e.g., self-programming neural net research)? Philosophers like Hubert Dreyfus argue that it is impossible for “smart machines” ever to develop human intelligence, capacities for empathy, feeling, etc. But more and more, perhaps, we are pushed to assign something like “real” intelligence to technologies. Many human decisions these days occur only after consulting simulation models devised by machines (the Pentagon has long been working on such automated decision capabilities for war technologies).

Just the thoughts of someone who has read too much about this stuff 🙂 Bill