Wednesday, May 30, 2007
BoingBoing had a post linking to a flickr set of a wildly melted keyboard resulting from arson. In a related, more deliberately engineered affair, a 419eater scam-baiter re-scammed an email scammer into making elaborate wooden carvings of strange objects, such as toys and a Commodore 64. I originally read about 419eater.com in The Atlantic, though you need a subscription to read the full article... a great one, though.
Monday, May 28, 2007
It seems that Jeff Hawkins' book, On Intelligence, and the goals he set out within it are finally making a splash. Posts on Slashdot and BoingBoing, as well as stories in the NYT (you know, subscribe) and other outlets, are reporting that Hawkins, longtime Palm collaborator Donna Dubinsky, and researcher Dileep George have founded Numenta, which is going to attempt to implement Hawkins' theories from On Intelligence.
Last semester, I was fortunate enough to read this book with a group of Cognitive Science majors, at the behest of the all-knowing Prof. Broude, for the Cog Sci book club. Alongside some heartier works, such as William Uttal's The New Phrenology (recommended to destroy your hopes of localizing brain processes), Lakoff & Johnson's Philosophy in the Flesh (recommended to destroy your love affair with just about all philosophy post-Aristotle), and Damasio's The Feeling of What Happens (just plain recommended), On Intelligence was a bit of a breeze to read (thanks in part, no doubt, to bestseller-maker Sandra Blakeslee). But it did set out a fairly radical approach to understanding cortical function and how it underlies intelligent behavior. Hawkins' theories, I believe, fit in as the next step from the work of Joaquin Fuster, Gerald Edelman, Vernon Mountcastle, and others. They all emphasize the distributed, networked nature of cortical processing, the role of feedback, and the importance of memory as a key cortical process. In my humble opinion, the work of these guys pretty much represents the future of brain research and, as Hawkins is showing, perhaps the future of AI as well.
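For what it's worth, the core "memory-prediction" idea from On Intelligence can be made concrete with a toy sketch. This is emphatically not Numenta's algorithm (their work is hierarchical and far richer); it's just a bigram memory that predicts the next token as whatever most often followed the current one, purely to illustrate the "prediction from stored sequences" flavor of the theory:

```python
from collections import defaultdict

class SequenceMemory:
    """Toy illustration of 'intelligence as memory-based prediction':
    store observed transitions, then predict the next token as whatever
    most often followed the current one. A bigram counter, nothing more;
    it just makes 'predict from stored sequences' concrete."""

    def __init__(self):
        # counts[a][b] = how many times b followed a
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        """Memorize every adjacent transition in the sequence."""
        for a, b in zip(sequence, sequence[1:]):
            self.counts[a][b] += 1

    def predict(self, token):
        """Return the most frequently stored successor, or None if novel."""
        followers = self.counts.get(token)
        if not followers:
            return None  # no stored memory to predict from
        return max(followers, key=followers.get)

mem = SequenceMemory()
mem.observe(list("abababac"))
print(mem.predict("a"))  # 'b': it followed 'a' three times, 'c' only once
```

The interesting part isn't the counter, of course, but Hawkins' claim that cortex does something like this hierarchically and temporally at every level.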
So, nobody tell the MTA that there is free wireless in GCT. Even though I just told you. By tracks 114 and 113 in the food court. Awesome.
A note on my drug-induced (read the post) comments yesterday. I've cheated, and I've been reading Damasio's Looking for Spinoza rather than the Gary Marcus book, which it turns out is deeply related to Gladwell's latest (Gladwell cites Damasio and talks about Descartes' Error a fair bit, and Damasio uses the work of Ekman, who Gladwell tipped me off to initially). Anyway, like any good cognitive scientist, Damasio manages to weave drugs into the discussion of feelings, emotion, and the brain. And what should he cite as the best resource on drug experiences? Why, Erowid, of course.
This, along with Dennett's extended rant on hallucinogens at the beginning of Consciousness Explained, as well as my own, ahem, experiences, simply strengthens my claim that, like the buzz-driven computer science of the '60s and '70s, cognitive science is the domain of drugged-up and/or semi-psychotic intellectuals with too much or too vivid mental imagery... Well, this is mostly conjecture, although this article tosses in a number of shady references, such as how Timothy Leary's daughter took up programming (before she shot her boyfriend and hanged herself in jail... hmmm).
But some might take issue with this idea, so I'll just limit the extent of that claim to myself and, oh, nearly every cognitive science student I've known at my darling home institution. Of course, cog sci, drugs, and mental illness don't always travel together; if they did, everyone at my home institution would qualify to study the mind. There may, however, be a case for describing all humans as cognitive scientists. But my train is leaving in 9 minutes, so no time to defend the validity of that particular radical claim.
Thursday, May 17, 2007
This is reminiscent of many examples throughout cog sci of attributing agency, or at least intentionality (and thus developing concepts of personality and attachment), to inanimate or unconscious entities (there is some research on whether god concepts form in this way). A favorite example is the set of experiments in which participants watch a scene of shapes moving around and appearing to interact, and then report "his" and "her" interactions on the screen. A more interactive version is the robot simulation software BugWorks, which I recall having to work with before attempting more elaborate physical implementations. When multiple bugs were interacting, it was almost inevitable that we would attribute intentionality to the interactions, especially behaviors such as chasing (which seemed like mating to some).

I think this raises a question: is the difference between object anthropomorphization (e.g. cars, ships, etc.) and bonding with robots the fact that robots display an increasingly apparent degree of goal-directed (and, depending on the definitions and implementations, perhaps intentional) behavior? Such behaviors appear to have special resonance for humans, and are probably among the basic elements of human social bonding. As robot movement and appearance become more lifelike, I think it will be inevitable that people treat robots more like equal agents. And this is a very big deal, as even iRobot is getting into developing military-grade field robots that may be deployed in very large numbers; the "robot stock news" blog has some very interesting information about the development of, and soldier interactions with, the iRobot PackBot. Looks like some very interesting times ahead.
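To see how little machinery it takes to trigger the "chasing" read, here's a minimal simulation sketch (not BugWorks itself, whose actual interface I don't remember; every name here is made up for illustration): one "bug" drifts randomly while another steers toward it slightly faster. Watching the resulting trails, it's hard not to describe the outcome as pursuit, even though each rule is a few lines of arithmetic.

```python
import math
import random

def step_toward(pos, target, speed=1.0):
    """Move pos toward target by at most `speed` units."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return target
    return (pos[0] + speed * dx / dist, pos[1] + speed * dy / dist)

def simulate_chase(steps=50, seed=0):
    """One bug wanders randomly; the other homes in a bit faster.
    Observers tend to describe the result as 'chasing', though each
    agent is just arithmetic with no goals or intentions anywhere."""
    rng = random.Random(seed)
    chaser, wanderer = (0.0, 0.0), (20.0, 20.0)
    trace = []
    for _ in range(steps):
        # wanderer takes a small random step
        wanderer = (wanderer[0] + rng.uniform(-1, 1),
                    wanderer[1] + rng.uniform(-1, 1))
        # chaser steps toward the wanderer, slightly faster on average
        chaser = step_toward(chaser, wanderer, speed=1.2)
        trace.append((chaser, wanderer))
    return trace

trace = simulate_chase()
start_gap = math.hypot(20.0, 20.0)
end_gap = math.dist(trace[-1][0], trace[-1][1])
print(f"gap shrank from {start_gap:.1f} to {end_gap:.1f}")
```

The point is that "pursuit" lives entirely in the observer: the chaser has no representation of the wanderer as an agent, just a direction and a speed.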
Oh yeah, today I got the Scooba I ordered last week. My grandfather, who is 83, was so excited to say "I have a robot" that I'm afraid he's going to be too cool for school when he goes back to Boca and says to the old folks, "remember the Jetsons?"