Sunday, April 29, 2007

Evolution on multiple scales


I really liked the part of Emergence that compared cities to brains. I've always wondered about this type of concept: what sorts of things can experience evolution? In an earlier part of Emergence, the author stated that anything that reproduces imperfectly and is faced with conditions that create a struggle for resources can evolve. So, one could then say that, for instance, our cells experience evolution. An individual cell needs energy, which it gets from the food broken down in our bodies, to survive. Also, our bodies only get a finite amount of energy at any given time, so one could imagine that some sort of competition exists between our cells to get hold of this energy. After all, a cell that is better programmed to get at the food it needs is able to survive longer and replicate (except in the case of brain cells, I think). So why don't the cells in our stomach, which get first dibs on food, selfishly take it all for themselves? Not sharing better ensures their survival, at least in the short run. So why do they share?

You can't answer this question by examining it at the level of the cells themselves. You have to look at the larger structure: the human being. I have no idea how these cells got together and decided to form this larger structure, but once they did, it became in their favor, in the long term this time, not to be selfish. If our cells all rebelled against the system and tried to hoard all of the food for themselves, we would die, which would mean they would die. So we have natural selection, and evolution, happening on two very different scales here. And the fact that we can walk and talk makes it obvious that the larger scale predominates. Why would the larger scale predominate, and what does this imply about even higher orders of evolution?

I imagine that higher orders of evolution predominate over longer time scales. Natural selection acts over the scale of generations. Generation times are much much shorter for cells than for humans. So, one could theoretically expect cell evolution to be important over shorter time spans, time spans we don’t care about. But over the time scales that matter to us, our evolution wins out. Now, what about higher time scales? Eventually, if the inhabitants of a city run rampant, without regard to a higher social order, the city will be destroyed, or at least, not worth living in. One could say that there is an evolution occurring amongst cities. They compete for resources, and those that compete best survive the longest. One could say that only those with “smart inhabitants” (analogous to “smart cells”—ones that aren’t completely selfish) will survive and prosper. One problem I see with this idea is the following: do cities really meet all the conditions for evolution that Dawkins described in The Selfish Gene? Specifically, in what way do cities evolve? It could just be that cities have an indefinitely long generation time, so the question is not “Who can best reproduce” but instead “Who can live the longest?” I don’t know. What do you think?

Sunday, April 22, 2007

Tank, no tank, tank, no tank...



This book looks awesome. I was particularly interested in the idea that programs can evolve in the same way living things evolve through natural selection. The idea came from Dawkins's The Selfish Gene, in which he stated that in order to see evolution by selection, all you need is variation in a population, a way for members of the population to reproduce, and for them to reproduce imperfectly. If you then have a selecting force (such as limited resources) that favors "fitter" attributes, you can drive evolution. The idea that you can do this with computers sounds so cool. You can model situations that are so mathematically complex that it would be very difficult to find the best possible parameters using your own logic. For instance, say you want to build an ideal airplane wing. Now, the aerodynamics of those things are a bitch to figure out, so you can't expect to find the best design through your own calculations. So what you do is write a program that takes a bunch of different airplanes with different types of wings, and it makes them fly. Then, it takes the ones that fly the best and lets them reproduce in such a way that the planes' "offspring" are similar to, but not the same as, the parents. You repeat this process over and over, and in the end, you are (theoretically) left with airplane wings that fly really well. You then look at the parameters of the wings, and you build your real wing based on those parameters. It sounds genius.
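Just to make the idea concrete, here is a minimal sketch of what such an evolving program might look like. Everything in it is invented for illustration: a toy fitness function stands in for the real flight simulation, and a wing is reduced to just two numbers (span and camber).

```python
import random

random.seed(1)

def fitness(wing):
    """Hypothetical stand-in for a flight simulation: scores a
    (span, camber) pair, with the best wing at span=30, camber=0.04."""
    span, camber = wing
    return -((span - 30) ** 2 + 1000 * (camber - 0.04) ** 2)

def mutate(wing):
    """Imperfect reproduction: an offspring is a slightly perturbed
    copy of its parent."""
    span, camber = wing
    return (span + random.gauss(0, 0.5), camber + random.gauss(0, 0.002))

# Start with a random population of candidate wings.
population = [(random.uniform(10, 50), random.uniform(0.0, 0.1))
              for _ in range(50)]

for generation in range(100):
    # Selection: only the fittest quarter get to reproduce.
    population.sort(key=fitness, reverse=True)
    parents = population[:len(population) // 4]
    # Variation: imperfect copies of the parents refill the population.
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print(f"best wing: span={best[0]:.1f}, camber={best[1]:.3f}")
```

After a hundred rounds of select-the-best-and-copy-imperfectly, the population converges on a good wing without anyone doing the aerodynamics by hand. It is, however, very complicated in practice, and can produce unintended results, as my engineering-major boyfriend pointed out. Consider this: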

The US military was trying to find a way to tell whether or not a tank was hiding in some particular area. So, they thought they might apply this idea of evolving programs to design a program that could learn to read a picture and figure out whether a tank is hiding in it. So what they did was go out one day and take thousands of pictures: tank, no tank, tank, no tank, etc… They came back, and they fed these pictures to the program. Now, the program started out really dumb—it had no idea what was a tank and what wasn't, so it initially guessed randomly whether there was or wasn't a tank present. So in making its predictions, the computer messed up a lot at first. But the programmers were patient. Every time the computer messed up, the programmers let it know, so it redesigned its schema to better fit the definition of "tank" vs. "no tank". Eventually, the computer was able to perfectly tell which pictures had tanks and which did not. To make sure the computer was not just memorizing the pictures, they saved a couple that they did not show to the computer until the end, and it managed to get those right as well. So, the military was very happy because they thought they had made the computer learn the difference between tank and no tank. Then, as a final test, they went out and took some new tank & no tank pics and showed them to the computer. They were unpleasantly surprised: the computer had gone back to being dumb again—it got just as many wrong as right. What happened? Why could the computer figure out the first group of images but not the second? It turned out that in the first group of images, all of the "tank" pictures had been taken in the morning, but all of the "no tank" pictures had been taken later in the day. They hadn't taught the computer the difference between tank and no tank: they had taught it the difference between morning and afternoon! Just a reminder of how complex these things are…
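You can reproduce the spirit of this failure in a few lines. What follows is purely a toy of my own invention (no neural network, just a brightness threshold), but it shows how a classifier can ace held-out photos drawn from the same confounded shoot and then fall apart on genuinely new ones:

```python
import random

random.seed(0)

def make_photo(tank, morning):
    """Toy 'photo': two numbers standing in for image statistics.
    Brightness tracks time of day; clutter is irrelevant noise."""
    brightness = (0.3 if morning else 0.8) + random.gauss(0, 0.05)
    clutter = random.gauss(0, 1)
    return (brightness, clutter, tank)

# The original shoot: every tank photo taken in the morning, every
# no-tank photo in the afternoon -- the hidden confound.
photos = [make_photo(tank=True, morning=True) for _ in range(500)]
photos += [make_photo(tank=False, morning=False) for _ in range(500)]
random.shuffle(photos)
train, held_out = photos[:800], photos[800:]

# "Learning": pick the brightness threshold that best splits the
# training labels -- a stand-in for whatever the network latched onto.
threshold = sum(b for b, _, _ in train) / len(train)

def predict(brightness):
    return brightness < threshold  # darker photo -> guess "tank"

def accuracy(batch):
    return sum(predict(b) == tank for b, _, tank in batch) / len(batch)

print("held-out photos from the same shoot:", accuracy(held_out))  # ~1.0

# Fresh photos taken at random times of day: the confound disappears.
fresh = [make_photo(tank=random.random() < 0.5,
                    morning=random.random() < 0.5) for _ in range(1000)]
print("fresh photos:", accuracy(fresh))  # ~0.5, back to guessing
```

The toy "classifier" never learns anything about tanks at all; it learns the time of day, which is exactly what happened to the military's program.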

Monday, April 16, 2007

HTML 2 class

I attended an HTML 2 class a while back, and I was just too lazy to post about it until now. It was pretty interesting; I learned how to make my web page valid and all that good stuff. The most interesting thing was learning that HTML basically just defines the layout of the page and tells Google's spiders and such the main parts of the site, which are used to generate keywords. For the actual content and design, one then uses other tools, such as Dreamweaver and Fireworks. The class will hopefully be useful for my final project; I am a bit more comfortable with HTML now, so I should be able to edit the source directly when I need to.

Sunday, April 15, 2007

McDonalds makes people unhappy? Please!



On Thursday, we began watching Lost in Translation. We were supposed to look at it through the frame of Augé’s nonplaces. According to Augé, a nonplace is a space that lacks connections through history or identity. People in these nonplaces are using the spaces simply as means to an end—for instance, an airport is just used for transit. A Starbucks is used to get coffee and get the hell out of there (at least, it is for many people). For Bill Murray’s character, Tokyo was definitely a nonplace. He didn’t want to be there; he was just there to make a quick buck filming a commercial. Whether it was a place or a nonplace for Scarlett Johansson’s character is a bit less clear. She had no obvious purpose for being there; she was just following her husband for his work.

A possible argument one could make with respect to the film is that the nonplaceness of Tokyo adds to the characters' feelings of sadness and being alone. The city is very commercial; the characters move around it, seeing all of the people and all of the lit up signs, without really understanding what is going on—they don't know the language or the people. One could argue that globalization and commercialism isolate the characters further and take away feeling. However, this is not necessarily true. For example, when Scarlett Johansson's character visits a Buddhist temple, she later calls her friend (or family member) and tells them that she felt nothing in the temple. If a Buddhist temple is not a place by Augé's definition, I don't know what is. One could argue that globalization's effects lingered in her, rendering her unable to feel even when she left the nonplace. That argument seems grossly unfair, however. If one could use that argument, then how could one ever possibly disprove the hypothesis that globalization leads to these characters' malaise?

Ignoring globalization, one can develop a reasonable theory as to why Scarlett Johansson's character is unhappy. First of all, she is a recent college graduate with a major that offers little opportunity to find a job that is as intellectually satisfying as the field itself. It seems a little strange, given that the character seemed quite intellectual, that she did not continue on to graduate school with an eye toward academia. Now, I'm not a psych major, but I am inclined to say that she is very much not self-actualized. For one thing, she is only 22 or so, and she has been married for 2 years. If my math is correct, that means she got married when she was a sophomore or junior in college. Now, I know this works fine for some people, but it is incredibly dangerous to get married at such a young age if you don't know who you are. Judging by the fact that she doesn't know what she wants to do with her life, I would guess she doesn't know who she is. So, getting married at 20 was a bad idea. And she seems to be thinking so herself, based on her phone call to her friend in which she said "it's like I don't even know the man I married." No wonder she seems unhappy—and it doesn't matter if she's in a Buddhist temple or in the biggest McDonalds in the world, she's still going to be plagued with these problems.

Her attitude in this, and the overall tone of the movie, reminded me a lot of a poem by Charles Baudelaire called Spleen. In it, he expresses feelings of ennui and unhappiness with no obvious origin. The narrator in this poem says it is Nature herself who seems to be pressing upon him and causing his unhappiness. Now, to be fair, he had no globalization to blame—this was written in the 1800s. I'm just using this as an example of my overall theory (which agrees with what I think John said in class) that people make their own unhappiness. They just also look for something convenient to blame.

Tuesday, April 10, 2007

The relativity of place



I was walking on good old State Street this morning, and I saw a girl walking from Starbucks back to the Towers, carrying a grande no-whip something or other. She was dressed in a very coastie manner, sunglasses and all, looking straight ahead at her destination and ignoring all those around her. State Street, at least for her, is certainly a non-place. She was using it simply as transit between her apartment and Starbucks.

I figured State Street would be worth further study as a non-place. So, I returned later today to see what I could find. What I found did not at all support my hypothesis that State Street was a non-place. I saw people sitting around with their dogs, talking to each other and enjoying the above-freezing temperatures. Scanner Dan was there as well, offending some group of girls near Einstein's. It was too cold for the Piccolo Guy or the crazy sci-fi spraypainter, but from what I saw, it was pretty clear that State Street has a definite culture, years in the making. I would consider State Street to be the heart of Madison, and I love it. So it is impossible for me to see State Street as a non-place.

However, that it was a non-place for the coastie girl is pretty undeniable. So, I have to conclude that place vs. non-place is really just relative. It depends on who is using it, and perhaps just on the day. Maybe the coastie girl was just feeling antisocial, and would normally treat State Street as the place it deserves to be. I myself have sometimes treated Espresso Royale as a place, and used it to meet with people; I have also occasionally used it as a non-place, and just gone for their good chai. Looking at it all from a post-modern point of view, you can't say anything absolute about the place itself--it is only how people interpret the place that defines it, and that can change from day to day.

Tuesday, March 27, 2007

What in the world should I do with my life?

Ok, so I'm now going through a dilemma that I'm sure is not too uncommon: I have no idea what I want to do after college. I am a junior planning on 5 years, and I am majoring in math, physics, and French. My original plan when I came to college was to go to graduate school in physics. I've recently decided that I do not think physics graduate school is right for me. This is for two main reasons: 1. I haven't been thrilled about the research experiences I have had; they were not all that exciting to me. 2. I have recently become very interested in medical sciences. When I hear about some new study with the brain, or the HPV vaccine, or any new medical development, I get very excited.

So, I was thinking of medical school as a possible option. However, problems with this include: 1. I may be more interested in the research aspect of things, so getting an MD makes slightly less sense than getting a PhD. 2. Med school is hella expensive, and although I have heard you can easily pay off the loans on a doctor's salary, I would want to do pro bono work in Africa or somewhere like that for a while once I finish, which would leave me flat broke :(

I considered an MD/PhD program, which is good because 1. you get the MD and the research, and 2. it's free! However, it is impractical because 1. it is nearly impossible to get into, and 2. it takes about a million years to finish.

Also, medical physics does not seem like a great option, because it seems somewhat limited in scope (meaning if I got a PhD in it, I'd be working with radiology machines my whole life.)

So, I am opening my future up to the class. Anyone here interested in medical research? What are your plans? Any suggestions?

Monday, March 26, 2007

How to make a person


There are several new technologies that raise some interesting questions, such as: what does it mean to be human? How do you define consciousness? For instance, if Dan and Steve go under the knife, and Dan gets Steve's brain, and Dan's brain and Steve's body both get chucked, who is still alive—Dan or Steve? Technology hasn't quite come far enough for that particular question to be relevant, but it can be thought of as an extreme example of the sorts of questions that are.

First of all, millions of people use antidepressants and electroconvulsive therapy in the treatment of depression. Many people, depressed and non-depressed, are a little freaked out by these things and would hesitate to try them, because they are afraid that the drugs or the ECT would "change" them. I think this is a really interesting idea that a lot of people I have talked to seem to share. People seem to think that by using a drug or something else that alters your brain chemistry, you are making yourself a different person. I think Phineas Gage is a good example. He was a railroad construction foreman who got a tamping iron literally driven through his skull, which caused major damage to the frontal lobe of his brain. According to friends and family, Gage was a totally different person after the accident. It makes you wonder what the change seemed like to Gage: when he got the iron driven through his head, the Phineas Gage everyone knew and loved died. Did the Phineas Gage that Phineas Gage himself knew and loved die as well? If you change your personality such that you become unrecognizable to the world, are you still the same person? I am not going to claim that taking Prozac is the same as getting an iron driven through your brain, but in the sense that they alter the way your mind works, the two are similar.

Another interesting question can be raised about the increasing power of artificial intelligence. As a neat example, check out Kismet, a robot built at MIT that was made to simulate social interactions. He physically responds to people much like other people do. For instance, he has a range of stimulation in which he is happy. If nothing much is going on around him, he will get bored. If someone walks into the room, he will appear interested and look around. If you wave something right in front of his face, he will display fear and back away to maintain his personal space. Watching videos of Kismet, I thought he was the cutest little thing and I fell completely in love with him. He was like a puppy or a little kid; he certainly seemed real to me. When we can create robots that can elicit this sort of emotional response from people, we have to start wondering to what extent the robots actually are real. After all, we are ourselves bio-electronic machines. We cannot empirically see any physical soul anchored in our bodies; therefore, how can we really say we are any different from robots? And when we see Kismet respond in interest or in fear, can we really say that what he is feeling is not real?
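Kismet's real architecture is far more sophisticated, but the homeostatic idea described above can be sketched in a few lines. The thresholds and responses here are entirely made up:

```python
def kismet_mood(stimulation):
    """Toy homeostat in the spirit of Kismet's drives: comfortable
    within a band of stimulation, bored below it, fearful above it.
    (All numbers and messages are invented for illustration.)"""
    if stimulation < 0.2:
        return "bored: scans the room for something interesting"
    if stimulation > 0.8:
        return "overwhelmed: backs away to protect personal space"
    return "happy: engages with whoever is present"

for level in (0.05, 0.5, 0.95):
    print(level, "->", kismet_mood(level))
```

What strikes me is how little machinery it takes before people start reading emotion into the behavior.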

Also, on a less philosophical note, I played around a lot my freshman year with various chatbot programs—programs designed to chat with you like a real person would. My favorite was Billy—a bot who started out with an extremely limited vocabulary, but who built up his sense of vocab and grammar based on your interactions with him. Here are a couple of quotes I managed to get him to actually say:

Billy: “When I was younger, I had a tappin’.”


Billy: “Actually, my girlfriend has my mother.”

Erin: “You sick bastard!”

Billy: “My mother used to tell me that!”


Erin: “Will you be my omega?”

Billy: “Not for a million dollars!”

Monday, March 19, 2007

Real-world holodecks


The “Identity Crisis” reading had a very interesting example about a woman who became disabled after an accident, and used a character she created over the internet to help herself learn to live with her disability. I was quite surprised by this example because I feel it goes against the norm when it comes to creating characters over the internet: hers is a case of confronting the realities of her life head-on, in a manner that is safer than doing so in the real world. I felt like I should at least mention her example because it is a very strong counterexample to what I'd like to talk about here: the use of the internet for escapism. For instance, there is always the guy who spends more time playing World of Warcraft than studying. On Facebook, you can waste hours reading about the (real or unreal—who knows?) facets of the lives of people you never see anymore. Second Life, an internet community that has over four million accounts, allows users to trade actual money for fake money that they can use in the game. Think about that: you can take the money you earn after a hard day's work and spend it on improving your character's life—not your own. Over four million accounts?? What are these people getting out of this?

The concept really isn't that new. Consider Walden, written way back when in the 1850s. Thoreau gets so sick of the society he lives in that he runs off to the woods and writes about how great a time he has. Before kids had video games to replace reality, they played pretend. People still enjoy getting lost in a book. These online communities like WoW and Second Life do the same thing as these other methods of escapism, but they make it easier to escape, and they offer a greater depth of escape. First, anyone with an internet connection can sign up for one of these things, and it takes literally minutes to get started. Second, the number of different "realities" you can find yourself in, or make for yourself, on the internet is quickly approaching infinity. Whatever your interest, someone else out there probably shares it, and is probably talking about it in some chat room. Third, the internet has the ability to integrate multiple media (in the case of WoW—images and sound) to create a very realistic environment. Because of the ease with which one can now escape to the internet, more and more people are using it to escape the everyday world. Is this healthy? Are people better off spending so much of their lives devoted to these fake worlds? I really can't answer this, and any opinions would be quite welcome, because, frankly, I am stuck. From a psychological viewpoint, spending hours each day on Facebook or WoW does seem a bit messed up, because it precludes dealing with the challenges of your real life. On a more philosophical note, however, one could argue that these multiple, realistic worlds presented by the internet allow people to choose which reality they prefer to live in.

As an example, think of the holodeck from Star Trek, which can perfectly imitate pretty much any environment, and any person, you want. If you had a choice, how much time would you spend in the holodeck each day?

Saturday, March 10, 2007

Memes and cultural evolution

I once read a book by Daniel Dennett titled Freedom Evolves. It was a neat look, by a philosopher, at free will vs. determinism, and how the two can be reconciled by the idea that free will can evolve. One part that he talked about extensively that ties in well with what we're reading in class is the idea of memes. Memes are basically just cultural genes, for example: brushing your teeth, Ugg boots, marriage. Anything that can be passed on over generations like a gene, but is not genetically inherited, is a meme. So all of these fads we talk about, the things the coolhunters are trying so hard to keep track of, are memes.

I think this is a very interesting way to look at fads because it lets us talk about them in the context of natural selection and evolution. Because, really, fads evolve in the same way genes do. Remember pogs? I barely do. That is an example of a meme that was evolutionarily unfit, so it eventually went extinct. Religion, however, is one of the oldest and most thriving memes out there. Natural selection, or I guess I’d have to call it cultural selection in the case of memes, is constantly weeding out these fads—only the fit survive. The difference between natural and cultural selection is that with cultural selection, we get to play God. I believe that this makes memes infinitely more complex than genes, but in a paradoxical sense. If we are the ones who determine what meme is fit, what fad is cool, then how come we have such trouble anticipating what will come in style in a few years?

This is an interesting question, and I guess it hints at the tremendous complexity of the social networks that rule over what is cool. I remember taking an online personality test, and it asked me “Do you adopt your friends’ slang more than they adopt yours?” I have always thought that question was impossible to answer—because the adoption of new slang is much more complicated than that. Few people out there are pure sources of slang. It’s not as simple as “there are the makers of slang, and the users of slang”. Picking up a catchphrase, just like any other fad, even at just the level of the individual, is a complex interaction: you first appraise the catchphrase in your own mind: “Do I myself think this is cool?” You then appraise the person saying it: “Is this person cool, and can I trust that a catchphrase they use would be cool?” Finally, if you’re creative, you might ask yourself: “Can I make it even cooler?” and there you can become both an acceptor and a creator of slang.

I have to wonder to what extent “coolhunting” is a science and to what extent some people just get lucky when guessing what will be cool. After all, if thousands of advertisers and marketing firms are blindly guessing at what will be a hit, then statistically, at least one of them has got to be right. Do coolhunters really know more than the rest of us? Or are we all equally blind? In this world of fads that we ourselves created, it is possible that something emerged that is nothing like we expected, and nothing that we can hope to understand.

Tuesday, March 6, 2007

Ethics in animal research... and ice cream

I went to a meeting of the neuroscience club with my friend tonight. We were talking with a fairly famous researcher of Parkinson’s disease (whose name I shouldn’t give out because apparently she’s always getting death threats from PETA) who works with primates in her research. We had a discussion with her on the ethics of using animals in research; the discussion took place in Lakefront on Langdon, and the club bought us all ice cream to eat during the discussion.

The cool thing I found in my hunt tonight was the format of the discussion we had. Why Lakefront on Langdon? It is loud and crowded, and it's hard to fit a group of 20 or so students in there (which is about how many we had). And the ice cream was an interesting (and nice) touch. Maybe this was the club's way of "cooling" a relatively serious discussion. By adding the ice cream, and by holding it in a place that is generally reserved for less formal affairs, we were making the discussion less formal, and less serious. Does making it less serious make it cool as well? I would say it does. I mean, Congress has the same sort of discussions that we had tonight, only the format is very different. We took advantage of the academic community. We were a group of young people interested in science as well as ethics, we had the opportunity to talk to a leading researcher in her field, and we weren't stiffs about it. That's pretty cool.

Friday, March 2, 2007

The emergence of viruses


I found the Duncan Watts reading "Epidemics and Failures" from Six Degrees to be very exciting. I read The Hot Zone this fall, and I was fascinated by it. I was taking a course at the time called Contemporary Population Problems for which we were required to write a paper on some population problem. Spurred on by my reading of The Hot Zone, and my wonder at how a disease like Ebola that has such horrible effects on the individual scale could have so little effect on the population as a whole, I decided to write my paper on infectious diseases, and under what conditions an infectious disease will be a population killer. This now appears pertinent to what we're discussing in class, so I'd like to talk a little about one point I discovered while researching for my paper.

I read an article written by R.M. Anderson titled "The Transmission Dynamics of Sexually Transmitted Diseases: The Behavioral Component" (I unfortunately cannot find the article online now—sorry, no link.) In the article, Anderson defines the basic reproductive rate R0 as the "average number of secondary infections generated by one primary case in a susceptible population of defined density" per unit time. He worked with a model that assumed that R0 is the product of the parameters β, D, and c, where β is the probability of transmission of the virus per partner per unit time, D is the duration of infectiousness, and c is the average number of sexual partners per unit time. Each one of these parameters must be examined in terms of both the virus and the population it affects—a population in which safer sex is practiced, or in which a higher proportion of people are monogamous, will obviously be less affected by the virus than other populations would be.

I'm assuming we're going to have to take this somewhere in class pertaining to cultural viruses or memes or fads or something. How does the Anderson article apply? First of all, it brings up the interesting idea that we can actually quantify these dynamics. Second of all, on a more conceptual note, it says that we have to look at the emergence of a cultural fad in the context of the state of society at the time. So this kind of brings us back to the idea of the kairos, as mentioned in the "Blogging as Social Action" reading. We can only understand why a cultural fad emerges if we understand the interaction of the fad with the specific culture.
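Since Anderson's point is that these dynamics can actually be quantified, here is a quick sketch of plugging numbers into his model. The numbers themselves are invented; the point is how the behavioral parameters move R0 across the epidemic threshold of 1:

```python
def R0(beta, D, c):
    """Anderson's model: basic reproductive rate as the product of
    transmission probability (beta), duration of infectiousness (D),
    and partner-change rate (c). All values below are invented."""
    return beta * D * c

# A hypothetical baseline population:
print(R0(beta=0.1, D=2.0, c=6.0))   # 1.2 -> each case causes more than
                                    # one new case, so the epidemic grows

# Same virus, but safer sex halves beta and more monogamy lowers c:
print(R0(beta=0.05, D=2.0, c=4.0))  # 0.4 -> the infection dies out
```

The interesting part is that β and c are behavioral, which is exactly why the same virus can be a population killer in one society and a non-event in another.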

Wednesday, February 28, 2007

Robin Hood Morality Test

I am disgustingly ill, so I will make this short and sweet and talk about my favorite online quiz. The Robin Hood Morality Test is short and I would greatly recommend taking it. Basically, it presents a hypothetical situation starring the characters Robin Hood, Maid Marion, Little John, and the Sheriff. It then asks you to rate how morally you think each of the characters acted. Given how simple the story is, I was amazed at how sharp the discrepancy was between my friends' answers. I wouldn't pay too much attention to what the test says your results mean--it didn't make much sense to me. But definitely take the test. It's fun. (By the way, my ordering, from most to least moral, was Marion, Little John, Robin Hood, and the Sheriff).

Sunday, February 25, 2007

Kicking the English language off its pedestal

I was struck by the use of concrete form in Katherine Hayles' Writing Machines. By concrete form, I mean the changes in font, text size, and the sort of "bubble text" she uses for emphasis. This concrete style is something you rarely see nowadays in texts written for adults, and I'm not sure how I feel about it. I remember how, as a kid, I used to play around with different fonts that I felt were appropriate for the sort of thing I was writing. For my more serious writing, I would stick to Times New Roman. For less serious things, I would use Comic Sans, which is always a good time. I would even occasionally use some impossible-to-read cursive format for things that I felt should be fancier. Eventually I grew out of this habit, as I am sure did all of my peers. My friend summed up the rules of writing well: "For any type of writing you want to do, you should only have to use two styles: your basic Times New Roman or Garamond font, and, when you want to emphasize a point, that same font italicized." This is what we've come to expect when we read a text, and we are thrown off if we see anything else. And I know this is starting to sound a lot like my last post, but it seems that people take this sort of writing less seriously. Once again, I'm basing this largely off of my friends' reactions to the Hayles text. One of them said that the changing fonts and text sizes made it difficult to read. This is a legitimate complaint. If your goal in writing is to make your point understood (which it obviously wasn't in Lexia to Perplexia, but that's a different topic entirely), then you should not write something that is offensive to the eyes. So is it worth it to use concrete form? Can concrete form help convey your point in ways that a single font cannot?

When I think of concrete form, the first thing that comes to mind is the sort of concrete poetry we learned about in middle school. For instance: “Easter Wings” by George Herbert. The concrete form of the poem—the decreasing and then increasing verse length—represents a pair of wings, and also the decreasing and then increasing “goodness” of the state of the narrator. Do we really NEED this? Does forming the lines in a pair of wings really help the point that much? It seems to slam the point in—we get it already.

I think I am a little biased here. I have come to believe, over the years, that in good writing, good ideas should be able to be expressed in a manner that is totally abstract. By this I mean that the actual words on the page are nothing more than a vehicle; the reader should, upon looking at a sentence, completely ignore the physical letters and immediately translate them into their meaning. Ideally, the writer should be able to directly transfer his or her ideas straight into the head of the reader. This is obviously not possible, so we require the words on the page to be the middle man. And if the writer does his job well, he should be able to use them as a perfect vehicle. Hence, the use of this funky text shows an inability to express oneself using solely the English language. Hence, the use of concrete form shows an incompetent writer. Ouch. Now, here's a fatal flaw in the argument I just made: it assumes that the English language, if properly used, can be a perfect vehicle for ideas. This is not true at all. It is full of ambiguities; two intelligent people who read the same thing may interpret it in different ways. Even the accepted use of italics in "pure" writing acknowledges the failure of written English to convey emphasis. So why put the English language on a pedestal? Why does good writing need to be pure of anything concrete? Why is the abstract, with its proven faults, so wonderful?

Concrete form, just like the English language, has a set of benefits and drawbacks. On the one hand, it appeals to those who learn best visually. On the other, it can shake people up if they are not used to it. If used well, concrete form can strengthen writing. However, it is not always used well. Is it worth it to propose the teaching of concrete form? I suppose the teaching of new media already does this to some extent. I think the most important thing is to realize that concrete form does indeed have strengths that pure abstract writing does not, but that, like any unusual form of writing, it needs to be used with care, lest you scare away your reader.

Saturday, February 17, 2007

Our generation and new media--experts, but amateurs

I wanted to expand a little more on my comment to John C's post about what college English should be. John made a very fair point when he said that English should not necessarily be about teaching networking because "doesn’t our generation, more than ever, already understand and take advantage of this fact?"

I also thought this when I first found out about the nature of this course. However, as I spoke to other students, I came to realize the worth of learning networking in an English course. When I first told my friends that I was taking a course on "rhetoric and network culture", and that many of our assignments would be web-based, they immediately assumed the class was a sham.

Why would this be? What makes writing in a blog less worthy than writing a paper? I believe that people of our generation are in an interesting situation in that these new media--blogging, forums, Wikipedia--began to flourish when we were teenagers. Hence we took advantage of them in the way teenagers would: we used them for social reasons. Every aspect of Web 2.0 was just there for our amusement. Blogs were for whining about the world. AIM was for chatting with friends. Forums were for geeking out, and so on... It seems only natural that once we went to college and grew out of the habit of these more childish things, we would come to view the media that made them possible as childish, too. We are biased, because we learned to use Web 2.0 specifically for frivolous purposes, to believe that Web 2.0 can only be used for frivolous purposes. However, I have seen throughout my time in this course that this is not the case. People in academia have studied these new forms of media extensively; they have also used them to present their findings, as in this website made by Daniel Anderson.

It is likely that if these new media were taught to children in K-12 education, along with more traditional English writing, these kids would come to view blogs in the same way they view papers, and they would have an expanded arsenal of ways to express themselves. They would not come to see these new media as any less inherently serious than old media.

Teaching children to use new media would also have the benefit of appealing to a wider range of children. Some people have a better visual memory; some have a better auditory memory. If schools incorporated new media as a way of teaching children, and also taught children how to use new media themselves, each child could find the niche that works best for how he or she learns.

My point here is that, although people of our generation are well versed in the technical aspects of Web 2.0--that is, we are experts at using the web to do just about anything we want in the social domain--we are sorely lacking when it comes to an understanding of the full extent to which new media can be used. They can be used to argue a point just as a pen and paper or a podium can. And a person who knows this, and knows how to use this, wields a tremendous amount of power over anyone who thinks that the art of writing and persuasion is static.

Tuesday, February 13, 2007

Quantum Teleportation to a Positive Test for Avian Flu in under 5 minutes


So I was in quantum mechanics the other day, and our professor was talking about quantum teleportation, which would allow one to "teleport" information over long distances. The efficiency of quantum teleportation would make it a much better "hot medium" than printed text or movies or anything McLuhan would have conceived of back in the '50s. However, this idea is perhaps a little complex, and the explanation would be far too lengthy for a blog post to do it justice. Yes, I believe this would be better suited to academic writing, as much as we all despise it. Yes, academic writing: I could write it in a manner similar to how I will write my chemistry lab report that is due next week, assuming I don't kill myself first by spilling sulfuric acid all over my skin. Speaking of things that will kill you dead, we learned in my Organic Chemistry lecture about the avian flu and the new drug to treat it. This drug, called Tamiflu, is currently being stockpiled in the homes of chemists and neurologists everywhere—think they know something we don't? But one shouldn't get too paranoid about these things: we learned in probability that for viruses such as the avian flu that have a low occurrence in the population, a positive test for the virus is far from conclusive. Even if the test is accurate to 99%, the odds that you actually have the disease, despite the fact that you tested positive, are quite low.
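The arithmetic behind that claim is worth seeing once. Here is the standard Bayes' theorem calculation; the prevalence figure is invented for the sake of the example, and "accurate to 99%" is read as 99% sensitivity and 99% specificity:

```python
# A 99%-accurate test for a rare virus, run through Bayes' theorem.
prevalence = 0.0001   # assumed: 1 infection per 10,000 people
sensitivity = 0.99    # P(test positive | infected)
specificity = 0.99    # P(test negative | healthy)

# Total probability of testing positive: true positives + false positives.
p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))

# Bayes' theorem: P(infected | positive test).
p_infected_given_positive = sensitivity * prevalence / p_positive
print(f"{p_infected_given_positive:.2%}")  # about 0.98%
```

Nearly all the positives come from the 1% error rate applied to the huge healthy population, which swamps the tiny infected one. So even after testing positive, you almost certainly don't have the virus.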

Saturday, February 10, 2007

Can Web 2.0 beat Big Media?

Over the summer, I read a great book called Can't Buy My Love by Jean Kilbourne. It's about the power of advertising, and the negative effects it has on people. The part of the book that struck me the most was the idea that advertising can control the information we get. Most of the media we see is controlled by five major companies. A particularly scary aspect of this is the news station side of things. For instance, News Corp, one of these big five media companies, owns FOX TV and FOX News. FOX runs advertisements for alcohol during its commercial breaks. This gives the alcohol companies that advertise with FOX a fair amount of power over what FOX programs, and they wouldn't be very happy if FOX News were to, say, take a scathing look at the effects alcohol has on the liver. The conflict of interest here is a little scary. The companies that advertise with these big media companies do have real control, at least to some extent, over the information we receive.

What we talked about in class last week pertains strongly to this. It is becoming easier and easier for people to put their voice online, and for people to actually read what these average people are saying. There are a lot of people out there who know a lot about something who, without tools such as Wikipedia and blogs, would have no possibility of sharing what they know with people outside their sphere of acquaintances. Now, people are starting to trust Wikipedia for information. If something exciting happens in Delaware, some guy from Delaware will post on his blog or on his favorite forum about it before you’d see it on the six o’clock news. This has the potential to put millions of people’s ideas and opinions out there for people to read, to accept or to reject as they see fit.

This diffusion of information—that what we see is now not only controlled by five major companies but by whoever has the notion of creating a website—is quite interesting. It certainly gives people more options as to where to get their information. It has definite disadvantages as well. The web contains so much stuff: how do you filter through all the crap to find something actually interesting—because out of the billions of websites out there, surely someone has something good to say. With this new technology, where pretty much any information you want is out there, the problem is no longer one of access to the information, it is one of finding a needle in a haystack.

Another possible concern is that impressionable minds might take too seriously what they read on the net. Some of the opinions on the internet are bound to be way more outrageous than anything you'd see on FOX News, and if enough people are stupid enough to believe some of the more outrageous things on the web, this could cause serious problems.

I believe that these are pretty good problems to have, at least compared to the alternative—too much information, even if most of it is junk, is better than information that is controlled by a handful of companies.

Hmm... now that I do further research on News Corp., I see that they also own MySpace, which effectively makes my title "Can Big Media beat Big Media?" Darn. I suppose this raises another interesting question: could it ever come to pass that the material we see on the internet becomes just as filtered as what we see on TV? We are the ones who make the blogs and the Wikipedia articles, but at the end of the day, it is some big media company that actually owns the forum or the blog provider. Could the content of the internet become controlled by these companies?

Tuesday, February 6, 2007

Oops

I already wrote my Thursday post, and it did not include a single image, or link for that matter. So, to make amends, here are my pictorial thoughts on what writing is. The first picture is the Swedish Embassy in Second Life, which Sweden actually bought. The second is a wonderful essay possibly written by some high-school-age kid named Jeremy Lavine. To see more of his works, check this out. I believe that the rest are pretty self-explanatory.

Monday, February 5, 2007

Can new media free the writer?

When I think about my experience with academic writing, the first thing that comes to mind is the experience I had today in my chemistry lab. I was writing up my in-lab procedure, in which I describe the steps I am taking in the experiment, any observations I have, etc… and I was making notes to myself in the margin of the page describing the reaction in an informal manner (saying things such as "I wish I were able to measure this more accurately" and "this distillation is going very slowly…"). We give our lab notebooks to our TA at the end of lab so she can look over them and give them a grade. When I handed mine in, the TA took points off for these informal comments made to myself. Now, my TA is in no way a horrible person—she is the best chemistry TA I've had, and she was just following the procedure set forth by the university. Despite having no one in particular besides myself and "the man" to blame, I was quite frustrated by the experience. These notes were obviously to myself, written in parentheses: it was obvious they were not meant to be taken as part of the procedure. However, my grade was still hurt because I showed the slightest hint that a human was writing this.

This extreme case is an exaggeration of the norm I've experienced so far in academic writing. Academic writing is tailored to be cold and formal and, at least in the case of my science classes, to show no voice. This idea of a voice is interesting. In most of the writing I've done for college, the "ideal" writing style is one in which you really cannot tell that a person wrote it. Ideally, the piece of writing should look like it was something squeezed out from the collection of knowledge in the world—the writer is no more than an instrument for taking what is already known and putting it on the page. In some cases, this is certainly appropriate. Does a chemistry write-up really need a voice? If the point of the writing is simply to advance knowledge, then what good is a voice? Well, we've already learned that it is some good. That is what rhetoric is: it is the method of using style, using a voice, to clarify an argument. In describing a confusing point, in an English analysis or in a chemical analysis, one can use his or her voice to explain the point in the manner that suits him or her best.

Is new media more accommodating to the writer’s voice? What are some examples of new media? Wikipedia, blogs, websites… I do believe that these allow at least slightly more freedom for the writer to express his or her voice—but that may be just because I don’t generally think of those things as being graded. Once put into a classroom, despite the possibility that anyone in the world can read these things, the student is, when it really comes down to it, still writing for one person and one person only: whoever gives out the grades. You can be conscious of the world as your audience when you are writing your blog, but the world does not hold the red pen. Can you ever have freedom in your writing if there is always one person who will define what “good writing” is? How does new media change this?

I am being a little harsh here. If nothing else, teaching new media equips students with the skills to engage in a new and different type of writing, one which is every day becoming more widespread and more accepted in academia and professional life. Just giving students this new tool in some way frees them, because once they leave college—the land of the red pen—and are given the choice to write in whichever way suits them, they have yet another mode of expression to choose from.

Saturday, February 3, 2007

Rhetoric in a new light


My thoughts on rhetoric have changed somewhat since my last post, largely influenced by class last Tuesday. As I said in my earlier post, I used to believe that the aspects of rhetoric defined by Herrick, other than argument, were of no use except to cloud the argument. I didn't realize the extent to which rhetorical style can be used to clarify the argument. The two clips from "Thank You For Smoking" made a distinction for me between "bad" rhetoric—which is used to cloud a bad argument—and "good" rhetoric—which can clarify a strong one.

The first clip was of Nick speaking to his son's class about his job. He used rhetoric in this scene to put a good spin on what he does for a living and on cigarettes themselves. I felt this was a perfect example of what I'm calling bad rhetoric. He is using his knowledge of language and style to manipulate people with less knowledge of rhetoric (in this case, children) into believing a false argument. This is the type of rhetoric that people criticize heavily, the kind people refer to as "mere rhetoric". "Mere rhetoric" is rhetoric with no strong argument to back it—it is simply the stylistic elements of rhetoric, cleverly crafted so as to make people forget that your actual point just sucks.

The second clip shown was Nick explaining his job to his son, and unlike the previous clip, here he tells the truth. Explaining the duties of a lobbyist to a kid is no easy task, however, so he uses an analogy about an argument over ice cream won by “mere rhetoric” to get his point across. What Nick does here is what I would call good rhetoric. He is explaining a difficult concept (the trickery of language he uses on a daily basis as a lobbyist) to his son, and he is using style (analogy here) in order to present his point in a way that his son can understand.

This is an example of the usefulness of the stylistic aspects of rhetoric: if you’re arguing a point worth arguing, then the point is obviously not that clear-cut; it probably requires a great deal of study on the part of whoever will be arguing it. Rhetoric allows the speaker to present, in the clearest manner possible, a complicated point to an audience who is less knowledgeable about the concept.

So my conclusion is that rhetoric may be used for good or evil. In the hands of someone with nothing good to say, it will only hide that fact. In the hands of someone with a strong point to get across, however, it can light the way.

Our Wikipedia discussion was also interesting. I'd have to say I'm very pro-Wikipedia, vandalism and all. After all, without Wikipedia vandalism, how would we have this wonder?

Wednesday, January 31, 2007

Primer


I just watched the movie Primer with a friend last night. It was the third time I've seen it, and despite that, and despite the fact that I spent one day at work over the summer carefully studying the lengthy Wikipedia page on it, I still do not completely understand it. It is without a doubt the most confusing movie I have ever seen, but I really enjoyed it. It is the only movie I know of where the concept of time travel is seriously examined. (Given its premise, I think it's only appropriate that the plot was ridiculously complex and nonlinear.) I appreciate the fact that the director, Shane Carruth, was a math major and an engineer. He knew enough to build what seems, at least from a non-expert point of view, like a realistic time machine. For instance, the device only allows you to travel back in time to the point when it was turned on. I would seriously recommend, if not watching the movie, at least checking out the Wikipedia page (there are no serious spoilers on it that you'd need to worry about). Note the nine different timelines, and the six or seven different versions of the main characters.

Thursday, January 25, 2007

The Spell of Rhetoric

I have discovered that I may indeed edit posts after the fact, so there is no reason not to start this early.

My impressions of the word "rhetoric" prior to reading the assigned reading are the following:

My limited experience with the word "rhetoric", mostly through books and movies, has cast it in a negative light. The immediate definition that comes to mind is "using language to manipulate logic to convince people of an argument".

Wikipedia's definition of rhetoric is the following: "the art or technique of persuasion through the use of oral language."

So I'm not too far off from what they have to say, except my word "manipulation" certainly adds a nice touch of cynicism.

The way I see it, a computer can use logic, but not rhetoric. For example: my friend plays nomic, a game that allows the players to create and change the rules of the game (including, for instance, the rules that determine how to win). He had been playing blognomic for a while, in which people post proposals for new rules in blog form. He then switched over to perlnomic, in which players propose and enact new rules using the scripting language Perl. In blognomic, the ambiguities of English let players manipulate the rules based on how they were phrased. This is impossible with perlnomic, because the programming language makes everything completely unambiguous.
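For illustration, here is what a perlnomic-style rule might look like, written in Python rather than Perl (the rule itself is hypothetical, not taken from any actual game):

```python
# A hypothetical nomic rule encoded as a function: the win condition
# is exactly what the code says, and nothing else.
def has_won(player):
    """A player wins upon reaching 100 points."""
    return player["points"] >= 100

print(has_won({"points": 100}))  # True -- no lawyering over what
                                 # "reaching 100 points" means
```

There is no room for a clever player to argue about phrasing; the code either evaluates to True or it doesn't.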

So, I suppose my point is that anything that tries to persuade by using things other than just pure logic is by nature unfair, and, well, illogical.

But I'll see what the reading has to say about this...

Ah, I see. Well, based mainly on the Herrick readings, I have reached the following conclusions:

I believe in the good of argument, one component of rhetoric. It brings up points the audience may not have considered, and it advances the overall knowledge of the people involved.

However, I believe that the other components of rhetoric as put forth by Herrick--appeals, arrangement, and aesthetics--have no value other than as an art form. Using them to persuade seems to me to be nothing but trickery. They are a glamour, a spell used to disguise the truth in the argument.

Any point about which one would wish to persuade falls into one of two categories. First, it could be a point on which there is an absolute truth, wherein there is a right decision and a wrong decision. In these cases, obviously logic should predominate, and the use of anything else to persuade simply clouds the issue.

The second case is one in which there is no absolute truth. In this case, it must be up to each person to decide for him or herself what is "right". Of course, information may still be useful to them in making this decision, so argument is still a fair tool in this case. By listening to the argument and making arguments themselves, each person may advance his or her view of what is right. However, in this case, relying on appeals, arrangement, and aesthetics to convince someone of a point does nothing but manipulate them away from what they actually believe in.

So, I have reached two main conclusions from the readings: first of all, rhetoric as a tool for persuasion is not all bad; however, its only redeeming quality for the purpose of persuasion is argument. The only use of aesthetics or arrangement that may actually help is in making the point clearer.

Second, although I believe that arrangement, appeals, and aesthetics have no place (other than promoting clarity) in persuasion, they are still a valid art form and are perhaps worth study. A great speech can be great not only for its persuasive nature, but also for its beauty in language, in which those three "A's" can help a great deal.

I am not hardcore enough to edit this in notepad.

I should learn HTML.