The warm-up readings helped me understand what visual perception is. I liked the first article because it related many of its ideas to categorization. The second article went into the debate between top-down and bottom-up processing. Both are interesting ideas that I think will come up again in the readings I still have to complete, since Gibson is mentioned in this article and is an author of a couple I still have to read.
The first part of this paper gives the background principles of optics. There is a lot of good information there that helped me understand the rest of the paper. Even though it is pretty basic, it was a nice refresher from my CGT 511 class, in which we talked about all of this.
The first real part of the paper talks about the sampling process in visual perception: how we as humans actually perceive everything we see. I liked how the author pointed out that no animal has fully panoramic vision, meaning no animal can perceive its whole environment at once. For example, if I'm staring at a computer screen writing this blog, I cannot see the person sneaking up behind me to scare the shit out of me. Pardon my French. I also liked the point that no person is aware of a "sequence" in the visual world, only the "scene," because I think this is true.
After reading this paper, the main point I found interesting was how this theory assumes that perception is direct, as opposed to indirect as in other theories. Here, indirect means that we never actually perceive what we are looking at; we only see what is stored in our retinal images. It makes sense, because the images we form in our minds aren't visible to others. The way the author illustrates this is that we look at something, it registers as a retinal image, and we interpret it; we don't have a little person inside our mind interpreting it for us. He compares this to people looking at a picture, and I know for certain there is no one inside my brain looking at a picture and telling me what I see.
This study's purpose was to test the usefulness of the 11 Gestalt principles in relation to educational visual screen design. The researchers redesigned a multimedia application used to help nursing students learn about wound management. They found, after the redesign, that all the principles were useful in helping the nursing students. This shows that those 11 principles are useful in design and should generally be followed when creating a new user interface or experience.
This article is about communication and how it can influence how interfaces are designed. It goes on to say that attention is the key thing to look at, which makes sense to me. If something does not grab my attention, I usually will not use it, or if I do, I will not use it for long. The article talks about how we, as humans, use attentional signals in collaboration, and how these attentional signals shape the way people design interfaces and systems. I liked how the article related the way "we" interpret information "under uncertainty" to Sherlock Holmes. It makes sense: we take into account everything around us (time, interaction, sensors, etc.) and then decide where we need to focus our attention.
As you can probably tell from the title, this article is about visual attention in video games, 3D games to be more specific. The researchers had three hypotheses, expecting that both top-down and bottom-up processing would occur and that eye movement patterns would differ across game genres. They tested these hypotheses and found that top-down and bottom-up processing both occurred during gameplay. They found that "bottom-up visual features, including color contrast and movement, subconsciously trigger the visual attention process, thus verifying the bottom-up visual attention theory." They also found that "since action-adventure games [...] are highly goal oriented, top-down visual features control players' attention more than bottom-up visual features." So both types of processing occur when playing games. I can see how this would be true, as a lot goes into video games. The storyline is one of the biggest things, and I would assume that a highly goal-oriented game would involve more top-down processing, since a person has to figure things out from their surroundings.
Lastly, in regards to eye movement patterns, the researchers found that in first-person shooters, the gamer pays more attention (or only pays attention) to the center of the screen. This seems to be the exact opposite for action-adventure games. I never really thought about it until now, but that makes sense. When I play video games, which is rare, it's usually first-person shooters, and I can say that I usually only look at the center of the screen because that's where my focus is. It doesn't make sense to look anywhere else (you will usually die if you do).
The first thing I thought when reading this article, besides how they spelled modeling wrong in the title, was, "what is saliency?" Well, I did a little digging, and saliency is how noticeable an object is in relation to the things around it. After getting that down, I read the rest of the article. The authors go over five points, only a few of which I will mention. First, saliency is computed from low-level features and depends on how an object's contrast stands out in the context it's presented in. Second, a person's attention will not shift from one thing to another until the former is "disabled." So basically, if I'm watching TV, my attention won't switch away from it until it is turned off or muted. Lastly, the control of attentional deployment is intimately related to scene understanding and object recognition: if I don't understand the scene or the objects within it, I will not focus on it, and it will never have gained my attention.
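To make that first point concrete, here's a little toy sketch of contrast-based saliency that I put together myself (it's my own illustration of the idea, not code from the article). It scores each cell of a tiny grayscale grid by how much its intensity differs from the average of its neighbors, so a bright spot on a dark background "pops out":

```python
# Toy contrast-based saliency sketch (my own illustration, not the article's model).
# Saliency of a cell = |its intensity - average intensity of its neighbors|.

def saliency_map(grid):
    """Return a same-sized grid of contrast-based saliency scores."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Gather the up-to-8 surrounding cells, clipped at the borders.
            neighbors = [
                grid[nr][nc]
                for nr in range(max(0, r - 1), min(rows, r + 2))
                for nc in range(max(0, c - 1), min(cols, c + 2))
                if (nr, nc) != (r, c)
            ]
            out[r][c] = abs(grid[r][c] - sum(neighbors) / len(neighbors))
    return out

# A single bright pixel on a dark background: the high-contrast cell
# ends up with the highest saliency score.
image = [
    [0, 0, 0, 0],
    [0, 9, 0, 0],
    [0, 0, 0, 0],
]
sal = saliency_map(image)
```

Obviously real saliency models use more features than raw intensity contrast (color, orientation, motion), but even this toy version shows how low-level contrast alone can pick out what grabs attention.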
So I just read the part about how we don't have to read this for the actual readings, but I found it interesting, so I guess I will keep it up. My bad on these getting so long; there's just so much to read.