OVERVIEW: Nathan needed a website to showcase his resume, headshots, demo reel and reviews. He had no previous site to build off, so we started the project from scratch and tried to create a unique destination to display his work.
A review of several sites belonging to prominent stage actors left much to be desired. Although many feature information about the artist and links to past performances, most lack any attention to design, layout and readability. Our goal was to create a space that gives prominence to Nathan’s headshot and other performance images, since casting directors, agents and managers are his primary audience.
IMPLEMENTATION: We used WordPress as the content management system. Visitors will notice that the homepage features a Flash slideshow displaying a gallery of rotating images of Nathan. The transparent navigation bar on the left can be minimized to view the whole image. We also added pages to display his resume, reviews, voiceover reel and a customized contact form. While the site is simple, it quickly communicates Nathan’s skills and makes it easy for visitors to find the information they’re looking for.
Gotham Guide is New York’s first multimedia tour using QR code technology to add a contextual layer of information on top of Manhattan’s city streets. Anyone armed with a QR-reader-enabled mobile phone can snap a picture of a Gotham Guide code and learn more about the location where they stand. This project includes a custom QR code reader (J2ME), a website (gothamguide.com) and coded logos designed and placed strategically throughout the city.
The history of Manhattan is fascinating, almost mythical. I remember when I first moved here, I would walk the streets for hours. But as much as I loved its aesthetics, I really had no idea what had happened where. Where was the steakhouse in front of which Lucky Luciano was killed? Where was Andy Warhol’s Factory, Allen Ginsberg’s apartment, or the first venue where the Grateful Dead performed on the East Coast? I KNEW they were here, but not exactly WHERE.
I conducted research into the history of QR codes and learned that although the technology is almost a decade old, it still hasn’t gained much popularity here in the USA. However, the technology is leveraged in Europe and Asia for marketing and advertising, food labeling, SMS and more.
Because QR codes are more recognizable by visitors from other countries (who also have mobile phones equipped with the software to decode them), my primary audience for this project is tourists from Europe and Asia. Many of them will appreciate the opportunity to learn about New York as they wander around and encounter the codes unexpectedly. Others might be interested in obtaining a map revealing the locations of all the codes.
There are also early adopters of technology who will know what QR codes are on sight and decode them for the sheer joy of finding such a code in the wild.
If they don’t already have the software for their phones, users will be able to download a customized application, built by Aram Chang and me, from the Gotham Guide website, where they will also obtain a map of New York with the locations of the QR codes highlighted.
When users encounter a code, they can snap a picture of it with their phones, which takes them to a mobile website where they can listen to audio, view a video clip or read a snippet from a webpage providing contextual information about their current location and the buildings surrounding them.
Since this was created in concert with several classes, there are a few pieces to the user experience. There is a website where users can download the maps with QR code locations and the QR code reader for their phones.
There is also a mobile phone application for phones on the J2ME platform and links to programs for other phones.
There are also QR Codes with the Gotham Guide logo that have been placed around the city linking to original video content from one of New York’s premier tour guides, Luke Miller from Real New York Tours.
What’s exciting to me about this program is the opportunity for people to serendipitously discover new things about the buildings they pass each day and for tourists to use advances in mobile technology to interact with the city in a brand new way.
Thanks to Nick Bilton and Shawn van Every for their assistance in bringing this project to fruition. Also special thanks to Luke Miller and Real New York Tours for providing video content and valuable user feedback.
Urban Paranoia is a street game in which teams score points by spotting the tags of players on opposing teams and relaying them to their “messenger,” who sends those tags via SMS to the referee, all while protecting the messenger from getting caught in the process!
A 3-digit number tag for each player
One cellphone per team
One cellphone for the referee
Play Area/Boundaries: The referee will review the boundaries for the game with all of the teams before the round begins.
Team Setup: There will be two to three teams, each with 5-6 players.
Referee: keeps track of the score and communicates with team messengers during game play via SMS.
Text-messenger: responsible for relaying the numbers from tagged players on opposing teams via SMS to the referee to score points.
Players: responsible for spotting the tags on opposing teams and relaying them to the messenger, all without giving away their own numbers and while protecting the identity of the messenger.
There is no physical contact between players at any time.
No players may use a cell phone or other electronic communication device except the designated messenger.
Numbers, or “tags,” must be worn so that the number is clearly visible on the player’s back.
Everyone must stay OUTSIDE during game play. You are not allowed to enter any buildings.
Each team will be given a bag containing the jerseys at the beginning of the game and assigned a starting location.
Once the team reaches their respective starting locations, they will vote on a messenger.
The messenger will then send an SMS to the referee with his/her tag number to indicate s/he is the messenger for the round.
Once all teams have checked in, the referee will send an SMS to the messengers informing them that game play has started.
Each round is 15 minutes long.
Once a player spies the tag of a player on an opposing team, someone on the player’s team must pass the tag number to the messenger, who will send the tag via SMS to the referee to earn a point.
Scoring and Winning:
Teams will receive one point for each unique tag from an opposing team sent via SMS from the messenger to the referee. The team with the most points at the end of the round wins.
If a team is able to identify an opposing team’s messenger and SMS the number to the referee, that team automatically wins the round regardless of how many points the other teams have.
If a team submits a number not being worn by an opposing player (this includes numbers being worn by their own players), they will lose one point.
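The scoring rules above can be sketched as the referee's bookkeeping for one round. The data structures here (rosters, submissions, messenger tags) are hypothetical stand-ins, not part of the actual game kit:

```python
# Sketch of the referee's end-of-round scoring, following the rules above.
# rosters: {team: set of tags worn by that team's players}
# submissions: {team: list of tags that team's messenger texted in}
# messengers: {team: that team's messenger tag}

def score_round(rosters, submissions, messengers):
    """Return (winning_team, scores) for one round."""
    scores = {team: 0 for team in rosters}
    for team, tags in submissions.items():
        opponents = set().union(*(r for t, r in rosters.items() if t != team))
        # Identifying an opposing messenger wins the round outright.
        for other, m_tag in messengers.items():
            if other != team and m_tag in tags:
                return team, scores
        seen = set()
        for tag in tags:
            if tag in opponents and tag not in seen:
                scores[team] += 1    # one point per unique opposing tag
                seen.add(tag)
            elif tag not in opponents:
                scores[team] -= 1    # penalty for an invalid number
    winner = max(scores, key=scores.get)
    return winner, scores
```

The messenger check runs before normal scoring because rule precedence matters: spotting a messenger ends the round regardless of point totals.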
Game Designed by Aaron Uhrmacher, Asli Sevinc, David Golan, Thomas Robertson
Instead of displaying characters on the screen as the user types, the Chromatic keyboard displays keystrokes as RGB pixels in the shape of squares.
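Under the hood this amounts to a fixed letter-to-color table. Here is a minimal sketch of the idea, with a stand-in formula in place of the actual hard-coded palette:

```python
# Sketch of the Chromatic keyboard's letter-to-color mapping. The real
# palette was hard-coded; the arithmetic below is just a stand-in that
# derives a stable RGB triple from each letter.

def letter_to_rgb(ch):
    """Map a letter to a deterministic RGB square color."""
    i = ord(ch.lower()) - ord('a')   # 0..25 for a..z
    return ((i * 41) % 256, (i * 97) % 256, (i * 151) % 256)

def encode_message(text):
    """Turn typed text into the list of colored squares to draw."""
    return [letter_to_rgb(c) for c in text if c.isalpha()]

print(encode_message("hello"))
```

Because the mapping is deterministic, the same message always produces the same sequence of squares, which is what makes the output read like an encrypted message.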
The colors were each hard-coded to specific letters, which is what I wanted. I liked the look of these encrypted messages, but I wanted to have some fun and see what other types of patterns I could create. Next I typed with only two different character strokes in a random pattern:
Since I’m left handed, I decided to try an experiment and just type with only my left hand as fast and as consistently as I could. Here’s the resulting pattern:
Next I tried with the right hand:
Since I’m pretty bad at keeping rhythm, I wanted to try to keep the beat while listening to a song and typing with only two keys. I mixed the colors up a bit just to make it interesting:
Finally, I made this sketch where I tried to create a weave pattern by timing my keystrokes to varied rhythms:
Horiball (so named because the ball is bounced horizontally against the walls).
Horiball Play Testing
Horiball is a game for 2 teams of 2-3 players, played on a walled court with a Game Ball (GB) and 2 Goals (Balance Balls) positioned in opposite corners. To score, the team with possession must hit the opposing team’s Goal with the GB, either directly (1 pt) or indirectly (3 pts). All GB movements (apart from attempts to score) must bounce off the court’s walls.
Playing Horiball with our Big Games Class
GB can be passed (multiple times) or dribbled (once per player contact) off the wall
Player holding GB cannot move around the court (pivoting allowed)
Players without GB can move freely
All players must stay outside a 5ft arc of either Goal during gameplay
Anytime the GB hits the floor (aside from serve) and doesn’t score, possession changes
On missed shots, possession changes, and the offensive team starts within the arc of its own Goal
Game begins with a Serve (serving team determined by Rock, Paper, Scissors), and scoring team Serves after each successful goal
Serving team starts behind court’s back line – receiving team starts anywhere on court
GB is thrown (or hit) against back wall
Serving team cannot cross the court’s back line until GB contacts back wall
GB must land in front of mid-line
Excessive physical contact is forbidden
Swatting the GB out of player’s hands also forbidden
All penalties result in a turnover
Absent a certified referee, all disputes are settled by Rock, Paper, Scissors
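The scoring and serve rotation above reduce to a small piece of bookkeeping. A minimal sketch (team names and the opening server are placeholder assumptions):

```python
# Sketch of Horiball scorekeeping: a direct hit on the opposing Goal is
# worth 1 point, an indirect (off-the-wall) hit is worth 3, and the
# scoring team serves after each successful goal.

class HoriballGame:
    def __init__(self, teams=('Red', 'Blue')):
        self.scores = {t: 0 for t in teams}
        self.server = teams[0]   # assume this team won Rock, Paper, Scissors

    def goal(self, team, indirect):
        """Record a goal; indirect=True means the GB came off a wall."""
        self.scores[team] += 3 if indirect else 1
        self.server = team       # scoring team serves next
```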
Created by: Aaron Uhrmacher, Cynthia Hilmoe, Julius Schaffer, Eyal Ohana and Syed Salahuddin
The object of the game is to hit the designated target with your laser before the other team.
The player holding the laser cannot leave the starting room.
The target cannot be moved once the game has started.
There must be one referee next to the target for judging.
The group is split into 2 teams
Each team receives a laser pointer and 5 mirrors to reflect the beam to the target.
There are 3 rounds of increasing difficulty.
The team with the most points at the end wins.
Whichever team hits the target first receives points for the round; the losing team receives zero.
The winning team receives one point for each surface the beam rebounded off.
Whoever has the most points after 3 rounds wins the game.
In the case of a tie, an additional round is added with a target selected by the judge.
Created by: Amit Snyderman, Aaron Uhrmacher, Matt Richard and Julius Schaffer
Once we played the game, we realized that there were a couple of changes that needed to be made. First, players should be able to attempt to reach the targets in any order. Having 20 people trying to hit the same target made for chaos and a bit of danger. Also, there need to be rules for intentionally interfering with the other team. Is that a defensive strategy, or should it not be allowed?
Voice Activated Number Counting Elevator (V.A.N.C.E.)
The interactive elevator counts the people entering and exiting the lift, then creates a customized audio interaction based on the number of people in the elevator.
The interactive elevator was created using an Arduino and the Processing environment.
When we get into an elevator, we stop communicating with each other. We stand, quietly waiting to reach our respective floors. Or if we’re with friends, we often continue our conversation at a louder than necessary volume while everyone else stands around trying to avoid eye contact. The experience isn’t interactive or engaging, and it’s certainly awkward. Obviously, it’s ripe for innovation!
Observations & Research
We began by sketching out what the elevator model might look like, working off the premise that our model would be based on the ITP elevator bank.
After riding those elevators up and down several times, we gathered some important observations:
The time between floors is 4 seconds, so our audio cues could not be longer than that;
A shared experience, like holding a door for someone, easily triggered an interaction;
We faced several challenges. For one, we had to count the people entering and exiting the elevator. We decided the most accurate way to do so would be to use laser break beams, but that proved much more difficult than we anticipated.
Our choice of materials changed many times over several days.
Originally, we were going to build the entire frame for the elevator. But after some consideration and feedback, we decided that wasn’t necessary. Instead, we built the sensor area and taped off the area of the elevator for the prototype. This made the collection of materials significantly easier.
We were also going to use IR sensors to track whether people were entering or exiting the elevator. Again, after some research, it seemed to make more sense to use lasers with photo cells since they would be more accurate. Of course, that meant we would also have to focus the lasers to ensure the break beam functioned correctly.
Another challenge was trying to account for whether the door was open or closed in the simulation. We decided to add a force sensor that would indicate the state of the door and communicate serially with Processing.
That probably wasn’t the best choice. Even after all the code was working, we had a problem maintaining consistent pressure to activate it. We liked the idea of a force sensor because it felt like the closest way to simulate a door closing. However, we’ve since realized that using two pieces of metal would have been much easier and much more reliable.
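However it is sensed, the door-state check amounts to thresholding a reading. A minimal sketch with hysteresis, one way to keep the inconsistent pressure from flip-flopping the state (both threshold values are hypothetical and would need tuning per sensor):

```python
# Sketch of door-state detection from a force sensor's raw analog value
# (0-1023). Two thresholds (hysteresis) prevent jittery readings near a
# single cutoff from rapidly toggling the door state.

class DoorSensor:
    def __init__(self, close_at=600, open_at=450):
        self.close_at = close_at   # hypothetical "pressed shut" level
        self.open_at = open_at     # hypothetical "released" level
        self.closed = False

    def update(self, reading):
        """Feed in one raw reading; return True if the door is closed."""
        if not self.closed and reading >= self.close_at:
            self.closed = True
        elif self.closed and reading <= self.open_at:
            self.closed = False
        return self.closed
```

With the gap between the two thresholds, a reading that wobbles around 500 keeps whatever state the door was already in.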
Based on our demonstration in class, we realized that there were some issues with tripping the counter. It was unclear whether this was a code issue or the lasers not remaining lined up with the photocells. We bought flashlights, but the beams weren’t able to focus enough to be useful. Instead, we put scotch tape over the photocells in order to capture the laser beam light through a bit of diffusion. This worked really well.
What we ultimately did was create a case for the lasers so that we had better control of their direction. We made it out of Styrofoam with two pieces of wood on either side. The Styrofoam was much better than the wood, since we could shift the lasers around a bit as necessary.
We tested the prototype on two different doors, one being the PComp class room and the other the door to the Japanese room. In both instances, we taped off an area to represent the elevator’s walls and used the force sensor attached to the door to simulate the door closing.
People brought up some interesting ideas in class, including how to account for people on crutches or in wheelchairs. While this would be important for a true model, we decided it would be too complicated for this prototype.
The other thing we realized during the demo that we hadn’t accounted for was that each person walking in will trip both lasers twice, making four different beam states per entrant. That didn’t work with our code, so instead of rewriting it, we decided to experiment with moving the lasers to chest level so that each beam is only broken once.
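With the beams at chest level, each crossing produces exactly two events, one per beam, and the order of the breaks gives the direction of travel. A minimal sketch of that counting logic (beam names and placement are assumptions):

```python
# Sketch of the people-counting logic for two chest-height break beams:
# beam 'A' on the hallway side, beam 'B' on the elevator side. Whichever
# beam breaks first tells us whether someone is entering or exiting.

class BeamCounter:
    def __init__(self):
        self.count = 0      # people currently in the elevator
        self.first = None   # which beam broke first in this crossing

    def beam_broken(self, beam):
        """Call with 'A' or 'B' when that beam's photocell goes dark."""
        if self.first is None:
            self.first = beam
        elif beam != self.first:
            # Second beam broken: the crossing direction is now known.
            self.count += 1 if self.first == 'A' else -1
            self.count = max(self.count, 0)   # guard against spurious exits
            self.first = None

counter = BeamCounter()
for beam in ['A', 'B']:     # one person walks in
    counter.beam_broken(beam)
print(counter.count)        # 1
```

Repeated breaks of the same beam (someone lingering in the doorway) are ignored until the opposite beam fires, which keeps one person from being counted twice.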
Here’s a demonstration of the interactive elevator in action:
Not only was it a lot of fun taking this from a germ of an idea to a working prototype, but we all learned a tremendous amount about all of the different elements required to create an experiential product. It wasn’t enough to think only about the construction or the code. We had to observe behavior, consider how we wanted it to change and then make sure our project did just that.
As the character in the video tries to get to the subway, he reaches three places where he can choose between going left or right. He holds up a brightly colored ball in each hand, each representing one of the choices. The user then holds a ball up to the video camera to designate which way the character should go. The camera reads the color of the ball and the story continues accordingly.
If the user chooses the wrong direction, the character gets beaten up and the game restarts.
I started by filming all the video. I had the actor walk to the subway and when he got to three separate points, he held up the two balls. Then we had to film him going the right way and going the wrong way. When he went the wrong way, I had two actors beat him up. It took about an hour to film.
Once I had the video, there were two different pieces of code that I needed. The first was to jump between video segments. After reviewing the available code online, I found two options: I could either have different videos start and stop, or I could make one long video containing all of the choices and program Processing to jump to different points. The latter seemed to make more sense.
The next piece of code I needed was for the video camera on my computer (or an external video camera) to register the different color balls and jump to the different parts of the video based on it. This was the most frustrating and most difficult to figure out, but Shawn was a huge help during his office hours.
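The detection step boils down to comparing the frame's average color channels. A minimal sketch of that idea, assuming hypothetical ball colors (red for one direction, blue for the other) and an arbitrary margin; the real Processing sketch would read pixels from the camera instead:

```python
# Sketch of the ball-detection step: average the frame's pixels and
# decide which brightly colored ball dominates. The ball colors
# (red = left, blue = right) and the margin are assumptions.

def classify_frame(pixels):
    """pixels: list of (r, g, b) tuples from one camera frame.
    Returns 'left', 'right', or None if neither color dominates."""
    n = len(pixels)
    avg_r = sum(p[0] for p in pixels) / n
    avg_b = sum(p[2] for p in pixels) / n
    margin = 40   # guards against ambient color shifts
    if avg_r > avg_b + margin:
        return 'left'
    if avg_b > avg_r + margin:
        return 'right'
    return None
```

The margin is exactly where lighting conditions bite: under dimmer light the channel averages compress toward each other, so a fixed margin that works in one room can fail in another.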
Once I had the code, I had to test it. The hardest part was dealing with different lighting conditions, which affected how bright the pixels appeared on the screen.