Processing Project – First ideas

After looking at the different ways I can use sound and video in Processing, I decided to focus mainly on developing and creating Processing pieces using video rather than sound. This is because the space provided for our interactive pieces is normally a busy environment where many people will be creating sound, so audio would not work very well. With video, on the other hand, nothing but visuals are produced, so the user can easily identify what they're doing and what the interactive piece actually does.


Over the Christmas holidays, I looked at various ways to use a webcam as the basis for my Processing piece. One of the main drawbacks of using a webcam in this project is that I've never used one in Processing before. This means I will have to research and learn the code for simple webcam functions and build from there. I chose to develop my idea around the webcam because the user can see themselves and instantly know the piece is interacting with them. I looked into how easy it is to get webcam input into Processing and, surprisingly, it is far simpler than I first expected; I thought it would be a difficult process. My first idea was simply to play around with the webcam to begin with, so my first Processing sketch just uses a simple webcam input. The code I used for this was:


import processing.video.*;

Capture video;

void setup() {
  size(320, 240);
  video = new Capture(this, 320, 240);
  video.start(); // begin capturing frames from the camera
}

void captureEvent(Capture video) {
  video.read(); // read the latest frame when one is available
}

void draw() {
  image(video, 0, 0);
}

The outcome of this code is:

As you can see, the sketch has produced a simple webcam feed. Now that I know how to input a webcam, I thought I should play around with the different functions that Processing has to offer.

With Processing, there is a lot that I can change with the webcam. Face recognition is one of these functions. For face recognition, a library has to be downloaded so that Processing can recognise the code used to make it work. This library is OpenCV. I looked into using the OpenCV library as the basis of my Processing piece and ventured into the possibility of using face recognition in my project. With this in mind, I made a simple face recognition sketch. The code I used for this was:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // load the face classifier
  video.start();
}

void draw() {
  scale(2); // the video is captured at half size, so draw it doubled up
  opencv.loadImage(video);
  image(video, 0, 0);

  noFill();
  stroke(0, 255, 0);
  Rectangle[] faces = opencv.detect();

  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height);
  }
}

void captureEvent(Capture c) {
  c.read();
}

The outcome of this code was:

Screen Shot 2015-01-25 at 19.36.39

As you can see, the code has recognised my face as an object. I thought this could be a basis for my project; however, if I wanted to personalise it and make it relate to a media concept, as the brief suggests, I would have to research this area of Processing in depth, and there is a risk that I wouldn't get it right. It's not that I'm not confident in my ability; it's that I would rather do something which relates to a media concept I am interested in, and this type of sketch is difficult to change. That said, I haven't decided on my media concept as of yet, because I am still playing around with the different interactive possibilities the webcam brings in Processing. Another webcam function I wanted to look at was blob detection.

Blob Detection

Although I didn't want to use face recognition as my idea, I still wanted to use the webcam as the basis for my Processing project. When I went to the Science Museum, one of the interactive displays detected whoever was in front of the screen and projected them onto it. This sparked an idea: I looked into using the same kind of detection with parts of the body. I wanted it to detect only people's hands, relating back to how people used to draw with their hands when they were younger. However, I drastically underestimated how difficult this would actually be. I tested out blob detection using just the example provided in the library to gauge what it would turn out like. This was the result:

Screen Shot 2015-01-25 at 21.02.39

As you can see, the blob detection is detecting blobs (which it should); however, it isn't detecting my hands. That is quite an ignorant thing to say, as obviously it won't detect just hands when I haven't coded anything and it is only doing basic blob detection. I enquired about blob detection and whether it was viable for my project, and I was advised to steer away from it and from my idea of detecting hands. This is because the code required to make this happen is extremely complex and I am simply not at that level in Processing to accomplish it. If I had more time with Processing, I would pursue this and make it my final idea; however, I don't have that luxury, so I must look in a different area for my idea.
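To give a flavour of what even the first step of blob detection involves, here is a minimal thresholding sketch of my own (not the library example I ran, just an illustration of the idea): every pixel brighter than a chosen threshold is marked white as a potential blob pixel, and everything else black. Grouping those white pixels into separate, tracked blobs is where the real complexity starts.

```processing
import processing.video.*;

Capture video;

void setup() {
  size(320, 240);
  video = new Capture(this, 320, 240);
  video.start();
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  video.loadPixels();
  loadPixels();
  // Mark every pixel brighter than the threshold as part of a potential blob
  for (int i = 0; i < video.pixels.length; i++) {
    pixels[i] = (brightness(video.pixels[i]) > 200) ? color(255) : color(0);
  }
  updatePixels();
}
```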


One of my peers suggested using Kinect, as it already detects people and objects in the camera's view. This would resolve one of the issues I had with blob detection, as I didn't know how to make it detect single objects. I hadn't properly looked into Kinect as a viable option for my Processing piece, and I didn't even realise I could use it. So my next course of action was to borrow a Kinect camera from IT services. I also needed to download a library which makes Processing compatible with Kinect: SimpleOpenNI. However, when I tried to run the library with one of the Kinect examples, an error message appeared:


This can in no way be good. I looked up some of the error messages that were appearing, and it turned out I didn't have the right Kinect camera. The right Kinect is the older model rather than the newer one I tried to use, and IT services didn't have the old Kinect cameras. This meant I couldn't use Kinect as the basis for my Processing piece.
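For reference, the kind of minimal SimpleOpenNI sketch I was trying to run looks roughly like this (reconstructed from the library's standard depth-image example, from memory, so treat it as a sketch rather than the exact code). With the newer Kinect it fails at the camera initialisation stage.

```processing
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth(); // this is where the incompatible camera fails
}

void draw() {
  context.update();                      // grab new data from the Kinect
  image(context.depthImage(), 0, 0);     // show the depth map
}
```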

What’s next?

Now that I have scrutinised a few ideas and tested whether they are viable as my final idea, I know for certain I am doing webcam-based interaction. It is somewhat disheartening that these ideas aren't viable for the reasons stated above, but it is also a positive that I have found these issues now rather than following one of them as the final idea and discovering the problems nearer the final deadline. I feel that not having chosen a specific media concept yet has hindered my progress. My next course of action is to finalise my media concept and then base my idea around it.

