Processing, the software that is a synthesis of art and technology

Twenty years after conceiving the software that made learning how to code accessible to people who use visual language, the creators outline its history and trace its next steps.

This article was originally published in Domus 1061, October 2021.

When we began working on Processing in 2001, our intention was to bring ideas and technologies out of MIT and into the wider world. One idea was to develop a synthesis of graphic design with computer science, combining the visual principles of design with ways of thinking about systems from computer science. We also wanted to introduce a method of working with code where things are figured out during the process of writing the software. We called this “sketching” with code. A third objective was to share what we had learnt about how to teach programming to designers – to convey this knowledge beyond the people we could instruct directly through our workshops and classrooms. We wanted to spread this as far as we could. This was all made possible by a set of programming tools we conceived specifically for making pictures, choreographing animation and producing interactive work. Over many years, we refined a set of elements for creating visual design with code. We didn’t start this work from scratch, either; we built on top of existing ideas and code from people who worked in this area before us.

Casey Reas, 2010, image from Process Compendium, 2004-2010

The origins

The origins of Processing at the MIT Media Lab go back to the Visible Language Workshop (VLW). Founded in 1975, the VLW became one of the lab’s founding research groups in 1985 and continued until 1994, when its director, Muriel Cooper, passed away. Processing emerged directly from the Aesthetics + Computation Group (ACG), a research unit set up at the Media Lab by John Maeda in 1996. Maeda’s work at the lab continued to synthesise visual design exploration with emerging software technologies. Within the ACG, Maeda initiated the Design By Numbers (DBN) programming platform, which was released in 1999.

Following its release, he brought the two of us into the project to help maintain and extend it. Many aspects of Processing were modelled after DBN, which also integrated a code editor with a language. DBN was a minimal system: the canvas was always 100 by 100 pixels and only grey values could be used – there was no colour. These constraints, as well as familiar code elements such as paper and pen, made DBN easy to learn. Our experience with DBN kindled the ambition to begin Processing. We started by extending DBN to include colour and other features, but we soon realised that those limitations were the essence of the platform and that it shouldn’t be expanded. Our aim was to devise a system that was as easy to use as DBN, but with a bridge to making more ambitious work. We wanted to allow people to work in colour, at large sizes, to create 3D graphics, and more. Simple Processing sketches are almost as straightforward as DBN sketches, but Processing scales up – it has a “low floor” and a “high ceiling”.

Processing essentials

A Processing program is called a “sketch”. This is more than a change in nomenclature; it’s a different approach to coding. The more traditional method is to resolve the entire plan for the software before the first line of code is written. This approach can be practical for well-defined domains, but when the goal is exploration and invention, it prematurely cuts off possible outcomes. Through sketching with code, unexpected paths are discovered and followed. Because Processing is made for creating pictures, the language includes elements specifically for working with form, colour, geometry, images, etc. The key principle is to facilitate the realisation of simple visual things, but also to allow more experienced programmers to do complicated things within the same language.
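To give a sense of how little is needed to get something on screen, here is a minimal sketch in that spirit. It uses only core language elements – the `setup()` and `draw()` functions and the built-in `mouseX` and `mouseY` variables – to draw a circle that follows the cursor; the particular colours and sizes are arbitrary choices for illustration.

```processing
void setup() {
  size(400, 400);                  // open a 400 × 400 pixel canvas
}

void draw() {
  background(255);                 // clear to white every frame
  fill(0, 102, 153);               // a blue fill colour
  ellipse(mouseX, mouseY, 60, 60); // circle follows the cursor
}
```

Because `draw()` runs continuously, changing one line and re-running the sketch gives immediate visual feedback – the working rhythm that makes “sketching” with code possible.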

The “environment” is the software application within which sketches are written. In the case of Processing it is called the Processing Development Environment (PDE), and its primary function is to make it quick and easy to start writing sketches. The PDE’s secondary purpose is to serve as a sketchbook: a place to save sketches and provide easy access to open and run them. The PDE can also open and run example sketches and links directly to the Reference. Although the PDE was created for beginners, not everyone uses it for writing sketches. The “community” is the group of people who write Processing sketches and share work and code with each other.
The most essential community contributions to Processing are the libraries. There are over 100 Processing libraries that extend the software in different directions beyond the core. Organised in categories ranging from Data to Simulation to Video & Vision, these libraries are independent pieces of software that integrate into the Processing language. Most libraries are developed by independent community members and the source code and examples are made available for all to use and learn from.
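As one illustration of how a library integrates into the language, the sketch below uses the Video library – from the Video &amp; Vision category – to display live frames from a camera. The general pattern is an `import` statement followed by the library’s classes used alongside the core language; exact class names and setup can vary between library versions, so treat this as a sketch of the pattern rather than a definitive recipe.

```processing
import processing.video.*;        // bring the Video library into the sketch

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();                    // begin reading frames from the default camera
}

void draw() {
  if (cam.available()) {
    cam.read();                   // load the newest frame when one is ready
  }
  image(cam, 0, 0);               // draw the frame to the canvas
}
```

Once imported, `Capture` behaves like any other image source in the language, which is what allows a community library to feel like part of the core.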

Casey Reas, Level 2 Biohazard, 2013, C-print on FujiFlex SuperGloss, 68.6 × 121.9 cm

The further mission

The original mission of Processing was to create software that would make learning to code accessible for visual people (designers, artists, architects), while also helping a more technical audience to work fluidly with graphics. We aspired to empower people to become software literate – to learn to read and write software. We wanted to change curricula in universities and art schools around the world. Instead of teaching students how to use software, we thought it was just as important to teach them how to create software. The further mission is to make code more accessible to an even wider audience. To achieve this goal, we’re investing our resources in mentoring and collaboration. We believe in the synthesis of the arts and technology, and we know the arts are a necessary part of education from a young age. We don’t want to live in a world where technology is developed without ideas and input from the arts, and where only some people have access to learning to code.

We’ve been working on Processing for 20 years now, and it’s difficult to recall what it felt like in 2001. We have an archive of documents and to-do lists, and every change made to the code has been tracked, but this data doesn’t capture the mood or personal impact. Most essentially, Processing is about people, collective learning and exploration. It’s about sharing ideas and giving what you can.

Casey Reas:
An artist and educator, Reas holds a master’s degree from the Massachusetts Institute of Technology in Media Arts and Sciences as well as a bachelor’s degree from the College of Design, Architecture, Art, and Planning at the University of Cincinnati. He is a professor at the University of California, Los Angeles
Ben Fry:
An American expert in data visualisation, Fry received his doctoral degree from the Aesthetics + Computation Group at the MIT Media Lab. He is principal of Fathom, a design and software consultancy located in Boston
