Dan Sandin interview — jonCates, bensyverson, + jon.satrom (criticalartware), 2003
in 2003 i interviewed Dan Sandin with bensyverson && jon.satrom. two years earlier, in 2001, i created && taught Radical Software, Critical Artware. this class at the School of the Art Institute of Chicago was a Software Art studio course. the criticalartware collaboration was born out of that class. my students, having been inspired by my ideas && each other's Glitch artworks, joined me in developing new && innovative ways to make && share Media Art Hystories. we worked together, to international acclaim, documenting the first-hand accounts of influential Media Art figures such as Dan Sandin && many others
i began two major archival projects at that time that would bring Digital Art into closer conversation with early Video Art. this interview with Dan Sandin marks an important beginning for me. my Chicago New Media exhibition that i curated at Gallery 400 && as an invited program of Ars Electronica, as well as the COPY-IT-RIGHT Phil Morton Memorial Research Archive that i founded, both begin here. our interview was also the first step in my life-changing relationship with the ever-inspirational Dan Sandin
we recorded this interview at the Electronic Visualization Lab of the University of Illinois at Chicago, just over 22 years ago, on April 9th 2003, && then released it online publicly via the criticalartware platform as both video && text. im superHappy to share this original research again, with yawl here today, as a free && open educational resource for glitch.school + Media Art Hystories research && teaching — jonCates 2025.04.13
Dan Sandin interview — jonCates, bensyverson, + jon.satrom (criticalartware), 2003.04.09
║▒░║ ║▒░║ ║▒░║ ║▒░║
][►] [▫️] [►] [◄] [▫️] [►] [
║▒░║ ║▒░║ ║▒░║ ║▒░║
since the late 1960s Dan Sandin has developed artware systems integrating digital and analog computers, customized circuits, home{brewed|built}-hardware, video games + virtualReality.
Sandin, a professor @ the University of Illinois at Chicago, founded the Electronic Visualization Lab (EVL), created the Sandin Image Processor (I.P.), developed the CAVE virtual reality (VR) system + various other [artware systems/technologies/projects/pieces]. Dan Sandin’s Image Processor (built from 1971–1973) offered artists unprecedented abilities to [create/control/affect/transform] video + audio data, enabling live audio-video performances that literally set the stage for current praxis. open sourcing the plans for the Image Processor as an [artware/system/toolset], Dan Sandin + Phil Morton created the Distribution Religion. as a predecessor to the open source movement, this approach allowed artists to engage with these systems + continues to [interest/inspire] [artists/developers]. honoring the Distribution Religion, the innovative hystory of the Image Processor + addressing the vibrant potential of these systems, criticalartware archived the Distribution Religion, converted the documents to a single PDF file + now releases these plans to the {critical|artware} community.
criticalartware interviews Dan Sandin, [discussing/illuminating] the community + development of the early moments of video art in Chicago, artware, performing live audio-video, virtual reality, open source movements, righteous NTSC outputs, the video revolution + the changes ++ similarities that [bridge/differentiate] then && now. criticalartware freely offers this interview as {text|audio|video} data to be downloaded via the interweb + exchanged as shared cultural resources. (bensyverson, jon.satrom ++ jonCates)
║▒░║ ║▒░║ ║▒░║ ║▒░║
1969.BAK
Dan Sandin: So I’ve been asked to talk about the kind of history of the analog image processor and how it began, but i want to kinda back up one step and talk about an event i did in 1969 with Myron Krueger, who wrote Artificial Reality, and Jerry Erdman. It was a computer-controlled environment where there were pressure pads in the floor that sensed the positions of people. There was a PDP-12, which was a laboratory instrument computer, designed to monitor and control laboratory instruments, hence it had all this analog IO ([input/output]). That analog IO was used to sense the positions of people and control light displays that were based on phosphors which were charged up and sent around the room in tubes. And also to control a Moog Model 2 synthesizer to control the sound environment.
║▒░║ ║▒░║ ║▒░║ ║▒░║
EARLY VIDEO ART COMMUNITY
Dan Sandin: It’s a little hard to say but certainly a number like 20 or so would miss people. Now at the time, in the early 70s, there really was a video revolution going on and a lot of people doing video art and video politics. A lot of that had to do with the realization that T.V. was the dominant communication medium of the time, and even more so today. Having access to the means of production for a variety of goals was revolutionary. There were video groups that were interested primarily in community organization and political speech. There were other groups, like the one here at UIC and the School of the Art Institute, who were interested in personally expressive art or personal transformation through technology, which was really our idea of it. This was a nationwide community. Chicago was perhaps uniquely technical in the sense that we were very interested in developing these tools for the abstraction of video and doing nonstandard special effects on them.
║▒░║ ║▒░║ ║▒░║ ║▒░║
SHARING ++ ORGANIZING
Dan Sandin: We used to show up at this bar and play tapes. Each of the organizations would organize several video events of various kinds per year. You put 3 or 4 organizations together and that ends up a video event a month to go to and see tapes and meet your friends. It was a very cooperative group, a very sharing group. It kind of had to be because although the equipment in a sense was personally affordable, it was like the cost of an automobile for an editing tape recorder, and a portapak was like $1000.00. I think back then Volkswagens cost $1200.00 or something like that. You indeed could afford one of the instruments but if you wanted to have a critical mass of instruments, sharing was the operative mechanism so you could do the complete production process.
║▒░║ ║▒░║ ║▒░║ ║▒░║
SANDIN IMAGE PROCESSOR
Dan Sandin: I had been working in this kind of technological art area, so my visual background was in photography; my academic background was in physics. I used to do lots of false color photography, photography that was based on color processing techniques that produced false colors, and light modulators like bent pieces of plexiglass, like a lightshow in reverse. The same technology is used in lightshows of the same period. It occurred to me, after my experience with the Moog Model 2, which is a modular patch programmable analog sound synthesizer, it occurred to me in a discussion i had with Ross Dobson. We were hanging around and said ‘What would it mean to do the equivalent of a Moog Model 2?’ A little bit of thinking made us realize that gains would be like fading in and out, multiplication would be like keying, and addition, like mixing sounds together, would be like superimposition. So if one simply took a Moog Model 2 and increased its bandwidth to handle video signals, you would have a very significant processing instrument. Now at the time, the video revolution hadn’t happened yet, so i viewed this as largely a still image processing instrument, because i could take pictures with my 35mm camera and i could scan them in a slide scanner effectively and then process those images and then rephotograph the results. That was the context for the original idea.
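Sandin's mapping of Moog operations onto video can be sketched as per-pixel arithmetic on image frames. The following is a minimal, hypothetical NumPy illustration of those three correspondences, not anything from the analog I.P. itself: gain scales luminance like a fade, multiplication by a matte acts like a key, and addition superimposes two images.

```python
import numpy as np

# Two hypothetical grayscale "frames" with luminance values in 0.0..1.0
frame_a = np.full((4, 4), 0.8)
frame_b = np.full((4, 4), 0.4)

# gain ~ fading in and out: scaling a frame's brightness
faded = 0.5 * frame_a                      # every pixel at half brightness

# multiplication ~ keying: a matte gates where frame_b appears
matte = np.zeros((4, 4))
matte[:, 2:] = 1.0                         # right half of the frame keyed on
keyed = matte * frame_b

# addition ~ superimposition: summing two frames, clipped to legal range
mixed = np.clip(frame_a + frame_b, 0.0, 1.0)
```

The same three operations, run 30 times a second on full frames, are essentially what the analog circuitry did continuously in realtime.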
I left the University of Wisconsin and moved here to the University of Illinois. This was 1969; i was hired to bring computers into the art curriculum of the University of Illinois at Chicago. Of course there are still people debating whether or not that should be done. After i got here i got interested in video. Part of what interested me was a film called “On/Off”, which was called a video film in that it was essentially photographed off of a T.V. screen where the T.V. was being modified in various ways. That struck me. It was a very kinetic and wonderful kind of experience. When the Rover 2 happened with video, it occurred to me i wouldn’t have to do this in a T.V. studio or with stills. I could do it with moving images. An individual or a small institution or a group of people could actually own the tools of production, which wasn’t the case before. So then i decided to build the Image Processor as a realtime video processing instrument.
In the Moog Model 2, the control signals were 0 to 10 volts and they were separate from the audio signals, which were more or less standard 1 volt line levels. In the analog I.P., control signals and video signals were intercompatible, so that any T.V. signal could become a control signal.
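The intercompatibility Sandin describes, where any video signal can double as a control signal, can be sketched the same way: one image's luminance modulating another image's gain, pixel by pixel. This is a hypothetical NumPy sketch of the general idea, not the I.P.'s circuitry:

```python
import numpy as np

# A luminance ramp standing in for a video frame; in the analog I.P.
# any T.V. signal like this could be patched in as a control signal.
control = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))   # dark-to-bright ramp
source = np.full((4, 8), 0.9)                          # a flat bright frame

# The ramp modulates the source's gain per pixel, so the result fades
# up from black on the left to near full brightness on the right.
modulated = control * source
```

In the Moog, by contrast, the 0-to-10-volt control bus and the 1-volt audio path were electrically distinct, so audio could not be patched directly into a control input the way video could in the I.P.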
I basically built the first version and Phil Morton, in fact, had been aware of this as i was building the first version. As it became done he said, i want to build a copy. I said, you definitely have permission to build a copy, but i actually don’t know if you can build a copy. I didn’t want to be insulting, but i didn’t really know what it took, because you’re so immersed in this stuff, in designing and building it, it’s hard to figure out what resources are actually necessary to do a copy. We talked about it a little bit and he built the first copy. As part of that process of building the first copy we did a documentation that was sufficient so that other people could copy it. It took a year’s worth of Friday afternoons where I’d show up at Phil’s house and we’d work on his I.P. for awhile and he’d produce part of the documentation and I’d work on it. We developed a format and a way of doing it. I won’t say it was intensive work every Friday afternoon but it took a year of getting together and fiddling at it. By the end of that we actually accomplished a document which enabled a large number of people to build copies. In a period of a few years, 20 or so copies of the I.P. were built, almost all of them by individuals or small arts institutions. Really not by electronic circuit nerds, in general, but by people who wanted to have access to the power of the tools and were willing to learn what they had to learn to do it. Part of the structure was that the first thing they did was to build their test instruments from Heathkits.
I arranged for a printed circuit card company to keep the masters and people could order the printed circuit cards from them. What i sent out was a list of all the components and where to get them. And you could literally xerox these things and send them off to Newark and Allied and get the components and then start building.
I literally had images of what i wanted to do with the I.P. I had these things and they were based on educating myself in these photographic processes of optical processing stuff and in my own experience. I worked with a company called Lighting Systems Design, LSD. And it had cards with the letters LSD that glowed in black light. And we did light shows and i did it for several years. It was great. I really enjoyed it. There are these kinds of lifestyle issues of being able to involve yourself in your art at all sorts of levels. Tom and I did the first technical presentation at the first Siggraph. It was called “Computer Graphics as a Way of Life”. We really meant it that way. We felt that these were tools that could transform your life. You could use it in education. You could use it in the expression of your art. You could use it in the creation of tools. It was a technology that could affect an enormous range of activities. And that’s of course true.
We used to talk about doing the video as a personally transforming process, and the video tape was the exhaust. In a sense, I’m still interested in doing that. So you have these visions and you have to extend the tools to execute these visions. And so you do. You do, and then extending the tools is only a step in the process. And then of course the process of producing the next vision informs you about doing the tools. And so the practitioner-toolmaker is an important aspect of the game versus a specialist who makes tools and then a user community who has a relatively peripheral influence on those tools. And one of the things that is really wrong with the computer community, essentially, is that the development of software is primarily for processes that can make a lot of money. And artists by and large have bought into the idea of using these tools. So they are only using tools and modalities that were designed for an environment to make lots of money. So the purpose of these tools and the way they’re structured are quite different. Not that you can’t do interesting things with them and there hasn’t been great art made with them and stuff, but from my own personal point of view that strikes me as like a serious problem. The artists, in order to really be advanced, have to be technically competent enough to be able to expand, subvert, change and create their own tools. Of course, that is going on. I mean, that’s not gone. There is still a significant thread of that, and a significant thread in Chicago here that i see pop up every once in awhile. And the open source movement… you go to art conferences that are kind of art cross tech conferences, like Version>03 that i was just at in Chicago, and the artists and the technologists have a continuing discussion about open source and ways to kind of change that structure from the artists as consumers of stuff that was meant for other purposes to artists as participants and creators in the technological form itself.
║▒░║ ║▒░║ ║▒░║ ║▒░║
LEARNING ++ BUILDING
Dan Sandin: At that time, everybody had to be their own video technician because it wasn’t specialized. All of the video community probably had a deeper understanding of video than the modern video student does, partially because of the necessity of it. So i would say the entire community was technically savvy. As i mentioned, most of the people that built this weren’t really part of, let’s say, the hobbyist electronics community. But were really people who were interested in video and art and learned the electronics, or had friends who were crossovers in some sense and worked together. So in some sense, it is very much like the open source development that we have today in software. I mean, i made these plans available and for five bucks i would send somebody a copy, and five bucks roughly covered xeroxing and mailing costs. There were a bunch of new modules developed by people at other sites. Dick Sipple in Cleveland did a wonderful job of building their copy, physically put together much better than mine, and also added several important modules. And there were a number of people around the country that developed new modules and improvements on modules. They would send me the plans in more or less the same format. And i would kind of add it into the stack and send them out. I remember i sent out something with the original thing saying that, you know, i suppose that if somebody made a lot of money with this i might want some of it back. And then that didn’t feel quite so comfortable. So later on i changed it to: i want a good video tape from every I.P. that gets copied. That was even too much for people. Not so much that they were unwilling to do it, but they took it too seriously! I just wanted a video tape from them, but they thought it had to be a magnum opus. I even dropped that. But the important part was it kind of propelled itself. I was not put into managing a company or personally helping every I.P. builder.
The fact that people who had already built I.P.s could help other people build I.P.s kept me from immediately being put in a maintenance mode or a management mode. So it was very successful.
I was in a unique position in that i was a professor. As far as i was concerned, i was paid by the state to create and disseminate information, so i did. That was fine with me. I didn’t feel i needed economic support back. Other people did strategies like trying to start a company and trying to sell these things. Not so much with the motivation of making money, although they would like to support themselves, not like the dot com or anything, where they expected to get rich over it, but as a way to make these things available to people. Bill Etra and Steve Rutt did the Rutt-Etra Synthesizer, which was an analog synthesizer very similar to the I.P. in its electronic structure, although it was a scan processor. It primarily operated with the geometry of the image, and my instrument was a video signal processor, essentially a grey value or color processor, and primarily played with the grey value domain and not the geometry of the image. The two together made great instruments. He decided the best way to distribute was to make this company and sell these things. And so we had this ongoing contest as to who would be the most successful at distributing stuff. I won by, i think, 2 instruments or something like that. And he actually worked a lot harder at it than i did.
║▒░║ ║▒░║ ║▒░║ ║▒░║
REALTIME FUNCTIONS
Dan Sandin: The idea of actually copying the instrument strikes me as insane. I mean, it’s a… what was it, 1970s… a 30 year old creature, a 30 year old electronic design, so nobody should be copying that stuff. There are enough pieces around so that museums should just take the original ones and put them together.
There are a number of functioning I.P.s still out there and people are still using them. What i would think you would do now is do it in software. In other words you would take some of these high speed processing cards that are associated with modern personal computers and program it. To me, that would be the rational thing to do.
One of the things about the I.P. that makes it appealing is that modern digital tools are really much clumsier than the I.P. was. Much less tactile. Much less spatial. There you could do 3 or 4 things at once and on a modern computer you would have to do 25 mouse actions to make 1 change. It’s absurd. The human interface in that kind of creature and the ability to learn it like an instrument, where its physical distribution of space allows you to do things before you… muscle memory does them before you think of them. You just envision the conclusion and your hands do it. It’s like, that’s what happens when you play a musical instrument, and that kind of relationship with the I.P. was indeed very possible. And there were virtuosos, people who could do completely amazing things.
So the way you developed a piece was through a process of improvisation and rehearsal. I used to make this joke that… there were always these people talking about video art, and i remember giving my tapes to some curator, and as they went out the door i said “you know, that person has seen less video art than Phil Morton rehearses for 1 piece.” and so it was very much that kind of process of producing. It was very very rich in feedback and very rich in kind of self education. You would just kinda put yourself into this instrument and mess around and you would discover stuff and you would learn how to get back there and then you would learn how to sequence those amazing places into some kind of structure.
Video editing was actually a substantial problem at the time and most pieces that i did and Phil Morton did in an important sense were not edited. They were, to a large extent, live performances maybe chunked together in an edit of some sort. You couldn’t… there was no way you could get 2 timebase correctors and 2 tape recorders into the same room. So mixing was out of the question. Now cuts, that was possible. Mixing could be done live with the I.P. but could not be done in post-production.
║▒░║ ║▒░║ ║▒░║ ║▒░║
DIGITAL SYSTEMS
Dan Sandin: I’m not really very familiar with it. I mean, I’ve seen a couple of video performances where people seem to be able to make these things work. I am familiar with the kind of tools that are available in post-production and they’re of course the kind i described, but then again you are in a post-production phase. So that’s not disabling; the fact that you can get to the same place and exactly the same effect at exactly the time that you typed in has got real advantages. And it doesn’t require, in a sense, the kind of musical… it’s more like composing than it is like playing an instrument. There are real advantages to that kind of approach.
There are live performance instruments. I’ve seen a couple and they kind of seem to work well. But, based on the email lists that i chat with about this stuff, i think that they are not really at the state of evolution that the analog stuff was in the seventies. And they would like to be. And could be. I think it’s certainly possible now to create an instrument that is vastly more powerful and just as interactive and live as the analog I.P., with digital tools. You just aren’t going to do it by stringing a bunch of tools together that were designed for other purposes. It would have to be like a system level ground up design.
║▒░║ ║▒░║ ║▒░║ ║▒░║
VIRTUAL REALITY
Dan Sandin: One of the things that’s nice about VR is that it doesn’t have a WIMP interface. “Windows, Mice and Pointers” for those who haven’t heard the term before. Not at all pejorative. I’m using more and more video materials, video acquired materials, in my work because the technology is able to handle those better and better. Essentially displaying… to do what i want to do i have to display like 2000 by 500 texture maps at 30 frames a second and that stretches the Onyx system. However, Linux boxes can probably do it. Just haven’t gotten the software yet to work on them.
But at some point, i have some affection to go back and do it. I would do a spatially oriented interface. I would use a bunch of musical instrument style midi controls for it and assign functions and you know where they are and you’ve got lots of knobs and buttons and keys and body sensors and a whole range of input stuff that is very different than what was designed to be the computer model of the office.
║▒░║ ║▒░║ ║▒░║ ║▒░║
REALTIME HYSTORIES
Dan Sandin: One of the things that i think may not be clearly understood, but you can see from 1 of those tapes, is that there is video and computer graphics that Tom DeFanti and i put together very clearly in the early 70’s. Tom DeFanti came in 1973 with an instrument called the GRASS system, which stands for GRAphics Symbiosis System. It was a realtime computer graphics system based on calligraphic displays. These are displays that, instead of creating a raster in a T.V. format, actually drew things on a phosphor screen, at higher scan rates, and then would redraw them. Hence the term calligraphic. It was a realtime instrument. You would draw it once and it would fade away. It was as easy to change every frame as it was to draw a frame. So it structurally was a realtime medium. The device that did it, the equivalent of the graphics card, was actually an analog computer, the Cadillac of its time. It was equally as good as these Onyxes were 10 years ago. We would actually point the T.V. camera towards these display screens and then perform on that instrument in realtime, as a combination. So computer graphics and video had actually been together thoroughly in my work for a long time.
The switch over from… there were a bunch of them. First of all there were calligraphic displays, and then they became technically too hard to maintain, and then the raster graphics came in as the model. But raster graphics had this problem of stopping computer kinetics, motion. Computer graphics and everything became computer animation because memories weren’t big enough and computers weren’t fast enough to create any of this stuff in realtime. Computer graphics stopped moving. I stopped being interested. But there was one segment of the community where things still moved and that was video games. Now there were little space roaches moving around on the screen. Not the whole screen, but little pieces of the screen could move around, because computers were up to that. Tom and I worked with video game manufacturers and essentially designed equipment for artists that was based on the video game technology of the time. I designed these computers to have righteous NTSC video output so that these could be recorded, and these were meant for cable T.V. markets or maybe small T.V. stations. It was never a great economic success but, like the analog I.P., it had a large community of very committed users that produced a lot of great work with these instruments.
Then in about 1987 or so, when the Personal IRIS came out, the kind of geometry engine machine that was designed to do realtime computer graphics again, i remember looking at the screen and saying “Oh my god, this stuff moves. I could be interested in this again.” i started looking at that in a serious way. I had diverted into animation and the video game stuff and into a couple of other 3D mediums like PHSColograms and stuff. Basically i was out of the motion graphics business because animation was too much like work. But then this stuff moved, so i started to focus that energy. In 1990 i basically kinda decided that virtual reality was really the interesting thing to do. And i went around and talked to the various Virtual Reality inventors and creators. Then Tom and I sat around deciding how to do it differently. And we did. A few years later the CAVE popped out, which became one of the dominant forms of Virtual Reality.
║▒░║ ║▒░║ ║▒░║ ║▒░║
NAMING.SYS
Dan Sandin: Coming to the name, we had other things like Pocket Cathedral, Interactive Mosque and Video Shrine. All these names were part of this process… and then it kinda feels like a cave and all of that stuff…
Yggdrasil is the tree of life in Norse mythology. Everybody calls it “YG”. The creator Dave Pape thinks it should be called “ygg”. He never intended it to be an acronym, YG. It’s supposed to be “Ygg”. But he did a system before that called XP so that it was very hard for the community to switch over to “Ygg”. That’s entirely Dave’s. Dave was a graduate student here and YG, excuse me “Ygg”, is his thesis. There is a very significant community of people operating with “Ygg” around the world. It is a scripting language. You have to understand the idea of a scene graph and understand hierarchical relationships. You can write your own nodes and extend the capabilities.
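Sandin's two requirements for working in Ygg, understanding a scene graph's hierarchy and being able to write your own nodes, can be sketched in a few lines. This is a hypothetical illustration of the general scene-graph idea, not Ygg's actual API; all names here are invented for the example.

```python
# A minimal sketch of the scene-graph idea: nodes form a hierarchy and
# you extend the system by writing your own node classes.

class Node:
    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def traverse(self, depth=0):
        # visit this node, then all descendants, yielding (depth, name)
        yield depth, self.name
        for child in self.children:
            yield from child.traverse(depth + 1)

# A custom node type: the hierarchy plus behavior of your own
class SpinNode(Node):
    def __init__(self, name, degrees_per_frame):
        super().__init__(name)
        self.degrees_per_frame = degrees_per_frame

world = Node("world")
cave = world.add(Node("cave"))
cave.add(SpinNode("sculpture", degrees_per_frame=2.0))
outline = list(world.traverse())   # hierarchical walk of the whole scene
```

The renderer walks the tree every frame, so a custom node's behavior (here, a spin rate) applies to everything beneath it in the hierarchy, which is the hierarchical relationship Sandin says you have to understand.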
║▒░║ ║▒░║ ║▒░║ ║▒░║
ARTWORLDS ++ ARTWARES
Dan Sandin: Distributing this technology before the art establishment is ready to deal with it is a constant problem. Although now the art establishment can deal with the kind of work we were doing in the 70’s completely competently. It really took them 20 or 30 years to figure out that it’s relevant. It certainly is the case that the stuff that i did 20 to 30 years ago is shown in museums today much much more frequently than the work I’ve done in the last decade. By the time the museums were ready to deal with it, i was on to other things. In Europe and Japan there is a much better understanding of media arts as central to art production, now in the 21st century. They have real museums. The Ars Electronica center is a real live multi-floor museum that was captured by some very visionary people. It was going to be a modern art museum but it became a media oriented museum and it’s really out there. And there’s a couple of others in Germany and there’s the ICC in Tokyo. These things have like permanent installations. They have traveling shows. The United States has been particularly retrograde and continues to be kind of backwards on this stuff. It just means that the United States is going to lose its preeminence in the artworld, which it has been able to maintain for the last number of years by having been forward looking at the beginning of the century. And now it isn’t.
But there are always opportunities. Siggraph was and is a continuing opportunity. ISEA is a good opportunity. There are these episodic shows. And now the LA County Museum and a couple of other museums are doing a show a year in this area. They tend to still play it pretty safe and are connecting up with the older versions of this stuff. But there is progress on it. By and large the artworld is irrelevant to these kinds of technologies I’m talking about, to these kinds of communications. It has chosen to be irrelevant and is staying irrelevant and is just no longer a significant player in the game. It could be again, but you look at things like video tape… they really have not been able to deal with video tape. In the 70’s most of the Video Art was on video tape. What has survived is installations. And that makes sense. The museum actually has a place there. It has a physical place where you can set up these things. People come in and look at them. They are meant to kind of walk by and stare at as long as you want. It kind of fits what the museum can do. You can imagine that museums have things that are like the print room. Some of the museums, like the Walker, have excellent archives of tapes. If there is a tape they own and you want to see it, you can show up and look at it. And that’s great. And that’s an important role. But of course, all of this stuff should be on the web. People should be able to view it at various quality levels and either pay for it or not, depending on what models you want to try distribution on. So some of this stuff should be on this website that is next to me. Maybe there is a picture of the I.P. over here and, I don’t know, maybe a picture of the GRASS machine here. Who knows…
║▒░║ ║▒░║ ║▒░║ ║▒░║
][►] [▫️] [►] [◄] [▫️] [►] [