In 2019, a heavily redacted version of the Mueller Report was made public. On May 13 of this year, another heavily redacted addendum, the Mueller Manafort Response, was made public as part of the Paul Manafort sentencing.
If you take both of these documents, combine them, and shrink every page down so that all 486 pages fit on a 44 x 30 inch sheet of paper, it’d look like this, reading left to right, row one to row two, and so on.
That big black box on the third to last row is what one page looks like, a pseudo registration mark. It’s a document break between the report and the response documents.
On visual inspection, of immediate note is the sheer number of black bars. What shade of gray is this? Maybe 20%, 40%? The Department of Justice very liberally blacked out and censored any text that, on review, would do HARM TO ONGOING MATTER (often abbreviated as H.O.M.), or compromise PERSONNEL PRIVACY, among other reasons. These redactions make key parts of the document completely incomprehensible, especially anything to do with Russia’s Internet Research Agency and its past and present activities on US-based social media platforms in and after the 2016 US presidential election.
If you then removed all the text from those aggregated pages, and just looked at redaction blocks, it’d look like this:
It takes a village, folks.
Profound thanks to
Alex Thompson at Pagoda Arts, Emily York and Courtney Sennish at Crown Point Press, Kate Randall (invisible ink, burn art history, burn safety), Ian Roxbourough (combustion)
David Ireland, Sidewalk Repair, 500 Capp Street, San Francisco, 1976. Conceive of something that is a maintenance action.
Agnes Denes, Wheatfield – A Confrontation, 1982. Is there a way to transmute the urban materials of concrete, sidewalks, into something green?
Site Context
Situated on the public sidewalk space and streets of the Mission District, in San Francisco, CA.
Public space is currently contested in this area. One of the main parks is half closed, and tensions between longtime and new residents, between residents and visitors, and between residents and tourists mix in unpredictable and often antagonistic ways.
Dolores Park is half closed due to a multi-year park improvement project, and has been the origin for clashes between under-staffed park rangers and partying parkgoers. The neighbors immediately surrounding the park are concerned with late-night vandalism, and the mounds of trash left by partying parkgoers every weekend.
Public trash piles and sidewalk litter are exacerbated by the removal of public trash cans on San Francisco city streets in 2007 by the Board of Supervisors and Gavin Newsom. A summary quote from the linked-to article:
“Before, there used to be a container on every street,” said Salvador Román, the janitor at La Victoria bakery on 24th and Alabama. “Now there is one every two blocks.”
Possible Actions
Goal: Choose an approach that moves your work in a new trajectory and deepens your experimentation. What kind of public might the work produce? Take into account your mode of address and tactics, considering time and site, the political moment, and opportunities for solidarity, amelioration, agonism, antagonism, etc. Have a working Plan B prepared. — A.B.
A Plan: Each of the participants will enact sidewalk sweeping, trash removal, and/or mobile trashcan-being as a counter-performance to the EDN Earth Day Street Fair. We target Capp Street, 15th to 20th, from 11am to 3pm. Each participant will wear an identifiable uniform or costume, wear gloves, and carry a broom. (??) We will document this action with still photography and perhaps some GoPro footage.
B Plan: We attend the Earth Day Network event on 22nd street starting at 3pm, and document how the actual event differs from our perception and conflict with Earth Day.
Complicating Distribution Plan: We will distribute documentation of the action on social media, using the Yes Men methodology to give the media an in to a longstanding grievance. Can we target a supervisor for the Mission or a neighborhood group? Can we do this in other venues? Depends on the reaction.
All the Left/Right Uhuras, 2014 All the Center Uhuras, 2014
68 cm x 86.5 cm, inkjet over lapis and silvertone wash on Awagami Bamboo 250 gsm paper. Master jedi paper tricks via Emily York.
These prints are composed of 288 cropped images of Uhura from the television show Star Trek. Each frame of the first season is analyzed with facial recognition software, and found Uhura faces are either inscribed with tattoo-like circles representing individual facial detection algorithms, or scaled, cropped, and center-aligned via sophisticated image-processing routines.
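The scale/crop/center-align step can be sketched in a few lines. This is a minimal illustration, not the project’s actual code: the margin, output size, and function name are invented for the example, and the face rectangle is borrowed from the audit listing further down this page.

```python
# Sketch of center-aligning a detected face: expand the detected rect by a
# margin, keep it square, and report the crop box plus the resize factor
# that maps it to a canonical output size. All parameters are illustrative.

def center_crop_box(face, out_size=128, margin=0.25):
    """Return ((left, top, side, side), scale) for a face rect (x, y, w, h)."""
    x, y, w, h = face
    side = int(max(w, h) * (1 + 2 * margin))   # square crop, with padding
    cx, cy = x + w // 2, y + h // 2            # center of the detected face
    left, top = cx - side // 2, cy - side // 2
    scale = out_size / side                    # resize factor to out_size
    return (left, top, side, side), scale

# The face rect from frame tos-101-0130 in the audit output.
box, scale = center_crop_box((294, 162, 407, 407), out_size=128)
```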
To offset the explicitly computed nature of this work, the images are aligned on broken grids, and floated on an organic background of silvertone metallic or lapis mineral pigments.
Two 1080p LED televisions, two 8GB SD cards, video @ 960 x 720 pixels, loops of 3:00:00 and 2:00:00 hours.
ONE Tens, hundreds, thousands. How many photographs do you look at every day? Do you swipe through photographs on a smart phone, scroll through images on Tumblr, scrub through frames on Netflix? More images per day than last year? Two years ago? Ten years ago? I wanted to explore an environment of extreme multiplicity. Thousands, tens of thousands of images. I wanted to create and manipulate images in pre-determined ways, but using image mass-generation techniques. Metadata is a by-product of the mass manufacture and processing of images, and exploring the oceans of possible patterns created as a by-product of a machine counting thousands of images had great appeal. Like generating waves with a mechanical device in a wave pool, this would be a super-human way of counting, with very clear rules about the perception of the image: detection, perception codified. I picked facial recognition algorithms as my art tool, and use them to count faces.
Uhura Audit 23 Details S01E04 frame 1391: All Faces, All Eyes, All Lips, 2014
3 x 960 x 720 pixels
TWO The appeal of All Eyes, Multiple Eyes, Fanciful Eyes. The three augmented frames above comprise a sample from the 1391st second of Star Trek TOS, the 4th episode of the first season (S01E04). As part of the initial processing and detection of objects, each frame is scanned for face-like objects, then for eye-like objects, and then for lip-like objects. The first frame is “All Faces,” which is composed of 5 algorithms run up to 3 different ways. The second frame is “All Eyes,” which is composed of 7 algorithms, all run the same way. The last is “All Lips,” which is generated by 2 algorithms, also all run in a similar manner. Note that “All Eyes” is generated by running the algorithm over the entire frame, instead of over just the detected face area. I am consciously mis-using this algorithm to generate ornamentation. This is of dubious utility for anything other than generating pictures that dazzle: generated virtual tattoos, a lot of pixel ink spread around to establish identity.
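When several detectors run over the same frame, many of the resulting rectangles land nearly on top of one another. For counting (rather than for the layered ornamentation above), near-duplicate detections have to be collapsed. A minimal sketch of de-duplicating by intersection-over-union, assuming detections arrive as (x, y, w, h) tuples; the threshold is an illustrative guess:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x1, y1 = max(ax, bx), max(ay, by)
    x2, y2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def merge_detections(all_rects, thresh=0.5):
    """Keep one rectangle per cluster of near-duplicate detections."""
    kept = []
    for r in all_rects:
        if all(iou(r, k) < thresh for k in kept):
            kept.append(r)
    return kept
```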
A Few Frames from Spock Audit 23 Results, 2014
4 x 960 x 720 pixels
THREE Spock Eyebrows and Cat-Eye Makeup Considered Harmful. The frames above show a face detection mistake that heavily impacts efforts to use human-face detection algorithms on the non-human face of the character Spock. In the Star Trek universe, Spock is a hybrid: half human, half alien (Vulcan). For some reason, the green box that outlines the boundary of the detected Spock face in each of the images above truncates the face at or above the lips. I speculate that this augmentation pattern is caused by the heavily arched eyebrows of Spock being mistaken for prominent cheekbones, effectively pushing the recognized facial region upwards and clipping the mouth. Looking at different episodes of the series, I see this same mis-detection replicated in Uhura frames with extreme cat-eye makeup.
Git Wins, 2014
g+ post, text
FOUR Reflections on Process. This exploration started as a generative art project targeting media. Little did I know that I’d end up buying my first television sets as part of the development process, and lose myself in the void of a consciously computed image environment. Intermittently back from the void, let me report on what I’ve found from time spent doing art and science at the same time. I see a lot of engineering processes: a strangely elliptical 10-13 day project management schedule for keeping track of technical tasks, source code versioning, a lot of linux system-level administration and tuning. Tens and tens of thousands of lines of C++11 and one hot processor, the requisite kludgy bash scripts, and gigabytes of image files nestled in directories by the six thousand. I started inserting art tasks into the project management scheme, forcing visual priorities to become explicit. As an art process, the manipulation of thousands of frames, picking apart video and putting it back together, has given me some interesting insights into Star Trek. I’ve noticed certain editing patterns previously invisible, the favoring of certain sides of characters by the camera, and that counting a character’s frames changes the character’s meta-narrative in my head. All this formalist film analysis, done by algorithm.
Sort 4 Negatives All Faces, 2014
960 x 720, 8:28 minutes
FIVE Faceless, Negatives, and the Fleeting Form. In addition to finding faces, one can use face detection to assure that there is no face, to detect nothing. That’s what these frames are: a moody collection of people walking through closing doors and turning corners, of machines levitating through space, space ships cruising the galaxy, of cropped hands, the backs of heads, murky faces in the shadows. It turns out that alongside the rise of face detection, and its arrival in easy-to-use commercial forms like Facebook photo-tagging, there is a concurrent rise in the desire for explicit face removal. Search the #faceless tag on the social media platform of your choice for a peek into this particularly interesting data set.
A Few Frames from Kirk Audit 16 Results, 2014
3 x 960 x 720 pixels
SIX Kirk Dimples Considered Harmful. Another mistaken detection, particular to Kirk: the deep chin cleft is a shape that eye detectors can mistake for an eye. Proper tuning and some additional computation fixes this issue, but the mind reels. Human faces are mostly symmetric, but filled with parts that make them non-uniform. There is no recognizer for scars, dimples, or birthmarks.
Select Frames from Audit 16 Results, 2014
4 x 960 x 720 pixels
SEVEN Computers Recognizing Computers. Both the Uhura and Spock characters are often seated in front of a bank of blinking lights, a typical 1960s imagineering of technology. By chance, the relative size and positions of the lights comprising the background technology, and the profile nature of the seated character, combine to confuse the face recognizer algorithms.
126 Uhuras as Seen on TV, 2014
17 x 45″, inkjet on two sheets of Awagami paper, space
EIGHT The rise of Uhura. Before I started this project, my favorite Star Trek character was Spock. Now it’s Uhura. Of the three Star Trek characters I’m stalking with face recognition, Uhura has the fewest good samples. At this point, there are 2909 Kirk positive samples, 1838 Spock samples, and 288 Uhura samples. Just as a point of reference, there are 351 title frames in an equivalent sample of Star Trek frames. That there are fewer solo-Uhura frames in a typical Star Trek episode than title-credit and end-credit sequence frames can be explicitly quantified. I catch myself constantly scheming to figure out edits and algorithms that will give her more screen time, retroactively. I triple-count the Uhura frames as I count all her eyes, all her faces, all her lips. Computer, tell me about gender and race in 1966-69 USA.
Sort 4 All Grid Uhuras Waterfall, 2014
960 x 720, 48 seconds
Miscellaneous Augmentation Keys, 2014
svg files, text
NINE Compression, augmentation, visualization. A part of this project involved running every single face, eye, and mouth recognizer over each “frame” of a Star Trek episode. To get an idea of how the algorithms were performing in my setup, I created these color-coded augmentation schemes that allowed me to look at the results of each algorithm, directly applied to the original frame. Each frame is then smashed up against another in a compressed form, and watched on a screen or projected against a wall. That’s what these video clips encode: first detected faces, and then later recognized characters and specific character interactions.
audit version: 23f 2014-07-08
kirk samples: 2909
kirk samples with detected faces: 2780
front faces: 1081
(h x w) Max 527 Median 312 Min 134
(faces <= size) 1025 <= 400 610 <= 320 168 <= 240
profile faces: 639
(h x w) Max 502 Median 317 Min 123
(faces <= size) 582 <= 400 330 <= 320 83 <= 240
tos-101-0130
face: 294 162 407 407
eye 1: [82 x 82 from (83, 117)]
lips: 3
[161 x 97 from (113, 301)]
[155 x 93 from (60, 117)]
[144 x 86 from (229, 129)]
Kirk Audit 23 Results, 2014
c++ 11, math, text
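The audit summary above (max/median/min face sizes, plus counts of faces at or under each bucket size) is straightforward to compute. A sketch in Python rather than the project’s C++11; the function name and bucket defaults are illustrative:

```python
from statistics import median

def size_summary(sizes, buckets=(400, 320, 240)):
    """Audit-style summary of face side lengths: max/median/min,
    plus how many faces fall at or under each bucket size."""
    return {
        "max": max(sizes),
        "median": median(sizes),
        "min": min(sizes),
        "buckets": {b: sum(1 for s in sizes if s <= b) for b in buckets},
    }

# Tiny made-up sample, using the extremes and median from the audit output.
stats = size_summary([134, 312, 527])
```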
TEN Rule-based Perception. Humans see faces, and don’t consciously count eye offsets, think about detecting mouths, or consider that a sufficiently cleft chin can trick a common eye recognizer. Humans look at faces and see friends, family, stereotypes and simplifications, other people. Computers look at faces and calculate eye position, estimate posture, automatically center-align eyes and re-scale the entire face to fit. Faces are calculated and enumerated: parts of the software that created this piece detect faces in an algorithmic and speculative fashion: checks that two eyes are detected within the face boundary, that the detected eyes are about the same size, that there is some horizontal spacing between the eyes, and that the vertical distance between the eyes is not so extreme as to indicate failure. Humans see faces. Computers see points of interest in a region specified as interesting. Teaching a computer to see like humans therefore involves moments where the humans decide that the picture in front of it is a full-frontal face when it has no more than 15% deviation from an imaginary nose plane. That both ears are visible. When everything on a face is measured and quantified, there is so much more information, and with it the corresponding ability to both recognize and mis-interpret. Make no mistake, soon computers will recognize more faces and emotions than a sleep-deprived parent, or a tired law enforcement officer. And then, what?
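The geometric sanity checks described above can be sketched in outline. This is a Python illustration, not the project’s C++11, and every threshold is an invented guess:

```python
def plausible_eye_pair(face, left_eye, right_eye,
                       min_dx_frac=0.15, max_dy_frac=0.25, size_ratio=1.5):
    """Checks like those described above: both eyes inside the face box,
    roughly similar sizes, some horizontal spacing between them, and a
    limited vertical offset. All rects are (x, y, w, h) tuples."""
    fx, fy, fw, fh = face

    def inside(eye):
        ex, ey, ew, eh = eye
        return fx <= ex and fy <= ey and ex + ew <= fx + fw and ey + eh <= fy + fh

    (lx, ly, lw, lh), (rx, ry, rw, rh) = left_eye, right_eye
    similar = max(lw, rw) / min(lw, rw) <= size_ratio          # about same size
    spaced = abs((rx + rw / 2) - (lx + lw / 2)) >= min_dx_frac * fw
    level = abs((ry + rh / 2) - (ly + lh / 2)) <= max_dy_frac * fh
    return inside(left_eye) and inside(right_eye) and similar and spaced and level
```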
Ambient
Sol LeWitt Artist’s Books, Corraini Edizioni, 2010
This is a fan studies and media assemblage experiment, loosely associated with Professor Abigail De Kosnik’s Fan Data/Net Difference Project at the Berkeley Center for New Media. It uses technology associated with copyright verification and the surveillance state to deconstruct serial television into a hybrid media form.
The motivating question for this work is simple. How does one quantize serial television? Given a television episode, such as the third episode of Star Trek, how can it be measured and then compared to other episodes of Star Trek? Can characters of the original Star Trek television series be compared to characters in different Star Trek universes and franchises, such as comparing Kirk/Spock in Star Trek to Janeway/Seven-of-Nine in Star Trek Voyager? Given a media text, how do you tag and score it? If you cannot score the whole text, can you score a character or characters? How do characters or elements of a media text become countable?
Media Texts:
Star Trek (The Original Series), aka TOS, 1966 to 1969. Episodes: 79. Running time each: 50 minutes. English subtitles from subscene for Seasons 1, 2, 3.
Star Trek Voyager, aka VOY, 1995 to 2001. Episodes: S03E26 to S07E26, ie #68 to #172, a total of 104. Running time each varies between 45 and 46 minutes.
Media Focus/Themes:
The pairs of Kirk/Spock in Star Trek the Original Series and Janeway/Seven of Nine in Star Trek Voyager will be compared in a media-analytic fashion.
James Kirk
Kathryn Janeway
Spock
Seven of Nine
A popular fanfic genre is called One True Pairing, aka OTP, which is a perceived or invented romantic relationship between two characters. One of the best known examples of OTP is the pair of Kirk and Spock on TOS. Indeed, fanfic involving Kirk and Spock is so popular that it has its own nomenclature, and is called slash, or slash fic.
The pair of Janeway and Seven of Nine is comparable to Kirk and Spock: both the Janeway and Kirk characters are captains of space ships, and both the Seven of Nine and Spock characters are presented as “the other” to human characters: both the Borg and Vulcans are presented as otherworldly, non-human. The two pairs are different in other areas, the most obvious being gender: K/S is male, J/7 is female.
Some K/S edit tapes can be found on YouTube for Seasons 1, 2, and 3, as can some J/7 fanvids.
Open Questions:
This is a meta-vidding tool with an analytic overlay. It takes serial television shows and adds facial recognition to count face time and change the focus of viewing to specific character pairs instead of entire episodes. Developing the technology to answer these analytic questions, answering and understanding the answers, and formulating the next round of questions is the purpose of this project.
1. Should the method cover the first 79 episodes in which the character-pairs appear together? How do you normalize across the series and pairs?
Or should it be minute-normalized, after the edits? The current times are:
TOS == 79 x 50 minutes == 3950 “character-pair” minutes total
VOY == 104 x 43 minutes == 4472 “character-pair” minutes total
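The minute arithmetic above, as a sketch. The per-60-minutes rate at the end is one assumed way of putting pairs from series of different lengths on a common scale; it is not a method the project has committed to:

```python
def pair_minutes(episodes, minutes_per_episode):
    """Total 'character-pair' minutes available in a series."""
    return episodes * minutes_per_episode

tos = pair_minutes(79, 50)    # TOS: 79 episodes x 50 minutes
voy = pair_minutes(104, 43)   # VOY sample: 104 episodes x 43 minutes

def normalize(count, total_minutes, per=60):
    """Express a face-time count as a rate per `per` minutes of footage,
    so pairs from different-length series can be compared directly."""
    return count / total_minutes * per
```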
2. Best method for facial recognition.
One idea is to use openframeworks with the ofxFaceTracker addon, which wraps the FaceTracker library. See the video explaining it.
3. Measuring “character” and “character-pair” screen time. How is this related to the Bechdel test? [2+ women, who talk to each other, about something besides a man] Can this be used to visualize the test, or its flaws as currently conceived? What is Bechdel version 2.0? [2+ women, who talk to each other, about something besides a man, or kids, or family] Can we use this tool to develop new forms?
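The scene-level rule in brackets above can be sketched as a toy check over hand-tagged scenes (rather than detected faces). The tag vocabulary and function name are made up for illustration:

```python
# A toy Bechdel-style check. Each scene is a dict with a list of women
# present, whether they talk to each other, and a single topic tag.

def passes_bechdel(scenes, banned_topics=("a man",)):
    """A work passes if any scene has 2+ women talking to each other
    about something besides the banned topics. Bechdel 'version 2.0'
    just extends banned_topics with 'kids' and 'family'."""
    return any(
        len(s["women"]) >= 2 and s["dialogue"] and s["topic"] not in banned_topics
        for s in scenes
    )
```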
4. How to auto-tag? How to populate the details of each scene in a tagged format? If original sources have subtitles, is there a way to dump the subs to SRT, and then populate the body of the WordPress post with the transcript? Or, is there a way to use Google’s transcription APIs to try and upload/subtitle/rip?
5. Can the Netflix taxonomy be replicated? Given the patents, can some other organization scheme be devised?
Methodology:
0. Prerequisites
Software/hardware base is: Linux (Fedora 20) on Intel x86_64, using Nvidia graphics hardware and software. I.e., a contemporary rendering and film production workstation.
Additional software is required on top of this base: for instance, a g++ development environment, ffmpeg, opencv, and openframeworks 0.8.0.
3. Sort through frames and set aside twelve frames of Kirk faces, twelve frames of Spock faces.
This set is used later to train the facial recognition. Note: you definitely need hundreds and even thousands of positive samples for faces. In the case of faces, you should consider all the race and age groups, emotions, and perhaps beard styles.
For example, meet the Kirks.
And here are the Spocks.
In addition, this technique requires a negative set of images. These are images that are from the media source, but do not contain any of the faces that are going to be recognized. These are used to train the facial recognizer. Meet the non-K/S-faces.
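Preparing the positive and negative sets can be sketched as generating the two list files that OpenCV’s cascade-training tools (opencv_createsamples / opencv_traincascade) consume: an info file with one line per positive image (path, object count, then each rectangle) and a background file with one negative path per line. The data and function names here are made up; only the file formats are OpenCV’s:

```python
# Sketch of building OpenCV cascade-training lists from the sorted frames,
# assuming each positive was recorded as (path, (x, y, w, h)).

def positives_info(samples):
    """Info-file lines: path, number of objects, then each rect."""
    return [f"{path} 1 {x} {y} {w} {h}" for path, (x, y, w, h) in samples]

def negatives_list(paths):
    """Background-file lines: just one image path per line."""
    return list(paths)

# Hypothetical sample, reusing the frame id and face rect from the audit.
lines = positives_info([("kirk/tos-101-0130.png", (294, 162, 407, 407))])
```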
4. Seed facial recognition with faces to recognize. Scan frames with facial recognition according to some input and expected-result algorithm, and come up with edit lists that can be used to find frames that are relevant to the character-pair.
Need either timecode or some other measure that can be dumped with an edit decision list or specific timecode marks. Some persistent data structure? Edits made.
5. Decompose episode into character-pair edit vids.
Use edit decision list or specific timecode marks, as above. Automate ffmpeg to make edits.
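Automating the ffmpeg edits in step 5 can be sketched as turning an edit decision list into one cut command per segment. The filenames and the EDL shape are assumptions; the ffmpeg flags (`-ss` start, `-to` end, `-c copy` for a lossless stream copy) are standard:

```python
# Sketch: build one ffmpeg cut command per (start, end) entry in an EDL.
# Timecodes are HH:MM:SS strings; output names are invented for the example.

def edl_to_commands(source, edl, out_prefix="pair"):
    cmds = []
    for i, (start, end) in enumerate(edl):
        cmds.append(
            f"ffmpeg -i {source} -ss {start} -to {end} -c copy {out_prefix}-{i:03d}.mp4"
        )
    return cmds

cmds = edl_to_commands("tos-s01e04.mp4", [("00:02:10", "00:02:35")])
```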
6. Store in a WordPress container, one post per edit vid? Then, with another post, tie together all of a single episode’s edit vids into one linked post?
Legal
There are both copyright risks and patent opportunities in this line of inquiry.