Mueller (Report + Manafort Response) Redactions Printed With Invisible Ink

In 2019, a heavily-redacted version of the Mueller Report was made public. On May 13 of this year, another heavily-redacted addendum, the Mueller Manafort Response, was made public as part of the Paul Manafort sentencing.

If you take both of these documents, add them together, and shrink every page down so that all 486 pages fit on a 44 x 30 inch sheet of paper, it’d look like this, reading left to right, row one to row two, and so on.

That big black box on the third-to-last row is a single, fully blacked-out page, a pseudo registration mark. It marks the document break between the report and the response.

On visual inspection, of immediate note is the sheer number of black bars. What shade of gray is this? Maybe 20%, 40%? The Department of Justice very liberally blacked out and censored any text that, in its review, would do HARM TO ONGOING MATTER (often abbreviated as HOM) or compromise PERSONNEL PRIVACY, among other reasons. These redactions make key parts of the document completely incomprehensible, especially anything to do with Russia’s Internet Research Agency and its past and present activities on US-based social media platforms during and after the 2016 US presidential election.

If you then removed all the text from those aggregated pages, and just looked at redaction blocks, it’d look like this:

It takes a village, folks.

Profound thanks to

Alex Thompson at Pagoda Arts, Emily York and Courtney Sennish at Crown Point Press, Kate Randall (invisible ink, burn art history, burn safety), Ian Roxbourough (combustion)

We stand on the shoulders of giants

Agnes Martin, John Cage (1, 2), Nam June Paik, Birgit Skiöld, Nance O’Banion, Jenny Holzer, Cai Guo-Qiang

Visual ChangeLog as Stele

[Image gallery: visual_change 04.01 through 04.05, states a and b]

Tumbleweed Prototypes


Tumbleweed Work in Progress, Version 1

2015-07-24

Earth Day Actions

Historical Context

Earth Day was first held on March 21, 1970. It is now “coordinated” via the Earth Day Network on April 22 annually.

See: A Brief History of Earth Day, or the longer American Experience piece on PBS Earth Days (or via YouTube).

Instantiated in San Francisco via the Earth Day SF Street Fair. This event is on April 18, 2015, on 22nd Street between Mission and Valencia streets.


Art Context

Yes Men hijinks, including “Canada turns over a new leaf,” “Balls Across America,” and media jamming in the La Jolla Action. Think of a way to involve the media.



David Ireland, Sidewalk Repair, 500 Capp Street, San Francisco, 1976. Conceive of something that is a maintenance action.



Agnes Denes, Wheatfield – A Confrontation, 1982. Is there a way to transmute the urban materials of concrete, sidewalks, into something green?


Site Context

Situated on the public sidewalk space and streets of the Mission District, in San Francisco, CA.

Public space is currently contested in this area. One of the main parks is half closed, and tensions between longtime and new residents, between residents and visitors, and between residents and tourists mix in unpredictable and often antagonistic ways.

Dolores Park is half closed due to a multi-year park improvement project, and has been the origin for clashes between under-staffed park rangers and partying parkgoers. The neighbors immediately surrounding the park are concerned with late-night vandalism, and the mounds of trash left by partying parkgoers every weekend.

Public trash piles and sidewalk litter are exacerbated by the 2007 removal of public trash cans from San Francisco city streets by the Board of Supervisors and Gavin Newsom. A summary quote from the linked-to article:

“Before, there used to be a container on every street,” said Salvador Román, the janitor at La Victoria bakery on 24th and Alabama. “Now there is one every two blocks.”


Possible Actions

Goal: Choose an approach that moves your work in a new trajectory and deepens your experimentation. What kind of public might the work produce? Take into account your mode of address and tactics, considering time and site, the political moment, and opportunities for solidarity, amelioration, agonism, antagonism, etc. Have a working Plan B prepared. — A.B.

A Plan: Each of the participants will enact sidewalk sweeping, trash removal, and/or mobile trashcan-being as a counter-performance to the EDN Earth Day Street Fair. We target Capp Street, 15th to 20th, from 11am to 3pm. Each participant will wear an identifiable uniform or costume, wear gloves, and carry a broom. (??) We will document this action with still photography and perhaps some GoPro footage.

B Plan: We attend the Earth Day Network event on 22nd Street starting at 3pm, and document how the actual event differs from our perception of, and conflict with, Earth Day.

Complicating Distribution Plan: We will distribute documentation of the action on social media, using the Yes Men methodology to give the media an entry point into a longstanding grievance. Can we target a supervisor for the Mission or a neighborhood group? Can we do this in other venues? It depends on the reaction.

Buzzfeed whiteboard, oops

436 Capp Street, Day Forty Nine, Second Sign, One Tag, One Sold Sticker, Two Speculation Stickers, One Eviction Sticker, Some scratches.

Warhol, “Outer and Inner Space,” 1965 + facerec augmentation v2

Fort Point

All the Uhuras


All the Left/Right Uhuras, 2014
All the Center Uhuras, 2014

68 cm x 86.5 cm, Inkjet over lapis and silvertone wash on Awagami Bamboo 250 gsm paper. Master jedi paper tricks via Emily York.

These prints are composed of 288 cropped images of Uhura from the television show Star Trek. Each frame of the first season is analyzed with facial recognition software, and found Uhura faces are either inscribed with tattoo-like circles representing individual facial detection algorithms, or scaled, cropped, and center-aligned via sophisticated image-processing routines.

To offset the explicitly computed nature of this work, the images are aligned on broken grids, and floated on an organic background of silvertone metallic or lapis mineral pigments.

Presented Without Comment

GVOD + Analytics: Star Treks \\///


This is a fan studies and media assemblage experiment, loosely associated with Professor Abigail De Kosnik’s Fan Data/Net Difference Project at the Berkeley Center for New Media. It uses technology associated with copyright verification and the surveillance state to deconstruct serial television into a hybrid media form.

The motivating question for this work is simple. How does one quantize serial television? Given a television episode, such as the third episode of Star Trek, how can it be measured and then compared to other episodes of Star Trek? Can characters of the original Star Trek television series be compared to characters in different Star Trek universes and franchises, such as comparing Kirk/Spock in Star Trek to Janeway/Seven-of-Nine in Star Trek Voyager? Given a media text, how do you tag and score it? If you cannot score the whole text, can you score a character or characters? How do characters or elements of a media text become countable?

Media Texts:

Star Trek (The Original Series), aka TOS, 1966 to 1969. Episodes: 79. Running time each: 50 minutes. English subtitles from subscene for Seasons 1, 2, 3.

Star Trek Voyager, aka VOY, 1995 to 2001. Episodes: S03E26 to S07E26, i.e. #68 to #172, a total of 104. Running time each varies between 45 and 46 minutes.

Media Focus/Themes:

The pairs of Kirk/Spock in Star Trek the Original Series and Janeway/Seven of Nine in Star Trek Voyager will be compared in a media-analytic fashion.

A popular fanfic genre is called One True Pairing, aka OTP, which is a perceived or invented romantic relationship between two characters. One of the best-known examples of OTP is the pair of Kirk and Spock on TOS. Indeed, fanfic involving Kirk and Spock is so popular that it has its own nomenclature: it is called slash, or slash fic.

Janeway and Seven of Nine are comparable to Kirk and Spock: both the Janeway and Kirk characters are captains of spaceships, and both the Seven of Nine and Spock characters are presented as “the other” to human characters, since both the Borg and Vulcans are presented as otherworldly and non-human. The two pairs differ in other areas, the most obvious being gender: K/S is male, J/7 is female.

Some edit tapes for K/S can be found on YouTube for Seasons 1, 2, and 3. Some fanvids for J/7.

Open Questions:

This is a meta-vidding tool with an analytic overlay. It takes serial television shows and adds facial recognition to count face time and change the focus of viewing to specific character pairs instead of entire episodes. Developing the technology to answer these analytic questions, answering and understanding the answers, and formulating the next round of questions is the purpose of this project.

1. Should the comparison cover the first 79 episodes in which the character-pairs appear together? How do you normalize the series and pairs?

Or minute-normalized, after the edits? The current times are:

TOS == 79 x 50 minutes == 3950 “character-pair” minutes total

VOY == 104 x 43 minutes == 4472 “character-pair” minutes total

2. Best method for facial recognition.

One idea is to use openFrameworks and incorporate an addon: get the FaceTracker library (see the video explaining it) and the ofxFaceTracker addon for openFrameworks.

Another is to use OpenCV directly.

OpenCV documentation main page.

Tutorial: Object detection with cascade classifiers.

User guide: Cascade Classifier Training.

Contrib/Experimental: Face Recognition with OpenCV. See the cv::FaceRecognizer class.
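As a rough sketch, not this project’s published tooling, the OpenCV-direct route on a single sampled frame might look like the following in Python, using the stock Haar frontal-face cascade that ships with OpenCV. The file names and detection parameters are placeholders.

import cv2

# Stock frontal-face cascade that ships with OpenCV; path is a placeholder.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
frame = cv2.imread("tmp-1/out0001.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Returns (x, y, w, h) boxes; scaleFactor/minNeighbors should be tuned per source.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(48, 48))

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("out0001-faces.png", frame)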

Many, many variants go into this. Some good links:

Samuel Molinari, People’s Control Over Their Own Image in Photos on Social Networks, 2012-05-08

Aligning Faces in C++

Tutorial: OpenCV haartraining, Naotoshi Seo

Notes on traincascades parameters

Recommended values for detecting



ffmpeg concat

LBP and Facial Recognition Example with Obama

Simple Face recognition using OpenCV, Etienne Membrives, The Pebibyte


Xiangxin Zhu and Deva Ramanan, “Face Detection, Pose Estimation, and Landmark Localization in the Wild,” 2012 (IEEE Xplore)


3. Measuring “character” and “character-pair” screen time. How is this related to the Bechdel test? [2+ women, who talk to each other, about something besides a man] Can this be used to visualize the test, or its flaws as currently conceived? What is Bechdel version 2.0? [2+ women, who talk to each other, about something besides a man, or kids, or family] Can we use this tool to develop new forms?

4. How to auto-tag? How to populate the details of each scene in a tagged format? If the original sources have subtitles, is there a way to dump the subs to SRT and then populate the body of the WordPress post with the transcript? Or is there a way to use Google’s transcription APIs to try to upload/subtitle/rip?
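One naive sketch for the subtitle half of this question: flatten an SRT file into plain transcript text that could be pasted into a post body. The parsing is crude and the file name is hypothetical.

def srt_to_transcript(path):
    transcript = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for block in f.read().split("\n\n"):
            lines = [l.strip() for l in block.splitlines() if l.strip()]
            # Drop the cue index and the "00:01:02,000 --> 00:01:04,000" timing line.
            text = [l for l in lines if not l.isdigit() and "-->" not in l]
            if text:
                transcript.append(" ".join(text))
    return "\n".join(transcript)

print(srt_to_transcript("TOS-S01E03.srt"))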

5. Can the Netflix taxonomy be replicated? Given the patents, can some other organization scheme be devised?

Methodology:

0. Prerequisites

Software/hardware base is: Linux (Fedora 20) on Intel x86_64, using Nvidia graphics hardware and software. I.e., a contemporary rendering and film production workstation.

Additional software is required on top of this base: for instance, a g++ development environment, ffmpeg, OpenCV, and openFrameworks 0.8.0.

Make sure opencv is installed.

yum install -y opencv opencv-core opencv-devel opencv-python opencv-devel-docs

See OpenCV Configuration and Optimization Notes for more information about speeding up OpenCV on fedora.

1. Digitize selected episodes for processing with digital tools

Decrypt via MakeMKV. Compress to a 3k constant rip with HandBrake.

Using the 720p version of TOS in a Matroska media container. Downloaded SRT subtitles from fan sites. The media ends up being 960×720 at 24 frames a second.

2. Quantize each episode to a select number of frames.

Make sure ffmpeg is installed.

yum install -y ffmpeg ffmpeg-devel ffmpeg-libs


Sample math as follows. Assume a fifty minute show has 24 frames a second. That is:

50 minutes x 60 seconds in a minute x 24 frames a second == 72k total frames an episode.

Assuming a one-frame-a-second sample resolution gives 3k frames for the total set of frames in TOS episode one. Use ffmpeg to create a thumbnail image every X seconds of video, and set X to one second (one image every second).

Via:

TMPDIR=tmp-1                    # per-episode output directory
mkdir $TMPDIR;
# $1 is the input video file; the fps filter writes one frame per second of video.
ffmpeg -i "$1" -f image2 -vf fps=fps=1 ${TMPDIR}/out%4d.png;
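The same dump can be batched over every ripped episode; here is a small Python sketch (the episodes/ and frames/ directory layout is an assumption, not part of the published workflow).

import pathlib
import subprocess

# Hypothetical layout: ripped episodes in episodes/, frames written to frames/<episode>/.
for episode in sorted(pathlib.Path("episodes").glob("*.mkv")):
    outdir = pathlib.Path("frames") / episode.stem
    outdir.mkdir(parents=True, exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", str(episode),
        "-f", "image2", "-vf", "fps=1",
        str(outdir / "out%04d.png"),
    ], check=True)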

3. Sort through frames and set aside twelve frames of Kirk faces, twelve frames of Spock faces.

This is used later to train the facial recognition. Note: you generally need hundreds or even thousands of positive samples of faces; you should consider all race and age groups, emotions, and perhaps beard styles.

For example, meet the Kirks.

And here are the Spocks.

In addition, this technique requires a negative set of images: images that are from the media source but do not contain any of the faces that are going to be recognized. These are used to train the facial recognizer. Meet the non-K/S faces.
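To speed up the hand-sorting, one could first crop every detected face out of the sampled frames and then sort the crops into Kirk, Spock, and negative folders by eye. A sketch, again using the stock Haar cascade; the paths are placeholders.

import pathlib
import cv2

cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
outdir = pathlib.Path("crops")
outdir.mkdir(exist_ok=True)

# Walk the one-per-second frames of a single episode and save each detected face.
for frame_path in sorted(pathlib.Path("frames/TOS-S01E01").glob("*.png")):
    img = cv2.imread(str(frame_path))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    for i, (x, y, w, h) in enumerate(cascade.detectMultiScale(gray, 1.1, 5, minSize=(48, 48))):
        crop = cv2.resize(img[y:y + h, x:x + w], (128, 128))
        cv2.imwrite(str(outdir / f"{frame_path.stem}-{i}.png"), crop)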

4. Seed the facial recognition with the faces to recognize. Scan frames with facial recognition according to some input-and-expected-result algorithm, and come up with edit lists that can be used to pull the frames relevant to the character-pair.

Need either timecode or some other measure that can be dumped as an edit decision list or as specific timecode marks. Some persistent data structure? Edits made.
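One way to get those timecode marks, sketched with the contrib FaceRecognizer mentioned above: train it on the hand-sorted Kirk/Spock crops, then scan the one-per-second frames and record the seconds where both faces appear. Since the frames were sampled at one per second, the frame index doubles as the timecode. This assumes an OpenCV build with the contrib module; the distance threshold and file layout are guesses, not tested values.

import csv
import pathlib
import cv2
import numpy as np

LABELS = {"kirk": 0, "spock": 1}
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

def load_gray(path, size=(128, 128)):
    return cv2.resize(cv2.imread(str(path), cv2.IMREAD_GRAYSCALE), size)

# Train the LBPH recognizer on the hand-sorted crops (hypothetical training/ layout).
samples, labels = [], []
for name, label in LABELS.items():
    for p in pathlib.Path(f"training/{name}").glob("*.png"):
        samples.append(load_gray(p))
        labels.append(label)

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(samples, np.array(labels))

# Each frame was sampled at one per second, so the frame index is the timecode.
with open("edit-list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["second", "kirk", "spock"])
    for second, frame_path in enumerate(sorted(pathlib.Path("frames/TOS-S01E01").glob("*.png")), start=1):
        gray = cv2.imread(str(frame_path), cv2.IMREAD_GRAYSCALE)
        seen = set()
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
            label, distance = recognizer.predict(cv2.resize(gray[y:y + h, x:x + w], (128, 128)))
            if distance < 80:  # lower is a closer match; this threshold is a guess
                seen.add(label)
        writer.writerow([second, int(0 in seen), int(1 in seen)])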

5. Decompose episode into character-pair edit vids.

Use the edit decision list or specific timecode marks, as above. Automate ffmpeg to make the edits.
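A sketch of that automation: turn contiguous runs of “both faces present” seconds from the edit list above into clips. Stream-copy cuts land on keyframes, so a real pass would probably re-encode; the file names follow the hypothetical layout from the earlier sketches.

import csv
import subprocess

# Collect maximal runs of consecutive seconds where both Kirk and Spock appear.
runs, start, prev = [], None, None
with open("edit-list.csv") as f:
    for row in csv.DictReader(f):
        second = int(row["second"])
        if row["kirk"] == "1" and row["spock"] == "1":
            if start is None:
                start = second
            prev = second
        elif start is not None:
            runs.append((start, prev))
            start = None
if start is not None:
    runs.append((start, prev))

# Cut each run out of the source rip; -c copy is fast but only cuts on keyframes.
for i, (begin, end) in enumerate(runs):
    subprocess.run([
        "ffmpeg", "-ss", str(begin), "-i", "TOS-S01E01.mkv",
        "-t", str(end - begin + 1), "-c", "copy", f"pair-{i:03d}.mkv",
    ], check=True)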

6. Store in a WordPress container, one post per edit vid? Then, with another post, tie together all of a single episode’s edit vids into one linked post?

Legal

There are both copyright risks and patent opportunities in this line of inquiry.

Production Notes:

Further:
Cinemetrics
How Netflix Reverse Engineered Hollywood, Alexis Madrigal, The Atlantic, 2014-01-02

Marin, Rodeo Beach.

“Apparently every corner on Capp Street used to have these stamps.”

Here’s a great link for WPA activities in SF in the 1930s.


(via http://www.youtube.com/attribution_link?a=IANOco2NzpQ&u=/watch?v=vwJJjKxmmTI&feature=share)
