This is a personal blog; please see Think Hour for ad hoc quasi-diary writings and Big Ideas for my ratified longform items. I will see you there.


2020-09-05


Let me speak my piece. Let’s go, give me five. Alright…

In The Shade // The Book of Shaders

Shader Editor

I’m trying to make good on my vow to delve into the world of shaders, and specifically GLSL (the OpenGL Shading Language), so I’ve been spending some time enriching myself with The Book of Shaders. It’s quite nice, but a lot of the time goes on utilizing my small dry brain to re-understand maths (very simple functions, plus some computer versions I’ve never heard of) to a satisfactory level. You know, it’s always annoyed me when a teacher or a curriculum decides (probably for good reason) that there’s no need for you to understand, or at least know the origins of, something; it ignites some kind of obsessive annoyance inside me. You use the sin function on a calculator more than the damn equals button by the time you’re 16, and yet I don’t recall ever being given even a hint as to how it works. I appreciate this is science ratified by great big brains before you, but my God.

Functions

Before I push out a paragraph on-topic, I want to note what I gleaned about how a calculator does its magic when the trigonometry buttons are pressed. According to the article I just linked, once the guy gets through his Taylor series approximation spiel, it appears that calculators contain a specialized chip whose ROM/registers hold instructions for the iterative “CORDIC” algorithm, an efficient method for getting accurate approximations of things like square roots and trigonometric functions. Apparently the computation amounts to repeatedly rotating a vector using only shifts, additions, and a small table of constants, which you can think of as multiplying by complex numbers of ever-smaller angle.
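
Since I’ll have forgotten this by next week, here’s my own rough sketch of CORDIC’s rotation mode in Python. The function name, iteration count, and float arithmetic are mine; an actual calculator chip does this in fixed point and with far more care.

```python
import math

def cordic_sin_cos(theta, iterations=32):
    # Arctangents of 2^-i: the fixed rotation angles, plus the gain correction K
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    k = 1.0
    for i in range(iterations):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = 1.0, 0.0, theta          # start on the x-axis; z holds the remaining angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0    # rotate toward whatever angle is left
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * k, x * k                # (sin, cos), after undoing the pseudo-rotation gain

print(cordic_sin_cos(math.pi / 6))     # roughly (0.5, 0.8660)
```

As I understand it, this only converges for angles up to roughly ±100°, so a real chip reduces the argument into that range before starting.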

Mattox

While I was reading through the first few sections of the shader book, I was compelled to grab an offline version of it along with some of the other useful web apps it referenced. After cloning their GitHub repositories I encountered PHP. I don’t know much about PHP; I’ve heard of it and know it’s web-related, but it definitely lets you host these web apps locally for offline access, which is great. I also had some issues with an .ini file, but whatever. I’m really stretching this topic thin, because I honestly haven’t gotten very far.

Like chefs that collect spices and exotic ingredients, digital artists and creative coders have a particular love of working on their own shaping functions.

I enjoyed the notion, conveyed in the quote above, of creating and saving prefab pieces for later use; it’s something I’m trying to consider when working on things, so I can get more out of them, without straying too far from my love of autistic, high-time-investment bespoke work and my hatred of preset-pipeline committee churning. There are a lot of other cool tidbits. A quick sketch of what I mean by a reusable shaping function is below.
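
Just so future-me remembers what a “shaping function” even is: a small reusable curve that remaps a value in [0, 1] onto another value in [0, 1]. A minimal Python sketch of one (the power-curve form and the parameter name are my choice, not something lifted from the book):

```python
# A reusable "shaping function": remaps t in [0, 1] onto [0, 1] with a tunable curve.
# k > 1 eases in (slow start), k < 1 eases out (fast start), k = 1 is linear.
def ease_power(t: float, k: float = 2.0) -> float:
    t = min(max(t, 0.0), 1.0)   # clamp, as you would in a shader
    return t ** k

# Rough GLSL equivalent: pow(clamp(t, 0.0, 1.0), k)
if __name__ == "__main__":
    for i in range(6):
        t = i / 5
        print(f"{t:.1f} -> {ease_power(t, 3.0):.3f}")
```

The point, as I read it, is that once you have a shelf of these stashed away, you can drop them into any shader that needs a gradient, a pulse, or a falloff.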

Media Streams // Golan Levin

Golan Levin is cited in The Book of Shaders as a source of function knowledge. He’s got a nice set of them, but I enjoyed having a look at his diverse portfolio so much that I signed up to his mailing list. He’s a computer artist of an academic bent. One of the interesting projects I found wasn’t listed on his main page but on the more extensive ‘projects’ page, and it goes way back to 1994. The project is called “Media Streams” and it pertains to creating a visual language for describing media for the purposes of reference and archival. It all sounds a bit convoluted and unintuitive once you look into it further, but it’s an interesting concept that Mr. Levin contributed many icons toward. He displays many of them in a contact-sheet format on his own site, alongside a broken link to further details.

Media Stream

If you’re a Power User you’ll not be impeded in your data trawling by a broken hyperlink. There’s some further info on the project provided on an antiquated webpage with a very nice .edu domain. Yeah, I’m just reading up on some projects going on over at MIT, and you? Let me break out a quote.

In the future, annotation— the description of the structure of media content— will be fully integrated into the production, archiving, retrieval, and reuse of media data. However, there will remain many annotations which computers won’t be able to automatically encode. A central challenge for computational media technology is to develop a language of description which both humans and computers can read and write and which will enable the integrated annotation, creation, and reuse of media data.

Year 2000? The future.

Apparently the project was featured in “Readings in Human-Computer Interaction: Toward the Year 2000”, a publication from 1995, via an article written by one “Marc Davis”. I presume Marc is the project lead; you can see the full article in PDF form here and divine that it is, in fact, a full-fledged “paper”. This provides much-needed detail: the system they worked on was for annotating video content iconographically, be it shot by shot or in broader envelopes of video stretching across scenes. The icons themselves I would compare to the content advisory symbols on physical media, you know, like what PEGI does (a fist to symbolize violence, a needle for drugs).

PEGI EIGHTEEN

It really is quite interesting. It’s like they distill sequences and shots down to the primitive entities present and actions occurring, and end up with a “script” composed of little representative icons. So much nuance goes missing in the process that it could be quite enjoyable in its own right to watch someone try to infer and interpret their way back to the source. I wonder if they could do anything cool by powering this concept with modern machine learning or object detection, which can quite easily spot patterns and locate objects.

In studying the semantic and syntactic properties of video we have developed both a representation and an interface which enable content-based annotation, retrieval, and re-purposing. In the summer of 1994, Media Streams was subjected to an 8 person, 3-day user test that yielded promising results: we found that the system is learnable; that users reuse each other’s annotation effort; and that, unlike keyword-based systems, different users’ descriptions of the same footage are semantically convergent (Davis 1995).

Interesting that the conclusion kind of supports that oft-referred-to idea that images, icons/emoji, or glyphs can effectively transcend lexical differences and the compounding errors of interpretation wrapped within and without them. Or not, I don’t really know. I’m going to see if I can’t throw in some nearest-neighbour blow-ups of a select few of Mr. Levin’s icons using only my viewer of choice, Nomacs ImageLounge.

Eye

Four

Nine

Coundoom

I love these things. Anyway, I was going to talk about Esperanto, but I’ve spent long enough on this post. See-ya.