This is a personal blog; see Think Hour for ad hoc quasi-diary writings and Big Ideas for my ratified longform items. I will see you there.


2020-09-04


Fourth post straight. Show me how to keep it going. You gotta be.

Obit Obsessive // Ronnie McNutt

I watched a guy named Ronnie McNutt die last night. He made his ultimate commitment over a live-stream at the end of the month just past, and I saw a recording of it. I’m very fascinated, and a little obsessed, with online obituaries: all the old and now defunct GeoCities pages dedicated to people who have passed on, replete with guestbooks and the like, give a lot of gravity to what you might otherwise only be aware of as a small, faceless component of a huge statistic or abstraction. People die, we all will one day, and we are good at reconciling ourselves to the unknown and focusing on all the tasks, obstacles and passions we have been ordained with.

Ronnie

Without expounding on the general interest death rouses, I want to make a brief note about Ronnie. His death footage has been doing the rounds, and while I’ve seen plenty of gore and death online, his debacle was one of the more chilling things I’ve ever seen. Definitely not in terms of sheer body horror or grotesque detail, but in the nature of the event (his phone ringing off the hook while he talks, and the dripping sounds especially). What of the man he was?

Ronnie with some babe

At first glance he looks like a run-of-the-mill Soylent-slurping liberal “nu-male”, but we’re dealing with a military veteran here, and of course he’s a gun owner. He seems to have been employed in various production plants in Mississippi, such as an Under Armour facility and a Toyota plant. I’m not sure in what capacity he was employed; he muses about his management abilities in the tape itself and seems to have experience running conventions, but he looks like he could do anything from security to human resources roles. These places of employment are something you always hear about during U.S. election years: those “blue collar” workers and gross counts of “jobs” lost or gained, usually via large facilities like this. Given that his recent firing allegedly played a hand in what he did, it lends greater emotional weight to what otherwise felt like simple rhetorical tools. That, or the indirect effects of COVID-19.

Ronnie at Disneyland

So I went to Disneyland for the first time in my 32 years today. It was amazing. I got to ride several iconic rides and a few new ones. I ate a Monte Cristo sandwich at the legendary Blue Bayou restaurant. I got to visit Galaxy’s Edge, and grab a drink at the Cantina. After all I did today, I can say that Walt’s dream for this park is still alive. It really is the happiest place on Earth!

R.I.P. (He is survived in part by a nephew, “Chance Pounds”. What a cool name.)

Facial Future // Motion Portrait

I’ve been noticing a lot of GAN (or other machine learning) machinations proliferating across my media feeds lately. I don’t remember exactly when the whole “deepfake” thing blew up, but it feels like a while ago. Yet it seems we are entering a golden age of uses for these things. I’m now recalling FaceApp, which possibly uses a similar technique, and not long ago I was browsing for tools on itch.io and elsewhere, noticing tools that make (apparently very effective) use of these techniques, or at least of data and algorithms derived from them, I suppose. They also seem to be used in Snapchat/Instagram filters.

Mona Lisa says her piece.

I ended up at, and subsequently bookmarked, the website “Motion Portrait” after trying to find the origin of the recent 2D-image-mapped-to-facial-animation thing. I don’t think this is the actual source, but the real things are quite impressive: the clips gain some sense of depth, seemingly by automatically deducing where the face falls back in the image, mapping that onto a pre-recorded or live user’s face, and animating it, extrapolating visual data not present in the original image as the subject speaks and moves. The effect is obviously limited so that the extrapolation remains within realistic bounds, and as a result the clips don’t look especially realistic. They do look very neat though, almost identical to this demonstration from Samsung last year. Whatever the case, the point here is to look at this other website I found, for the sake of nothing but curiosity.
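
To make that concrete, here is a minimal sketch of the crude version of the idea, under my own assumptions rather than anything Motion Portrait has published: detect landmarks in a still portrait and in one driving frame, fit a transform between the two sets, and warp the portrait to follow. The filenames are stand-ins, and MediaPipe/OpenCV are just the tools I would reach for.

    # Toy "drive a still portrait with another face" pipeline: find facial
    # landmarks in both images, then warp the portrait so its landmarks
    # chase the driving frame's. Run once per driving frame for motion.
    import cv2
    import mediapipe as mp
    import numpy as np

    face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)

    def landmarks(image):
        """Return an (N, 2) array of pixel-space face landmarks, or None."""
        h, w = image.shape[:2]
        result = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            return None
        pts = result.multi_face_landmarks[0].landmark
        return np.array([[p.x * w, p.y * h] for p in pts], dtype=np.float32)

    portrait = cv2.imread("portrait.jpg")      # still image to animate (stand-in)
    driving = cv2.imread("driving_frame.jpg")  # one frame of the driving face (stand-in)

    src, dst = landmarks(portrait), landmarks(driving)
    if src is not None and dst is not None:
        # Fit a similarity transform (rotation, scale, translation) from the
        # portrait's landmarks to the driving frame's, then warp the portrait.
        M, _ = cv2.estimateAffinePartial2D(src, dst)
        if M is not None:
            h, w = portrait.shape[:2]
            cv2.imwrite("animated_frame.jpg", cv2.warpAffine(portrait, M, (w, h)))

A whole-image similarity transform only nods and tilts the head rigidly; everything that makes the commercial clips look alive (per-region deformation, the depth fall-off, the made-up pixels) is exactly what this sketch leaves out.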

Points to Points

Uninterestingly (as crazy as that might be to say), they seem to offer many of the already discussed and well-known solutions as they pertain to digital fun with the human face: facial analytics used to create aged portraits, virtual makeovers, generation of stylized characters based on a face, as well as motion tracking/mapping of the face into three dimensions for things like animated avatars and face replacement. They’re a Japanese company, and this is reflected in the majority-Japanese partners in their showcase, which shows off the many promotional campaigns, apps and tools they’ve worked on or had their technology used in. I won’t lie: in my head I had the idea of this whole field being some kind of hyper-focused industry or contained research project at US firms or colleges. The reality is that this endeavour and application of neural network technology has global appeal, and is rooted in, and the culmination of, a broad range of work on everything from image analysis techniques through to simple mathematics. This Japanese company has been working on commercially viable projects since at least 2008.

what

Looking at their overview page to find key players, I’m a little stuck. It might be an auto-translate error, but I get different names on the auto-translated Japanese site and the official English site. The CEO listed on the former leads to Takehiko Terada, who seems to definitely be involved, at least in a business sense (though it says a company called “ax inc.” actually has a 100% stake); the latter name (Kenji Terada) leads to two people: a video game/anime person, and a researcher with several very interesting, and very much related, articles attached to his name. So while I can’t say for sure that this final individual is involved, I have to explore some of these. It fits, it’s all coming together.

Obama Smiles

Look at the latest paper listed. Surveillance of pedestrians via closed-circuit camera is some CCP-style research. They love it. Well, the technology isn’t inherently evil or malicious of course, and the stated application is altruistic:

By detecting pedestrians, we can control the time of green light for the handicapped and the aged. First, measurement areas are defined along white line of crosswalk. Next, make space-time images that a slip of image on measurement area is arranged. Finally, pedestrians are detected by processing the image.
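
The abstract’s English is rough, so here is my reading of it as a sketch, with every specific (the footage path, the row standing in for the white line, the threshold) invented by me: sample a strip of pixels along the crosswalk line each frame, stack the strips over time into a “space-time image”, and detect pedestrians as streaks of change in it.

    # Build a space-time image from a fixed CCTV view of a crosswalk and
    # flag frames in which something moved across the measurement line.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("crosswalk.mp4")  # hypothetical CCTV footage
    ROW = 240                                # y-coordinate of the measurement line

    strips = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        strips.append(gray[ROW, :])          # one strip of the measurement area
    cap.release()

    # Each row of the space-time image is the measurement line at one instant.
    spacetime = np.vstack(strips)

    # Pedestrians crossing the line appear as diagonal streaks; a simple
    # frame-to-frame difference plus threshold exposes them.
    motion = np.abs(np.diff(spacetime.astype(np.int16), axis=0)) > 25
    crossings = motion.any(axis=1)           # frames where something crossed
    print(f"movement on the line in {crossings.sum()} of {len(crossings)} frames")

Presumably the slope of a streak in the real system also yields a pedestrian’s speed and direction; this only flags that something crossed the line.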

Laser Throat

Let’s go further back, and lend my waxing about the deep roots and long culmination of this tech some credence. To do this we can visit a time before I was born. The year was 1993, and our researcher contributed to an article/paper on the automatic identification of human faces.

In this paper, the authors describe a method for the identification of human faces. In this method, the fiber grating vision sensor which has been developed by the authors is employed for the three dimensional shape of the faces. Before the identification of the face using the three dimensional shape of the face, it is necessary to calibrate the position and direction of the facial data. In this method, a set of the directions of normal vectors at data points in the facial surface is obtained, and calibrations are carried out in accordance with the extend of errors in the sets.
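
As before, a guess at what that calibration step amounts to, sketched with numpy; the point-cloud file is a stand-in, and the single averaged normal is a shortcut (the paper works with a whole set of normals at data points):

    # Given a 3D facial point cloud (what the fiber grating sensor would
    # recover), calibrate position by centering it, estimate an overall
    # surface normal, and rotate so that normal faces the camera before
    # any identification is attempted.
    import numpy as np

    def rotation_between(a, b):
        """Rotation matrix taking unit vector a onto unit vector b
        (Rodrigues formula; assumes a and b are not opposite)."""
        v, c = np.cross(a, b), np.dot(a, b)
        vx = np.array([[0, -v[2], v[1]],
                       [v[2], 0, -v[0]],
                       [-v[1], v[0], 0]])
        return np.eye(3) + vx + vx @ vx / (1 + c)

    points = np.load("face_cloud.npy")        # (N, 3) scan, stand-in file
    centered = points - points.mean(axis=0)   # calibrate position: centre the data

    # Crude direction estimate: the cloud's smallest principal axis, i.e. the
    # direction the roughly face-shaped sheet of points is thinnest along.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1] / np.linalg.norm(vt[-1])

    # Calibrate direction: rotate that normal onto the camera's z-axis.
    R = rotation_between(normal, np.array([0.0, 0.0, 1.0]))
    aligned = centered @ R.T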

Laser Splits

That sounds seriously similar to some of the things we have explored tonight. Pretty interesting. Before I finish thinking, what the hell is a “fiber grating vision sensor”? Well, if some of the more readable images Google serves up are accurate, it simply appears to be a grating that splits a laser into an array of beams, which is then read by a standard optical camera. I imagine this would make analysis of depth and the like much easier, something for the system to latch onto with certainty; a toy sketch of the idea follows below. Now that machines have the capacity to learn and compile data along specific lines all by themselves, as explored above, I imagine solutions that would previously require such a sensor will become all but outdated, at least if accuracy is not an object. When was the last time a camera couldn’t detect a face, for instance? Well, in this case I guess we’re really talking about position and orientation, something much more complex, but come on. Okay, I need to stop my layman stream of probably incorrect consciousness right here.
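
To spell out why a projected dot array makes depth easy, here is the standard stereo/structured-light triangulation relation with made-up sensor numbers; none of this is from the paper:

    # Each laser dot lands on the image shifted sideways by an amount
    # (disparity) that depends on the distance of the surface it hit.
    FOCAL_PX = 800.0    # camera focal length in pixels (assumed)
    BASELINE_M = 0.05   # projector-to-camera separation in metres (assumed)

    def depth_from_disparity(disparity_px: float) -> float:
        """Depth in metres of the surface a dot landed on, from its shift in
        pixels relative to where it would land on a surface at infinity."""
        return FOCAL_PX * BASELINE_M / disparity_px

    # A dot shifted 40 px sits at 1.00 m; shifted 80 px, at 0.50 m. The
    # grating's whole job is to hand the camera dots whose shifts are easy
    # to measure, with no learning required.
    for d in (40.0, 80.0):
        print(f"{d:5.1f} px disparity -> {depth_from_disparity(d):.2f} m")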