This is a story about getting side-tracked by a spur of the moment idea, spending many, many, many hours working on it, and immediately realizing that you can't really do anything with it because you operate in a world of compressed media.
With that in mind, pretty much every image linked in this is going to look odd in ways that prove the problem.
Friday night I'm enjoying my first real night off in ages. For the first time since May I'm going into a weekend without some huge interruption looming immediately over the horizon, which means video games and YouTube, and sitting there in my recommended queue is a fresh video from Clint at Lazy Game Reviews unboxing a new-old-stock 1980s Amdek amber monochrome monitor.
As it's playing, two thoughts cross my mind:
1) given that many old monochrome computer displays used RCA composite input, it should, in theory, be reasonably easy to feed a modern signal to an old amber or green phosphor monitor.
2) the aesthetic of said monitors shouldn't be terribly hard to replicate in editing. Heck, I bet I could make it into a look for livestreams!
Now, I don't have the stuff on hand to test 1 and it would take me some time to source a suitable monochrome monitor, but I sure as heck have the stuff for testing 2!
Making It Happen
So, fire up Resolve and get to work.
Conceptually the best way to do this would be to first create a suitable monochrome picture with the proper banding, and then tint that image to whatever colour it is that we want it to be (in this case amber).
So, that's where we start.
Of the three nodes, the first converts the image to monochrome using the RGB mixer, and the third (pictured, labeled 02) emulates a six-shade colour space by creating hard stair-stepping in the YRGB curve. The middle node (03) adjusts gamma and contrast to tweak the image being fed into the crusher that is node 02.
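That stair-stepping curve is essentially a posterize. As a rough sketch of the same idea outside Resolve (Python with NumPy; the evenly spaced six-level quantization here is my simplification, not the exact curve drawn in node 02):

```python
import numpy as np

def crush_to_six_shades(gray):
    """Quantize a [0, 1] grayscale image to six evenly spaced levels,
    mimicking the hard stair-steps drawn into the YRGB curve."""
    levels = 6
    banded = np.floor(gray * levels).clip(0, levels - 1)  # which band?
    return banded / (levels - 1)                          # back to [0, 1]

# a smooth ramp collapses into six flat steps
ramp = np.linspace(0.0, 1.0, 16)
print(crush_to_six_shades(ramp))
```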
The second stage involves creating the amber overlay, drawing a second line from the source to a series of nodes that first create an all-amber image and second apply some more granular tweaks to the red shift in the shadows before feeding into a Layer Mixer node set to Color Composite Mode.
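Numerically, that composite boils down to multiplying the crushed grayscale into a flat amber layer. A crude stand-in (the amber value below is my own guess at a phosphor-ish #FFB000, not a number pulled from the actual grade):

```python
import numpy as np

AMBER = np.array([1.0, 0.69, 0.0])  # assumed amber, roughly #FFB000

def tint_amber(gray):
    """Multiply a [0, 1] grayscale image into a flat amber layer,
    a rough stand-in for the Layer Mixer's colour composite."""
    return gray[..., np.newaxis] * AMBER  # result has shape (..., 3)

# full white becomes full amber, black stays black
print(tint_amber(np.array([[0.0, 1.0]])))
```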
At this point I'm reasonably happy with the tone and banding, so now we just need to get it working with OBS. To turn it into a LUT (look up table) that OBS can use for streaming we just need to export the .cube format LUT from Resolve and apply it to the Hald16 type LUT that OBS uses.
Hald LUTs are really simple: just a .png image. The two blocks below show the neutral Hald16 LUT on the right, and the same image after applying my .cube LUT on the left.
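If you ever need to regenerate a neutral Hald image, it's just every possible colour laid out in a fixed order. A sketch, assuming the common red-varies-fastest pixel ordering (the level parameter is generic; worth checking the ordering against the file OBS actually ships before relying on this):

```python
import numpy as np

def neutral_hald(level):
    """Build a neutral (identity) Hald CLUT as an RGB array.

    A level-N Hald image is N^3 x N^3 pixels covering every combination
    of N^2 steps per channel; applied to footage, it changes nothing."""
    steps = level ** 2          # samples per channel
    side = level ** 3           # the PNG is side x side pixels
    idx = np.arange(side * side)
    r = idx % steps                      # red varies fastest
    g = (idx // steps) % steps
    b = idx // (steps * steps)           # blue varies slowest
    img = np.stack([r, g, b], axis=-1) * (255.0 / (steps - 1))
    return np.round(img).reshape(side, side, 3).astype(np.uint8)

# with Pillow: Image.fromarray(neutral_hald(16), "RGB").save("neutral.png")
```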
Brought into OBS and applied, it's not quite there. The LUT itself looks fine and is behaving the way it should, but the image still fundamentally looks like a 1080p video source with an aggressive filter applied, not like authentic Hercules graphics, which had a maximum resolution of 720x348.
Now, we're not looking for pure authenticity, just a general feel, so we're not going to match that resolution, but just aim for the ballpark using numbers that are convenient for our conversions.
To force down-res the video in OBS we create a new scene where we place the camera source, add the LUT, then transform the source to 1/16th its original size (divide both length and width by 4). That scene is then nested into another new scene where the nested scene is scaled back up to its original size (take scene resolution and multiply length and width by 4). Setting Scale Filtering to 'Point' gets us some nice chunky pixels with hard edges, which is getting it much closer to looking and feeling right.
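The nested-scene trick is just a nearest-neighbour downscale followed by a nearest-neighbour upscale. A toy equivalent on a raw image array (sampling the top-left pixel of each block, which is one way Point filtering can land):

```python
import numpy as np

def pixelate(img, factor=4):
    """Emulate the OBS nested-scene trick: shrink by `factor` with
    point sampling, then blow the result back up into chunky blocks."""
    small = img[::factor, ::factor]              # keep one pixel per block
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

# every 4x4 block in the output is a single flat value
frame = np.arange(64, dtype=np.uint8).reshape(8, 8)
print(pixelate(frame))
```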
Okay, rad, but it's still not quite there yet.
You know what this needs?
S C A N L I N E S
Well, scanlines are easy to fake. We already know that our resolution is down-then-up by a factor of 4, so every visual pixel is made up of 16 real pixels, a 4x4 block. So over in Photoshop we make a new document with the same resolution as our OBS canvas, draw a little shape in one corner so that we've got a 3x3 empty block with two black edges, select it, turn it into a pattern, and then flood-fill the entire document. Save that as a transparent .png, drop it into OBS as an image overlay, and it looks rad!
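The same grid can be generated programmatically instead of via Photoshop's pattern fill. A sketch of one way to do it (RGBA array in NumPy; saving through Pillow is one option):

```python
import numpy as np

def scanline_overlay(width, height, cell=4):
    """Build an RGBA grid overlay: each cell x cell tile is transparent
    except for one black row and one black column, leaving a 3x3 clear
    window per 4x4 block at the default cell size."""
    tile = np.zeros((cell, cell, 4), dtype=np.uint8)  # transparent black
    tile[0, :, 3] = 255   # opaque top edge of each tile
    tile[:, 0, 3] = 255   # opaque left edge of each tile
    reps_y = -(-height // cell)  # ceiling division
    reps_x = -(-width // cell)
    return np.tile(tile, (reps_y, reps_x, 1))[:height, :width]

# with Pillow: Image.fromarray(scanline_overlay(1920, 1080), "RGBA").save("grid.png")
```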
There's just one problem.
It only looks right at very exact resolutions.
All those fine gridlines are subject to hella artefacting when viewed at any on-screen resolution that isn't a clean integer ratio of the original (1/8, 1/4, 1/2, 2x, 4x,...), so unless you're clicking through to view all of these images at 1:1 there's a good chance that all of these scanline images look abysmal and weird. What this means for Twitch and YouTube broadcasting is that it would either look correct or really, really weird depending on the mode or device that the viewer is watching on.
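The wobble is easy to show in one dimension: put a line every 4 pixels, nearest-neighbour-resample it, and the line spacing only stays uniform at clean ratios. (This is my own toy model, not how any particular browser or video scaler actually works.)

```python
import numpy as np

def line_positions_after_resample(src_len, dst_len, period=4):
    """Put a 'scanline' every `period` pixels, nearest-neighbour
    resample from src_len to dst_len, and report where lines land."""
    src = np.zeros(src_len, dtype=bool)
    src[::period] = True
    idx = np.arange(dst_len) * src_len // dst_len  # floor sampling
    return np.flatnonzero(src[idx])

# clean 2:1 ratio: line spacing stays uniform
print(np.diff(line_positions_after_resample(64, 32)))
# awkward 64:44 ratio: spacing wobbles, which reads as moiré on screen
print(np.diff(line_positions_after_resample(64, 44)))
```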
And that's before we even need to deal with video compression.
The x264 encoders used by Twitch and YouTube hate fine lines: compression shares values between neighbouring pixels, so the pervasive black lines bleed into the bright amber shades and leave them washed out and dulled.
And that's just the compression on the signal sent to Twitch; who knows how garbled it would look by the time Twitch adds its own encoding on top.
So that's all of it: a lot of work and a lot of steps refining a really cool look that's ultimately incompatible with how the delivery infrastructure works.
Oh, and all the same artefacting problems apply if you only use horizontal scanlines, just in one direction, so instead of getting cross-hatching and plaid patterns you get phase waves: