Content Security - If You Work In Film And TV Post Read This Now

Over the years, the security requirements for post houses around the world have increased to the point where most small, and even some medium-sized, facilities find the hoops they have to jump through too numerous and too costly. But then COVID happened, and suddenly it seemed like all of those security requirements went by the wayside in the scramble to get people working from home. In this article, Reid Caulfield, founder of Central Post L.A. and Westmount Digital in Los Angeles, explains the history and background of the changes to content and data security from the perspective of a Hollywood-based facility. Prepare to be surprised at what Reid uncovers.

A Brief History

To start with, and to put all of this in some kind of perspective, here is a brief history of creative content and asset security in Hollywood:

  1. For the first 100 years, there was very little thought given to asset security, meaning that basically, there was none.

  2. Then around 2012-2014, large studios & broadcasters started to put some security measures in place.

  3. Then, in late 2016, there was a Netflix content hack, followed in 2017 by a mad, expensive dash to lock everything down.

  4. Finally, in 2020, COVID-19 hit and all of a sudden no one cared about asset security. Then they did. Now, the pendulum is starting to swing back to somewhere in the middle.

A brief note: nothing I talk about in this article is proprietary to any particular company, network, broadcaster or “streaming service”, and all of the policies that I refer to are freely available on the Internet or with a Google search. I’m also aware that some of the intended Pro Tools Expert audience may be familiar with some of what I share in this article, because you have probably worked on shows that called for some or all of these measures.

Preamble: Apologies In Advance From Hollywood

I’m going to tell a true story here at the beginning of this article, and my plan is that this story will serve as the framework for the rest of what follows. And while I will try to be as brief as possible in the telling of this story, I cannot promise the same of what follows it. It’s long. And intricate and maybe a little too ‘inside baseball’ for some people in the world. But before I tell the story, I want to acknowledge a turn of phrase that I will use fairly frequently:

“What we do in Hollywood is…”

Although Pro Tools Expert is based in the United Kingdom, over 50% of the community are based in the US and Canada (where I am originally from). In my objective opinion, the post-production work coming out of London, Toronto, Munich, New York, Manchester, Chicago, South Africa and Sydney is above top-notch. We in the U.S. learn from you at every turn. Maybe, if we’re lucky, you learn a little bit from us. Maybe. Hollywood is not the holy grail of post-production. It may have been once, but not any longer - not for decades, actually.

The recording and post-production community in London is not entirely unknown to me (long story), although I have never worked in that industry anywhere but Canada and the U.S. I have always been quite in awe of how our industry has flourished in London. Many years ago, on an exploratory mission of sorts, I was introduced to the post community in Soho, and from that moment on, for years, all I wanted to do was to work in London, in Soho, on television and film sound.

So when I say, in this article, “What we do in Hollywood is…”, please understand that I am not saying that Hollywood, or the people or facilities there, are better than anywhere else or are to be mimicked, or that what we do sounds better. I am simply trying to frame the sometimes very odd inner workings of this physical place (Los Angeles, Hollywood, etc) and the conventions that are commonplace here. Here’s an example:

I started mixing for a large broadcaster in Canada (Montreal) in 1984: 16-track and 24-track analogue, and it was called ‘Sweetening’ back then. I edited all the sound, then I mixed all the sound, and then I laid all the sound back to 1” videotape. All by myself. When I got to Los Angeles in 1991, imagine my surprise when I came to learn that all of these things were separate: dialog editorial, sound effects editorial, music editorial - and then the mixing of all of those was a separate job again. The reason is twofold: the unions, and, well, “That’s how it’s done in Hollywood.”

So, in conclusion, when I use that phrase, I am not talking down to anyone. I’m only saying that “this is how this particular thing is done in this particular place on the map.” Please hear me when I say “It comes from that particular point of view”.

A True Story To Put Everything Into Context

Our facility - Central Post L.A. - opened just over three years ago in August 2017 and very quickly we entered into post-production contracts with various content producers in town. Two years ago, one of those content producers started shooting a series for one of the biggest ‘Streamers’ in the world. It was a half-hour comedy show and it had been on our radar for a year by the time it actually landed at our facility, in October 2018.

Unusually, our facility had been granted permission by the streaming company to fulfil the entire post-shoot chain, everything from digital dailies, backups and backup distribution, picture & sound editorial, through to 4k color finishing, 5.1 sound mixing and IMF packaging. The production even edited the pictures at our facility, as at the time, we were also in the business of renting out picture edit suites and equipment to various productions (Premiere Pro, Avid Media Composer or Apple Final Cut Pro).

The show was what we call a ‘single-camera comedy’ - in other words, no audience was present. In Hollywood, shows that shoot with an audience (e.g. something like “Friends”) are known as ‘three-camera’ shows. Never mind that there were actually three cameras rolling on our show; it was referred to as a ‘single-camera’ comedy.

For six months we planned the pipeline with the production and the streaming company. It was to be a 4k shoot (Alexa or Red, I can’t remember), shot on a set about seven miles from our facility. Our storage and network infrastructure were obviously in place and our facility was fully secure, a condition of the streaming company that had commissioned the show. What we had no control over, obviously, was anything that occurred on-set: things like data ingest, egress and backups. We had control of everything that happened to the data from the moment the drives were handed off to us. Now, planning for a 4k pipeline is obviously fairly complex from a storage, network and security infrastructure standpoint, but everything was in place on our end.

At the end of each shoot day, we were to receive the three cameras worth of data from that day, upon receipt of which, at our facility, we would:

  • Copy the data and send out the offsite drives, if they hadn’t already done so from the set (they hadn’t);

  • Then, all of the day’s data needed to be copied to our public-facing server system, which has its own RAID 5 drive array;

  • Once the data had been ingested onto this sequestered drive array, the material was scanned for viruses;

  • Next, the data was ‘pulled’ from the sequestered system onto our main content server, and the original transport drives were returned to the set for the next day’s use;

  • HD proxies would then be made off of the original footage for the editors to work with. That part was automated.

  • Over the next couple of hours, in the middle of the night, our ‘Nearline’ backup server would kick in and start backing up all the new data - 8k originals, 4k and HD proxies.

  • Our DIT would come in very early in the morning and begin preparing the digital dailies for the producers to start reviewing.

For the purposes of this article, I’ll stop there. All of these machinations had been accounted for. 10Gb network segments for some of the pipeline, 1Gb segments for others. All server content pools had already been set up and ‘air-gapped’ from our facility’s operating storage pool, as per asset security guidelines from the Streamer mothership. I mean, we had an ongoing sound facility to run off the same network and server, for goodness sake. Everything had been worked out.
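To make that overnight routine a little more concrete, here is a minimal sketch of what one ingest pass might look like in script form. Everything in it - the mount points, the file extension, the ClamAV-style virus scanner - is an illustrative assumption, not our actual tooling; the point is simply the order of operations: verified copy onto the sequestered array, scan, then pull to the content server.

```python
#!/usr/bin/env python3
"""Minimal sketch of one nightly ingest pass: copy from a transport drive to the
sequestered I/O array, verify by checksum, virus-scan, then hand off to the content
server. Paths, extensions and the scanner command are hypothetical."""

import hashlib
import shutil
import subprocess
from pathlib import Path

TRANSPORT = Path("/Volumes/SET_DRIVE_A")         # drive handed off from set (hypothetical)
SEQUESTERED = Path("/Volumes/IO_RAID/incoming")  # outside-facing RAID 5 staging array
CONTENT = Path("/Volumes/MAIN_SERVER/show_raw")  # main content server mount

def sha256(path: Path, chunk=8 * 1024 * 1024) -> str:
    """Checksum a file in large chunks so multi-GB camera files don't exhaust RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verified_copy(src: Path, dst_dir: Path) -> Path:
    """Copy one file and confirm the destination checksum matches the source."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / src.name
    shutil.copy2(src, dst)
    if sha256(src) != sha256(dst):
        raise IOError(f"Checksum mismatch after copy: {src.name}")
    return dst

# 1. Verified copy from the transport drive onto the sequestered staging array.
for clip in sorted(TRANSPORT.rglob("*.mxf")):    # camera originals (extension is an assumption)
    verified_copy(clip, SEQUESTERED)

# 2. Scan the whole staging array before anything is 'pulled' to the content server.
#    Assumes a ClamAV-style command-line scanner; swap in whatever your facility uses.
subprocess.run(["clamscan", "-r", str(SEQUESTERED)], check=True)

# 3. Only after scanning does an authorised operator pull the files onto the content server.
for staged in sorted(SEQUESTERED.glob("*.mxf")):
    verified_copy(staged, CONTENT)
```

Notice that every step is another full read of the same data through the same drives and network segments - which is exactly where the physics problem described below comes from.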

Then, at almost the last minute, word came down that the show would be shot in 8k, not 4k.

I don’t know who made this decision. A pox on them, whoever it was. Obviously, all it took was a calculator to realize that our entire pipeline needed to be re-thought, which is what we did. Allocated storage pools, network segment speeds; everything needed to be re-calculated. In five days. “Well”, we thought; “It really comes down to how much footage they’re shooting on any given day.” 

The answer was 3-6TB per day. 

Complicating things even further, the production’s on-set DIT had never worked with an 8k ingest pipeline. All of a sudden, data operations that were contracted to take place on-set were pushed down the line to our facility. Backup copies, offsite copies - the original raw drives were sent to our facility, and all of those operations happened from there. They paid us more for the services, but that wasn’t the issue.

The issue was physics. 

Now, it’s worth remembering at this point that all of this is security-related. I wouldn’t be darkening your doorstep with it if it weren’t. There are two notions of content security that we here in Hollywood need to be mindful of:

  1. TPN (Trusted Partner Network). This is a very large set of guidelines for physical and data security.

  2. Whatever the people who are paying for the show say (e.g. the network or streaming company). This is also a large complex document.

In the U.S. and Canada, there’s a big electronics chain store called “Best Buy.” I love Best Buy. Seriously, you can’t keep me out of that place. I guess the closest thing in Britain would be Curry’s? Now, call me a snob, but the one thing that I do not rely on Best Buy for is professional-grade content storage. My Pro Tools rigs do not have drives that came from “Best Buy” hanging off of them.

To be fair, the production also had not planned for an 8k shoot, bless them, and so all of a sudden, they were literally buying every hard drive at every “Best Buy” within a 30-mile radius of the soundstage. Consumer drives. Was the camera data being ingested straight to these drives, or were the consumer drives just acting as on-set copy mechanisms? Were these drives encrypted? Are you kidding? Remember those Western Digital consumer drives that looked like books? I think they were called “MyBooks” or something. It was like that, if not that exactly. That’s what started showing up at our facility. Oh, there were also some proper G-Raids, but they only had enough of those to accommodate a 4k shoot on a daily basis and they didn’t want to spend that money again to account for twice the data coming off the set.

And that’s cool. But these were originals showing up on our doorstep, we would later come to find out. They would shoot onto a proper on-set drive array (an array whose storage had been originally planned for a 4k shoot), then offload to these consumer drives for transport to us, and then write over the original drive array for the next day’s shoot.

At some point, everything comes down to physics. How fast can a piece of data travel - be copied - from one place to another? How long does it take for the data from the set to be transferred off of those consumer drives and onto our sequestered system, then checksummed for copy accuracy and then checked for viruses before being copied to offsite drives, and once those drives were checksum-verified, finally be copied to our main content server? It’s simple math. How fast is the data coming off of one drive and how fast is it travelling through the network to be copied onto our main content server? How long do the checksum and virus verifications take? How long do proxies take to create? I was on hand the first night the drives arrived from the set (and all of the first week, in the middle of the night), and the answer is:

Nothing was fast enough for us to turn around the production drives in seven hours. That’s the answer. We figured it out but it took a couple of days of actual ingesting to get the workflow & pipeline down to a science.
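To put rough numbers on that “simple math” - and these are illustrative assumptions, not the production’s actual throughput figures - consider what just the read-and-copy passes on a mid-range day of footage look like:

```python
# Back-of-envelope only: the figures below are illustrative assumptions, not measurements.
daily_footage_tb = 5            # mid-range of the 3-6 TB the production was delivering
sustained_mb_per_s = 180        # optimistic sustained read off a consumer USB drive
read_passes = 3                 # ingest copy + checksum verify + offsite copy (simplified)

seconds = daily_footage_tb * 1_000_000 / sustained_mb_per_s * read_passes
print(f"{seconds / 3600:.1f} hours")   # prints roughly 23 hours, against a 7-hour turnaround window
```

Even with generous assumptions, the arithmetic blows well past the seven-hour window before virus scanning, proxy creation or anything going wrong is accounted for.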

Wrapping up my true story, I will jump to the end of the production. All episodes of the show have been graded and mixed and are out the door (we also did the IMF packaging for the streamer). So let’s recap:

  • The show was shot in 8k, and was to be delivered as 4k

  • HD and 4k proxies were made from the 8k footage

  • Original 8k footage copied to three different drive systems for safekeeping.

  • The show was edited in HD, then the 4k footage was conformed to match it.

  • The Show was graded in 4k.

  • 4k episodes were delivered to the network

  • THEN THE ORIGINAL 8K FOOTAGE HAD TO BE CONFORMED AND GRADED TO MATCH THE HD/4k EDIT. MONTHS AFTER THE SHOW HAD AIRED.

So, for a year, we had 8k, 4k and HD raw and edited footage clogging up our server and being copied to our backup server. At one point, our sound operations - four rooms’ worth - had to make do with 4TB of total storage on the main server. End of story.

Asset Security Affects Everything

Asset security affects everything in your post-production facility and everything in the production chain outside your facility. In fact, it affects everything at every step of the process from production onwards. The security requirements that have been put into place by the major studios and networks are still bound by the laws of physics - how fast data can travel on your network or from one device to another. On-set capture - DIT -  data playback and verification, then on-set editorial and color - these are all now required by many broadcasters (and streamers). So-called on-set “video villages” are getting very crowded, and very hot because of all the gear, which means large fans blowing to keep people cool, which means noise, etc. Three cameras worth of data, all being recorded directly onto at least one or two drive arrays, each camera’s footage loaded onto two travelling drives, each going in a different direction...

Once the data is ready to travel to whichever facility will be processing the digital dailies, proxies and backups, ideally there need to be three distinct copies: one copy (drive) goes to the facility, one goes “somewhere else” - probably the show’s production office - and one stays on-set. That’s three full copies. That’s what’s supposed to happen. Reference my story at the beginning of this article for a description of what almost always actually happens.

Working From Home In Audio Post Production Isn’t A New Idea

I’ve been editing & mixing sound from home on a daily basis since 1997. But as early as 1994, a colleague I had worked with for several years changed jobs and went to work as an audio editor for one of the biggest film & television studios in Hollywood (and the world). A year later, in 1995, he told me that, unofficially, the large studio required that all of their sound editors & re-recording mixers be able to work from home to some degree. (Note: the same is still true at all major Hollywood studios. Prior to COVID-19, part of the reason for this was so that the studio’s union sound people could work offsite on non-union jobs or on smaller-footprint work, e.g. independent films, documentaries, etc.)

Working on sound from home, then, has been possible for around 25 years - a full quarter-century. COVID-19 hit everyone in the world early in 2020, but for sound people, the concept of working from home was nothing new.

However, even though we’ve been able to edit and mix from home for decades, for the last several years - at least in Los Angeles - there has been a new wrinkle for the work-from-home crowd: the ever-expanding issue of asset security has been diminishing the amount of work that can be done in a non-secure, unmonitored environment, especially when it comes to the large networks & OTT content distributors. Content security, in other words, was responsible for actually pulling back the amount of work that could be done from home as opposed to in a fully secure facility.

The First Shot Across The Bow: Content Servers vs ‘Sneakernet’

Serious creative asset/content security has been a thing - at least in Hollywood - since around 2010-2011, maybe a bit longer, but it was in 2014 that one of my clients - a major premium cable network here in the U.S. - began imposing more stringent security regulations on their sound and picture vendors. At the time, their primary concern was that there only be a single copy of any of their assets in-house at any given vendor’s shop at any given time. In other words, the asset(s) would be delivered to the vendor on a hard drive or downloaded by the facility via Aspera, then loaded onto the vendor’s content server, thereby allowing for only a single instance of the asset to be made available to any number of Pro Tools or Media Composer (or whatever) systems at the same time.

That was the network’s first rule as it applied to content security. Of course, it would be impossible for any client to effectively audit this, but so what? If our facility makes a promise to a client, we work as hard as we need to in order to live up to that promise. But this requirement was especially interesting because here was a new barrier to entry for vendors wanting to work for this particular network; from that point forward, any vendor facility needed an actual content server and its associated infrastructure. No more ‘sneakernetting’ hard drives back & forth from room to room throughout the facility.

Up until this point, the old model - ‘receive the master assets from the client and ingest them to as many individual DAW or NLE systems in as many rooms as would be working on the show’ - was still in full effect, particularly in small and medium-sized facilities, and would actually remain in effect for years to come. In fact, it still happens today at some small and medium-sized post facilities, and it’s very much the norm most of the time in home-based facilities, but usually, those types of rooms are owner-operated and there’s probably only one DAW system and just one operator, anyway.

Nevertheless, let history record that we now had the first (of many to come) asset-security-barrier-to-entry mechanisms: if you or your facility wanted to work as a content vendor for this particular network, you had to have a server, all of the associated network infrastructure, and a stable (and constant) backup subsystem. And, to top it all off, the required (and expensive) IT personnel to keep it all running.

2016: ‘Sneakernet’ Is No Longer Viable

In 2016, I was mixing a documentary for a high profile producer at a facility in Los Angeles, and the ‘Sneakernet’ method of asset distribution was still that facility’s primary working methodology. If I had to change rooms - usually because of a scheduling issue (which happened a lot at this particular facility) - I would need to copy the material from the Pro Tools Mac assigned to that particular room to a portable hard drive, then carry that drive from one room to the other. However, these transport drives were not of a class or quality that they could serve as the primary content work drive. So, moving from one room to another, I would have to:

  • Copy the Pro Tools project plus the 90 minute ProRes 422 picture file - from the internal (or otherwise attached) drive from Room ‘A’, via Firewire 800, to an external drive;

  • Then I’d run across the complex to Room ‘B’, hook up the external Firewire 800 drive to that room’s Mac;

  • Copy the Pro Tools project plus the 90 minute ProRes 422 picture file to that system’s attached content drives;

  • and finally get back to actual work, like two hours later or whatever it was.

Then when I had to go back to the first room (say, the next day), I’d have to do some of it again. If I was lucky, I’d only need to return to the first room with a changed Pro Tools session, or a session and a few new files, or a “Save A Copy In...” session and file set, etc. But then we had a versions issue: different versions of the project strewn across various drives in various rooms in the facility. It was a mess.
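For a sense of why each of those room changes ate so much time, here is a back-of-envelope estimate of just the picture-file copy. The ProRes bitrate and the FireWire throughput below are typical published and real-world figures, not measurements from that facility:

```python
# Rough, illustrative numbers only (ProRes bitrate and FireWire throughput are assumptions).
prores_422_mbit_s = 147                                   # approx. ProRes 422 at 1920x1080/29.97
picture_gb = 90 * 60 * prores_422_mbit_s / 8 / 1000       # ~99 GB for a 90-minute picture file
fw800_mb_s = 70                                           # realistic sustained FireWire 800 rate
minutes_per_copy = picture_gb * 1000 / fw800_mb_s / 60
print(f"{minutes_per_copy:.0f} min per copy pass")        # ~24 min out of Room 'A', ~24 min into Room 'B'
```

Two copy passes - off Room ‘A’ and onto Room ‘B’ - is already the better part of an hour, before the session media, the patching and the inevitable interruptions.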

This facility did have a connected internal network and a really old server running it (a 2004 Apple Xserve - remember those?), but it wasn’t up to the job of serving or streaming hundreds of audio tracks, and even proxy HD video content, to multiple rooms at the same time. As well, their network was all 1Gb, which was common in 2016 in the media facilities business. The 1Gb Xserve network was for backing up files from all audio rooms in the middle of the night, but even that scheme ended up going wrong a couple of years later (a very long story, though one not unrelated to asset security, in fact, or to the very documentary I had worked on there - but that’s a different story). Here’s the point:

I hadn’t even been ‘Sneakernetting’ in my home studio for more than a decade at that point (2016). It’s inefficient, and it leaves more than one copy of every delivered asset spread across many different Pro Tools or NLE systems and their attached (or floating) hard drive subsystems. This is exactly the scenario that our premium cable network client had decided was unacceptable in 2014.

Now to be fair, this facility I was working at did have a lot of freelance sound editors & mixers in and out of their various studios at all hours of the day and night. They were ‘four-walling’ various edit & mix rooms, as well, meaning there were literally strangers roaming around the halls. None of them had access to the secured machine room, so that was good, though in truth it was far too easy to gain access to their machine room if one really wanted or needed to.

Central Post LA Machine Room Under Construction - We can’t show you the completed machine room because that would violate the security guidelines.

We’ll get to all of this in more detail later in this article, but, in brief, in a fully secure facility, each of those freelancers or freelance teams would need to have:

  •  Their own partition on the content server;

  • (Possibly) their own sound effects libraries;

  • They would each need specifically-programmed access card keys allowing them entry to some rooms in the facility but not others (and a computer logging system to keep track of all the comings & goings);

  • Each individual would require separately maintained Active Directory accounts, dictating what each person, specifically, was allowed to do on various company computers (again, much more on this later).

And this is to say nothing of our actual employees, who each needed similar attention paid to their access (as well as full background checks). So the IT burden of a fully secure multi-room sound facility is fairly large, which makes it very expensive. Again, more on this later.

The Beginnings Of Audio Networking

In theory, working off of servers shouldn’t be such a big deal. I first installed and worked with networked audio on a daily basis in 1994. Not off a server, mind you, but with all individual production systems at least tied to a common network. It was a system by Sonic Solutions called MediaNet and it was basically a 100Mbps fibre or copper network card in each production computer that was running the Sonic Solutions DAW.

It was amazing and it was a mess. Data collision was a common issue and could take down all of the multiple connected systems at the same time. Large-scale media servers didn’t yet exist for our particular business, so MediaNet was a system where each individual Mac-based Sonic system had its own SCSI hard drives hanging off the back of its local computer - and off of the network card itself - and the MediaNet software allowed each user on the network to access any other user’s locally attached hard drives. But when Sonic’s MediaNet worked, it was a game-changer. Again, this was in 1994. So imagine my surprise when, in 2016, mixing that documentary at a large post house in Los Angeles, I discovered that they weren’t working off of a server but instead were relying on ‘Sneakernet’ to move jobs around the facility. I made do and got the job done.

The 2016 Netflix Hack

This is what changed everything, security-wise, in Los Angeles. Creative assets - files, “essence”, whatever you want to call them - had been hacked prior to 2016, and it was not even the first time a Netflix show had been stolen from a sound facility. But in late 2016, a Netflix sound vendor in Hollywood was hacked, and various assets of a show called “Orange Is The New Black” were stolen via the Internet, apparently by way of an unsecured Windows 7 or Windows 10 server. They were at the point where they were doing Foley for the series, as I understood it at the time, so the picture was final and locked (oh, for a return to the days of actual ‘picture lock’).

At the time, in 2016, there were no known methodologies for dealing with content theft, so no one at the hacked facility knew who to call. The local police? The FBI? So, the vendor panicked and unplugged their content network from their Internet connection and actually tried to make a deal with the thieves.

This backfired somehow - there was some miscommunication between the parties, and/or the requested ransom was not transferred in the time allotted, or the amount was wrong, or someone couldn’t figure out how Bitcoin worked - whatever it was, the thieves released the content out to the Internet and disappeared into the ether. I don’t know if they were ever caught and I think it may have ended up being an “inside job”, but it was only then that the vendor notified Netflix of the breach.

In fact, at the time, very few facilities were properly ‘isolated’ from the wider Internet. Most of their big clients (broadcasters, producers, etc) pulled their work from the facility immediately following the hack and so that sound vendor - a mainstay in the Los Angeles post-production community for decades - closed for good about a year and a half later. It was a very sad moment for our sector of the business.

The Security Breach Fallout

Everyone in the facilities business panicked, and all of a sudden, a whole new line of business appeared. On one side were various compliance auditors overseen by the “Content Delivery & Security Association”, or “CDSA”, which later became “TPN”, the “Trusted Partner Network” - the latest incarnation of this market segment. CDSA had been around for years by the time of the hack in 2016, but everything exploded at the beginning of 2017 as asset security consultants and media-centric IT services companies began to spring up everywhere in response. Camera installation companies were booked months in advance. Plus, if you or your facility is working for a client which demands TPN security compliance, then once a year, your facility must undergo a security audit. This is a very large questionnaire, followed by an expensive, in-person audit of your entire facility and its policies.

The Basic Asset Security List

These security considerations didn’t just appear out of thin air. Prior to the institutionalization and formalization of these security standards for facilities to abide by, these measures were known as “Disney Tier One.” Here’s a short “greatest hits” list of what they look at:

  • Physical security (locks, cameras, alarms, card-access entry on all production rooms), with no freelancers allowed in certain rooms (or at all);

  • Cameras covering every inch of the machine room and, indeed, almost all of your facility - bathrooms excluded, of course;

  • Cameras in some - if not all - of your production rooms (that’s a weird story on its own in California);

  • A DVR locked away somewhere (usually a secure machine room) that can record & hold 3-6 months of all camera data (8-50 cameras’ worth of data, depending on the size of your facility);

  • An archival system for footage that is older than 4-6 months;

  • Tight control over who has access to your machine room;

  • Onsite and offsite tape (LTO) backups of all content;

  • IT security (more coming up later on this);

  • Full background checks for all personnel.

Mainline content (upper) and Nearline RAID5 backup servers at Central Post L.A. in Los Angeles

Backup Policy: A Different Kind Of Security


It’s always shocking to me how many people/facilities actually think that RAID5 alone is a pertinent, viable backup strategy. It's not. It can take days to reconstitute your server in the event of a single drive in your array going bad. It’s happened to me once, so I know the hard way that it can take DAYS. And this was on a robust, serious server product with its own powerful, inbuilt computer. The entire unit (24 spinning drives) was working at high capacity for 3-4 days, sucking up so much server processing bandwidth that general operations noticeably slowed down.

Most networks and studios in Hollywood now require that this kind of robust server backup system, and the policies to go with it, be in place before they’ll add your facility to their vendor lists, even though this is not, strictly speaking, an actual security thing. It’s more of a financial security thing, and it’s for everyone’s benefit - both client and vendor - and especially important for our deadlines. Before any catastrophic storage server failure occurs, ask yourself these questions:

  1. Do you have a backup of the assets that were lost so you can get right back to work (i.e. a ‘Nearline’ server)? Or:

  2. Do you need to go back to your client and ask that they send over the assets again? (You really don’t want to have to do this.)

  3. If, in fact, you can get back to work fairly quickly because you do have the requisite backup systems & policies in place, how much work have you lost that will need to be redone? In other words, when was the last incremental backup performed? Multiply this number of hours by your number of rooms and you’ll have the number of man-hours it will take to get back on track.

  4. How robust are your server and your network? Are they robust enough that you can just have them backing up 24/7/365? Most such systems are not robust enough to do this, but this is more of a network bandwidth issue - in other words, total bandwidth divided by the number of active production rooms.

At our facility in Los Angeles - Central Post L.A. - which we built in 2017 (and so we were able to build the security infrastructure from the ground up), we have our main 96TB RAID5 content server backing up four times a day, over our 10Gb content network backbone, to a ‘mirrored’ 96TB RAID5 ‘Nearline’ server. All production rooms are on the 10Gb network segment as well.
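As an aside, a quick bit of illustrative arithmetic shows why backup passes at that scale have to be incremental - moving only what has changed - rather than full copies of the array each time:

```python
# Illustrative arithmetic only: how long a FULL copy of the main array would take.
capacity_tb = 96          # main content server capacity
link_gbit_s = 10          # 10Gb backbone between the main and Nearline servers
full_pass_hours = capacity_tb * 8_000 / link_gbit_s / 3600
print(f"{full_pass_hours:.0f} hours")   # ~21 hours for one full pass, ignoring all overhead
```

Four full passes a day is simply impossible over a 10Gb link, so each pass can only move what has changed since the last one.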

When we built the facility, cloud backup was a fairly new concept for our industry and so was not really an available option, and anyway was not accounted for at all in the CDSA security guidelines. It may be allowed now under the revised ‘TPN’ guidelines, I’m not sure, but even if it is, that’s an awful lot of data constantly streaming out of your facility all the time, day and night, even if your facility has a full 1Gb (or more) fibre line in and out. In Hollywood and the surrounding areas, fibre is really expensive on a per-Gb basis. So, most small and medium-sized facilities only have 1Gb. The idea of clogging up that pipe with terabytes of constantly outward-streaming data is shocking to think about, especially when we have so much streaming in from clients at the same time. So, cloud-only based storage is too taxing and expensive on an ongoing basis. Local servers are currently more advantageous and less expensive.

Facility Backup Policy: How Much Data, Time & Money Are You Willing To Lose?

It’s easy for a network to dictate backup policy for their content. They need backups in more than one place. No one wants to lose a drive of original footage, only to find out it was the only original and now something needs to be re-shot. Seriously, if that happens,

you will never work in this town again.

Seriously. You won’t. Your facility will be toast. Wiped off the map. Go home. It’s over. 

With this in mind, it’s worth looking at your backup policy this way: how many hours of work are you willing to lose and have to redo in the event of a data catastrophe? Let’s even presume that you and your facility do not hold the only copy of expensively-shot material. Let’s presume there are backups & safeties, all exactly where they are supposed to be. Good. The network’s and producer’s concerns are taken care of.

Internally at your facility, however, serious thought still has to go on in this regard, for the sake of your business. If we back up our main server four times a day for four rooms - so, every six hours - and the worst happens, it means that we might have lost a total of 24 man-hours. Can your deadline afford three days of redos? Will your client leave if this happens? How often is enough for incremental server backup? Now, if you’re smart, you stagger backup routines across rooms, schedule them to happen during lunch/dinner breaks, obviously use the overnight hours as productively as possible for heavy server activity, etc - there are all sorts of clever tricks to lighten server and network loads during daylight working hours. The other option is to be constantly backing up to our nearline server, every minute of every day, but this puts a massive strain on network and server resources in a busy multi-room facility. Everything slows down. Which means that now, you’re paying people to wait around for the server to catch up, and you’re waiting for what seems like ages to “Save” a Pro Tools session, etc.
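As a sketch of the kind of staggering described above - with every partition name, window and mount point being an illustrative assumption rather than our actual configuration - an hourly job along these lines can give each room’s server partition its own off-peak window on the way to the Nearline server:

```python
#!/usr/bin/env python3
"""Sketch of a staggered incremental backup: each room's server partition gets its own
off-peak window so the 10Gb backbone isn't saturated while the rooms are working.
Partition names, windows, mount points and rsync flags are illustrative assumptions."""

import datetime
import subprocess

# room partition -> (start hour, end hour) of its backup window, local time
WINDOWS = {
    "Room_A": (13, 14),   # lunch break
    "Room_B": (19, 20),   # dinner break
    "Room_C": (2, 4),     # overnight
    "Room_D": (4, 6),     # overnight
}

MAIN = "/mnt/main_server"       # primary RAID5 content server (hypothetical mount point)
NEARLINE = "/mnt/nearline"      # mirrored 'Nearline' backup server (hypothetical mount point)

now = datetime.datetime.now().hour
for room, (start, end) in WINDOWS.items():
    if start <= now < end:
        # -a preserves metadata; --delete keeps the mirror an exact copy of the source
        subprocess.run(
            ["rsync", "-a", "--delete", f"{MAIN}/{room}/", f"{NEARLINE}/{room}/"],
            check=True,
        )
```

Run hourly from a scheduler, something like this keeps the mirror reasonably fresh without ever having all four rooms’ worth of data moving across the backbone at once.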

Long Term Archival Strategies

It might not seem obvious at first glance, but your facility’s archival strategy is also a security issue, and it is addressed in both the TPN guidelines and the guidelines of every network or streaming company in the U.S. How long do you keep a client’s material on your server? At Central Post L.A., projects are kept on the main & backup (‘Nearline’) servers for three months (if possible), at which point the files are flagged for removal from our main server. They stay on our backup server for another three months after that (again, if possible) while they are also archived to LTO tapes - onsite and offsite copies required. Once archived to LTO, files on the backup server are flagged as available for removal if required. The backup server then, when it needs the space, automatically deletes those files. The reason it’s a security issue is that it is still the client’s data in our facility, in two places, and so we are obviously responsible for safeguarding it for as long as it is within our walls. This is a liability, obviously, but it needs to be balanced against the desire to have the data on hand should your client ask for it later on or, hopefully, ask for more work to be performed on it. It’s always nice to be able to tell your client that you still have the required materials in-house so they don’t have to send them again.
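A retention pass like the one described above can be as simple as flagging anything that hasn’t been touched in roughly three months. The sketch below is illustrative only (hypothetical mount point), and it deliberately deletes nothing - a human confirms the LTO copies exist first:

```python
#!/usr/bin/env python3
"""Sketch of the retention pass described above: flag projects untouched for ~3 months
so an operator can confirm they've been archived to LTO before removal."""

import time
from pathlib import Path

MAIN = Path("/mnt/main_server/projects")   # hypothetical mount point
THRESHOLD = 90 * 24 * 3600                 # ~3 months, in seconds
now = time.time()

for project in MAIN.iterdir():
    if not project.is_dir():
        continue
    # Most recent modification time of any file inside the project folder.
    newest = max((f.stat().st_mtime for f in project.rglob("*") if f.is_file()), default=0)
    if now - newest > THRESHOLD:
        print(f"FLAG: {project.name} - untouched for 90+ days; confirm LTO copies before removal")
```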

More Barriers To Entry

But you may have noticed the new barriers to entry described above, which make it almost impossible for small or home-based facilities - or even some medium-sized facilities - to adequately comply:

  • A dedicated, locked and 24-hour-a-day, monitored machine room;

  • Primary content server;

  • Mirrored Nearline server (or some other way of almost immediately being able to retrieve client data in the event of a catastrophic failure of the main server; I’m open to suggestions);

  • LTO drives, robots and tapes for deep archiving, onsite & offsite. But here’s the really fun part:

The Netflix hack had codified a commonsense but surprisingly complicated new rule:

No content server - and nothing else on your content network - can ever be connected directly to the Internet.

It sounds obvious now, but a few years ago, not so much. How, then, do we now move content in and out of the facility via Aspera, or Hightail, or whatever, without being able to download it straight to our content servers? The answer is that there must be intermediary, Internet-facing ‘I/O’ computers - Macs or PCs - running on their own network segment, in and out of your facility and totally separated from your content network, so that someone from the outside cannot hack through the maze of systems to grab your show assets.

Obviously, every computer must be password protected (we use 13 characters, minimum, with all sorts of other password rules: numbers, letters, upper and lower case as well as special characters must all be a part of everyone’s passwords). Files come in through those Internet-facing systems - each equipped with its own large-capacity RAID5 hard drives - where they rest while they are scanned for viruses, and are then “pulled” - not “pushed” - to your content network. Someone with the required systems authorization needs to go to a content-server-connected production system and specifically ‘pull’ the newly-downloaded files off of the I/O computer’s storage subsystem and onto the main content storage network. Only certain people have the appropriate systems access and permissions to perform these duties. Freelancers, for example, do not have this access. This permission is granted only to full-time employees, and even then, not to all full-time employees. Remember, all of this needs to be in a dedicated machine room - so now, also a server room.
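The password rules, at least, are easy to make mechanical. A minimal check against the policy described above (13 characters minimum, with upper case, lower case, digits and special characters) might look like this - the example strings are obviously just placeholders:

```python
import re
import string

def meets_policy(pw: str) -> bool:
    """Check a candidate password against the policy described above:
    13 characters minimum, with upper case, lower case, digits and special characters."""
    return (
        len(pw) >= 13
        and re.search(r"[A-Z]", pw) is not None
        and re.search(r"[a-z]", pw) is not None
        and re.search(r"\d", pw) is not None
        and any(c in string.punctuation for c in pw)
    )

assert meets_policy("Foley$tage-2019!")   # placeholder example, obviously not a real password
assert not meets_policy("password123")    # too short, no upper case, no special character
```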

Just look at all of these barriers to entry! Multiple computers with multiple passwords on multiple networks, 10Gbs Cisco switches, serious cooling... Do you have any idea how much heat a single 10Gbs Cisco switch throws off in the course of a busy, 24-hour day? Because those switches are always working. If you were to look at my machine room switches in the middle of the night on any day of the week, when none of the rooms were occupied - say, at 3:00 AM - you’d see that those switches are passing data non-stop because of server-to-server backups, overnight downloads and content server transfers, etc.  And the noise these switches make because of the heavy-duty fans that kick in whenever you’re running above 50% of switch capacity! And if you’re running multiple production rooms, multiple machine room servers, IP-based phones, Dante, Eucon, speaker and power amplifier networks (i.e. Blu-Link), monitor controllers - you’ll need multiple switches. Or one really big switch. But we’re not done yet!

The author mixing in Dolby Atmos on an Avid S6

Then there are the production computers - the computers running your Pro Tools rigs. These cannot be connected directly to the Internet either. You can connect them to the Internet for software updates and the like, but there must be a system & policy in place so that, when a machine is connected to your in-house content server, it’s impossible for it to be connected to the Internet via a wired or wireless connection. And if your production computer is connected to the Internet, it needs to be impossible for that same system to connect to your in-house content network simultaneously. And the person performing this action needs to be authorized to do so. So, the idea of your editors & mixers downloading and installing plugins is gone. Now, dedicated IT people need to do it.
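Enforcement of that “never both at once” rule properly lives at the switch and firewall level, but purely as an illustration of the policy, a client-side guard might look something like the sketch below. The probe host, the share path and the macOS-style mount command are all assumptions:

```python
#!/usr/bin/env python3
"""Illustrative sketch only: refuse to mount the content server if the workstation
can currently reach the Internet. Real enforcement belongs in the network, not a script."""

import socket
import subprocess
import sys

def internet_reachable(timeout=2.0) -> bool:
    """Cheap reachability probe: can we open a TCP connection to a public DNS resolver?"""
    try:
        with socket.create_connection(("8.8.8.8", 53), timeout=timeout):
            return True
    except OSError:
        return False

if internet_reachable():
    sys.exit("Refusing to mount content server: this workstation is currently Internet-connected.")

# Hypothetical mount of the content server share (macOS-style; adjust for your OS and share names).
subprocess.run(["mount", "-t", "smbfs", "//ptuser@contentserver/Shows", "/Volumes/Shows"], check=True)
```

In our facility, of course, the real gatekeeping is done by the network itself, not by anything running on a workstation.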

And by the way, the same now goes for any new content that needs to go into, say, your Pro Tools timeline. 

Client: “Hey, there’s a line of dialog missing in that scene.”

Me: “Let me check the original AAF. Nope. Not there.”

Client: “We can email it to you and you can just put it on the timeline.”

Me: “No, we can’t do that.” *starts to explain TPN, streamer and company security policy*

Client: “It’s just one line. Okay, how long will it take to do it the proper way?”

Me: “About a half-hour.”

Client: *head explodes* and goes to complain to management (not knowing that I’m a partner in the company and so I am management, actually).

This exact conversation, along with the one explaining loudness basics & principles, is the conversation I have most often with directors.

IT Security

Then there’s the issue of IT security:

  • How your servers are connected to your production rooms and to the outside world;

  • How you handle email and other business network traffic inside your facility walls (i.e. database access, Microsoft Office and various other software suites);

  • How your IP-based telephone traffic is handled

  • How your network is segmented (usually via vLANs on those very expensive, very hot 10G switches)

  • Software and hardware-based firewalls, and - here’s the really fun part:

  • Firewall rules that need to be updated every hour of every day, 24/7/365 - as a subscription service. This is extremely expensive for enterprise applications and requires an enormous amount of initial switch programming, on top of the already-monumental task of programming switch vLANs to handle all sorts of data that cannot co-mingle. You don’t want Eucon data getting mixed up with Dante data, and so on, because, basically, shit won’t work.

So now we have a lot of switch programming to do. Altogether, after getting each room online, then testing, etc, we could be talking about weeks of switch programming by either your outside integrator or an in-house employee familiar with this level of IT, and you’re going to need a dedicated IT person or team in place to keep it all running, and the more shifts your facility is running, the more “heavy iron” IT people you’re going to need.

IT & Systems Permissions

Network Diagram Showing Active Directory Permissions

But then, how do we manage all of these permissions? Remember, not everyone can have system-level access to everything. Only a few people are allowed to go through the machinations in order to connect a production computer to the Internet, for, say, a Pro Tools update or a plugin download. Only some groups, or people, are allowed to pull files off the backup server, or onto the main content server from the Internet-facing server system(s). In fact, not everyone is even allowed to delete something from the content server without the appropriate permissions. So, while I may be allowed to move files on or off a production computer and onto or off of a USB flash drive, or to delete a file off the server, maybe you do not have that access. All of these permissions have to be managed. 


Active Directory

Active Directory is a Microsoft thing and it’s been around forever. Put as simply as possible, you have all of your computers - production computers (Pro Tools, Premiere, Media Composer or DaVinci Resolve, etc), machine room computers, office computers - connected to a network server - an Active Directory server - and then your IT department programs that server with specific systems and personnel “permissions.” Business computer systems will be programmed to only allow certain people access, and the same goes for all production systems. In other words, the office receptionist probably won’t be able to log in to a Pro Tools production computer or to the content network, but may have access to your business systems (Excel, database applications, etc.). Audio editors may not have access to those same business systems but only to certain Pro Tools systems - and not even all of them in every room.

But it doesn’t stop there! Each of those individuals then needs to have very specific permissions granted to them, for each of the computers they are allowed to use in the first place. As ‘Head of Operations’, I may be allowed to do pretty much anything on any system, but various employees and freelancers are only allowed a subset of what I am allowed - perhaps, no copying of files to and/or from a USB flash drive, for example. There may be certain file sets that they are not allowed to move or, most commonly, they may only have access to certain partitions of the content server; and almost no one has access to the backup server. Also, every computer connected to anything needs to have a small Active Directory applet running in the background at all times. These need updating fairly often. It’s a lot for any IT department to keep track of.

All of these permissions are managed through an Active Directory server. In our facility, we use virtual machines to run various services, so a single hardware server may handle multiple, disparate services, with the added complication that each of these services might be running on different network segments. Obviously, as employees or freelancers come and go, their permissions must be granted or revoked, or put on hold. 

The Downside Of Active Directory For Media Operations

This is really complicated stuff, and none of it was built with mission-critical media operations in mind. It’s especially complicated if all you want to do is edit and mix sound, which is complicated enough. Active Directory was designed to centralize the management of mundane, office-based computer systems. This, as you know, dear reader, is not our business, like, at all.

Part of the problem with workaday office systems integrated into very highly specialized media operations like ours is the potential for failure. In a regular business environment, such failures can be - but only very rarely are - catastrophic. This is where security measures have the potential to get in the way of our daily, critical studio operations.

Take, as an example, the following scenario. Sometimes the Active Directory server needs to be rebooted, or reset. In a standard business environment, obviously, this can be complicated and dangerous, but it usually would not be catastrophic. You have not experienced terror until, one by one, the production systems around your building begin denying access to their respective users while they are using them. One moment, you’re working in Pro Tools, writing directly to your content server every time you hit “Save”, and the next, you have zero permissions on the server. Your session cannot even be saved anymore because all of its file links have suddenly disappeared and your permission to write back to your content server has been revoked. Looking at your content server’s various mounted partitions, you might see that every single folder, at every single hierarchical level, has a red “X” next to it.

And then the engineers working in your various rooms start texting and running around wondering what the hell just happened. In a small four-room facility, 15 minutes of downtime equals a full hour of forever lost, billable time. Recovery from an Active Directory malfunction never takes just 15 minutes. Systems need to be rebooted, servers re-mounted, and of course, the underlying problem fixed in the first place. The pressure on our IT personnel is enormous, because we’re all, always, on tight deadlines.

Happily, Active Directory server failures don’t happen often, but they do happen, and they’re always a nightmare, as I hope I have adequately described. Active Directory is now also available in the cloud, allowing for geographical diversity, so when we opened our second building, we could still use our existing AD server without having to connect physically to it. So far, this has been very stable, thankfully. But think about what has to happen to make this possible - how much systems communication, local and cloud-based.

See what just happened? Now we have an extremely expensive barrier-to-entry. A server whose only job is to manage user permissions, as well as a great deal of programming and ongoing maintenance. None of this stuff costs pennies. It all needs to be bought, installed, programmed, maintained, cooled and constantly changed and re-programmed because of employee migration.

Hollywood Politics vs Asset Security

Politics are everything in Hollywood, which means everything is negotiable. In Hollywood, all of the major studios have their own mix stages and large sound editorial departments, and often their own picture editorial departments. For them, the clampdown on content security has always been a thorny issue. What I have described in this article so far details what we had to do for a four-room, sound-only facility. Think of the large studios - maybe 50-100 Pro Tools seats and perhaps dozens of picture editorial systems? 

A typical ‘A’ mix stage in Hollywood has a two-to-three-person mix console, with between three and five Pro Tools systems dedicated to playback and mixing, as well as two to three editorial stations off to the side. Mixers rarely edit, and editors rarely mix, because these are union shops. Politics. If you need to ride a fader, that’s the mixer’s job. If a clip needs to be nudged a frame earlier, that’s the editor’s job. So that’s up to 6-10 Pro Tools systems on a single stage, for a single - albeit large - feature film or television mix.

Of course, they are all running off of content servers, with (maybe) 24/7 IT & support staff. Even if there were no security considerations, this would be the case, because servers make content access more efficient, as we all know. But the added security measures, including audit considerations and Active Directory considerations? When COVID-19 hit in early 2020, not everything was in place - and this after 3-4 years of the push to lock down all content. The reason for this delay was, of course, cost.

Hollywood Infrastructure Is Ugly

Most major Hollywood studios are housed in buildings that are approaching 80-100 years old at this point. Even if you wanted to spend the money securing everything, other hugely expensive issues would rear their heads as you made your way through that task. Drilling a hole in a 90-year-old wall becomes a whole other, expensive operation. There are walls inside walls. Conduit pathways that lead to nowhere. Undocumented physical and structural changes that have happened over the years. Facilities literally built on top of facilities. It’s ugly, and as you can imagine, disturbing that kind of old infrastructure is a nightmare.

At Central Post L.A., our smaller building is about 4000 square feet on the main floor, and if I told you what we found when we started to gut the inside of that building in 2017, you wouldn’t believe me. Or you would believe me, because you’ve been through it. That’s 4000 square feet. Imagine really big studios with millions of square feet to run network cable through - and switches for all of it. How many meters of cable do you think that would take? And how much would it all cost?

And so, the big studios were slow on the uptake to put content security measures in place - and who was going to tell them otherwise? They employ thousands of people. “They carry a lot of water in this town.” But of course, if you expect to be a vendor for these very studios, they require that your facility be fully compliant. In truth and fairness, it is basically impossible for the studios themselves to be fully compliant, as I have described, and so this is where we began to see content security measures start to erode - almost right out of the box eight or so years ago, but especially in the last four years. The studios pushed back. One of the major studios estimated that becoming fully compliant with content security measures across their acres of studio facilities would cost around one hundred million dollars, so they just didn’t do it. They did some of it. But they didn’t spend a hundred million dollars.

And then, COVID-19 hit, and all content security went temporarily out the window, as they had to send people home to work, which they had actually been doing anyway since 1995 or so, as I have previously discussed. From a home-infrastructure point of view, it was simple. From a content security perspective, it was a nightmare.

COVID-19

This pandemic threw the entire world off-kilter, including our little world of creative content and content security. How was it going to work? I can honestly say that I’ve never seen an entire industry ditch its own protocols so fast. The task was to get everyone home, working, and then figure out security. In the audio part of our industry, people were back at work before home security measures could fully be put in place.

As editors and mixers, we pretty much self-isolate all the time anyway, don’t we? At Central Post L.A., we were lucky to be able to bring people back to our facility fairly quickly by scheduling on a 24/7 clock instead of a 16/5 calendar & clock. Clients were reviewing remotely, or on our largest mix/review stage so that they could stay far away from us and from each other. But content & asset security? Please. It was the same thing for video editors. All of a sudden, productions were sending previously-secure content out to dozens of editors, assistant editors and colorists on portable RAID systems in a matter of days. There was a point when one of our clients’ own production servers was spitting out content to portable RAID systems for two solid weeks. Editors would come by, sign out a portable RAID storage unit and take it home. They’d edit at home using the proxies that had been transferred to the portable drives, then the Media Composer projects would be sent back to the mothership server on our premises, where an assistant editor would spend the overnight hours relinking the clips.

When everyone got sent home at first, there was an honest intention that anyone working from home would have their homes vetted for basic security, but integrators were charging so much for this service that, on medium-scale productions - say, 6 editors and 4 assistant editors across two series - at $2,000 per location for vetting - so, $24,000 - the producers just said no and gave up on content security altogether. It was the same everywhere.

Who Cares About Asset Security? It Depends

Of course, even over the last few years of asset security panic, there have still been a great many producers and content creators - most, in fact - who don’t care a fig for asset security and probably never will. And they certainly aren’t willing to pay any extra for that infrastructure in any given facility. The vast majority of content falls under this category, in fact. The networks and streaming companies care, and that’s where the security directives came from in the first place. But the producers? No. Writers? No. Same with directors. When we talk to these categories of people about asset security, it’s usually blank stares that come back. And this is important because it affects everything in terms of allowable workflow.

But all of a sudden, post-COVID, we’re seeing even previously very security-conscious clients and networks loosen up their standards significantly, or entirely, because work has had to get done.

Post-COVID-19 Asset Security - Where Are We Now?

So, what now? Will the world go back to the way it was? No. Will I be able to go to a restaurant again without wearing a mask? I don’t know. Not for a long time, anyway. Disneyland? Noooooooo. Just no. You might be willing to go back there. Not me. And so, will content security go back to being what it was, at least from 2017 onwards? I don’t think so - at least, not to the same degree. At Central Post L.A., we bifurcated, though we never strayed from the secure path as far as our TPN security-conscious clients (i.e. the networks) were concerned. In regular, pre-COVID times, we did not send people home with work, as a matter of principle and at the request of most of our clients. When COVID hit, we needed to audit which clients cared about asset security and which did not. We were lucky enough to have been kept busy right through the pandemic, even on-site at our facilities while maintaining safe distancing, but we did have people working from home as well.

My guess is that we will see some return to the days where people were working from home, something that was beginning to seriously wane in the heady, security-conscious days of 2017.

Even so, by and large, playing this game, in this town, still requires high end physical and IT security infrastructure, significant and expensive buildout time, and a lot of effort and money to keep it all running smoothly.

It is, in other words, a very large barrier to entry for even medium-sized facilities, and almost a non-starter for small ones. We’ll see what happens over the next twelve months.
