bristolrecords
Member
Hi,
I'm putting this in this forum because it seems the closest place to 'correct'. Admins, please let me know if there's a better place.
I've recently become involved with smaller-scale filmmaking and the associated equipment. Timecode sync between cameras and location audio is being addressed a little better now than it was a few years ago. Decent temperature-compensated crystals are becoming commonplace, and several manufacturers now make timecode devices that can wirelessly sync with one another. There are systems from all the well-known specialists, several of which have their own proprietary wireless networking, so we have a common sync method (LTC) accompanied by wireless incompatibilities. Great!
So, the normal topology for sharing timecode between devices in studios was a master clock syncing other devices via cable, with the timecode often recorded to a sync track in the case of linear media.
Multiple devices could jam-sync and maintain good timing for long periods, each with its own timecode clock.
In the case of digital media, timecode is embedded in the file's metadata, but it's also possible to read and write LTC recorded as an audio track.
In the case of my audio and video editing software (DaVinci Resolve Studio), it's possible to sync to either the embedded timecode metadata from the video or LTC read from an audio track.
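As an aside, for anyone wondering what 'reading an audio track' actually involves: LTC is an 80-bit frame, biphase-mark coded, so a decoder mostly has to classify the spacing between zero crossings and then hunt for the sync word. Here's a rough sketch of the idea in Python (numpy and soundfile assumed installed; this is just to illustrate the decode, not any real product's code, and a robust decoder would handle noise and bit misalignment far more carefully):

```python
import numpy as np
import soundfile as sf  # pip install soundfile

# LTC sync word (bits 64-79 of each frame): 0011 1111 1111 1101
SYNC_WORD = [0, 0] + [1] * 12 + [0, 1]

def decode_ltc(path, fps=25):
    """Crude LTC decoder: returns a list of (sample_offset, 'HH:MM:SS:FF')
    pairs, one per frame that decoded cleanly. A transmission dropout
    simply produces a stretch of audio with no pairs."""
    audio, rate = sf.read(path)
    if audio.ndim > 1:
        audio = audio[:, 0]                    # timecode on first channel (assumed)
    signs = np.signbit(audio).astype(np.int8)
    edges = np.where(np.diff(signs) != 0)[0]   # zero crossings = biphase transitions
    gaps = np.diff(edges)
    half = rate / (fps * 80 * 2)               # nominal half-bit period, in samples
    bits, pos = [], []
    i = 0
    while i < len(gaps):
        if gaps[i] > 1.5 * half:               # one long interval   -> bit '0'
            bits.append(0); pos.append(edges[i]); i += 1
        else:                                  # two short intervals -> bit '1'
            bits.append(1); pos.append(edges[i]); i += 2
    frames = []
    bcd = lambda b: sum(v << k for k, v in enumerate(b))  # fields are LSB-first BCD
    for j in range(64, len(bits) - 16):
        if bits[j:j + 16] == SYNC_WORD:
            f = bits[j - 64:j]                 # the 64 data bits before the sync word
            ff = bcd(f[0:4])   + 10 * bcd(f[8:10])
            ss = bcd(f[16:20]) + 10 * bcd(f[24:27])
            mm = bcd(f[32:36]) + 10 * bcd(f[40:43])
            hh = bcd(f[48:52]) + 10 * bcd(f[56:58])
            frames.append((int(pos[j - 64]), f"{hh:02}:{mm:02}:{ss:02}:{ff:02}"))
    return frames
```

The useful output for what follows is that list of (sample offset, timecode) pairs: one per successfully decoded frame, nothing during a dropout.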
I'm floating this because, after a moment of horizontal thinking, it occurred to me that a slightly different topology might be useful for location work, where multiple cameras and audio devices would gain from continuous sync. I'm hoping there are kind folk here at Group DIY who might have some insight into this subject.
My suggestion would be, for the purposes of sync, to have one central clock transmitting timecode on a one-to-many basis, in a similar way to the old analog mic transmitters, with each device recording the timecode to a separate track, as devices also used to.
I'm sure that this would work. Digital devices all have fairly accurate clocks, certainly good enough to maintain sync for a few minutes at a time.
The common problem with location wireless, as I'm sure many of us have experienced, is transmission becoming intermittent.
In a system like the one I'm describing, the timecode track should therefore be expected to have gaps in the LTC.
My interest is in understanding how LTC systems handle drift and loss of signal. To be clear, I'd like to know whether LTC tracks recorded with gaps could still be used accurately in post-production, by accommodating for the gaps and the minor drift in software after the fact.
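My own mental model of how that accommodation could work, and I'd welcome corrections: each cleanly decoded LTC frame gives a (sample offset, timecode) pair, and a recorder's clock drift is close to linear over a take, so a straight line fitted through the valid pairs gives a clock model that timestamps every sample, including those inside the gaps. A hypothetical sketch, continuing from the decoder above (the function names are mine, not any real tool's):

```python
import numpy as np

def fit_clock(frames, fps=25):
    """frames: (sample_offset, 'HH:MM:SS:FF') pairs from decode_ltc().
    Fits timecode_seconds = slope * sample + intercept by least squares,
    so samples inside dropouts get timestamped by the same line.
    Returns (clock_function, worst_case_residual_in_seconds)."""
    def tc_seconds(tc):
        hh, mm, ss, ff = (int(x) for x in tc.split(":"))
        return hh * 3600 + mm * 60 + ss + ff / fps
    x = np.array([s for s, _ in frames], dtype=float)   # sample offsets
    y = np.array([tc_seconds(tc) for _, tc in frames])  # decoded times
    slope, intercept = np.polyfit(x, y, 1)              # linear drift model
    residual = float(np.max(np.abs(slope * x + intercept - y)))
    return (lambda sample: slope * sample + intercept), residual
```

The residual tells you how well the straight-line assumption held: if it stays well under half a frame, bridging the gaps with the fitted line should be safe. Misdecoded frames would need rejecting first (e.g. discarding pairs that disagree wildly with their neighbours), and timecode wrapping at midnight would need handling; this sketch ignores both.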
If this could work reliably, timecode sync on location could become very useful in a few ways:
Lav and location mics could all record their own audio and transmit only for monitoring purposes, which is less mission-critical unless live transmission is under way.
Locally recorded audio can be captured more reliably: almost no chance of dropouts, and with 32-bit float recording, no chance of clipping.
Copies of the very same timecode track could be compared by editing software, with the potential to accommodate, and perhaps even repair, the timecode tracks in post-production (a sketch of the comparison follows below).
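On that last point: once each device's timecode track has been fitted as above, comparing two copies reduces to comparing the two clock models, since both ought to describe the same house clock. A hypothetical continuation of the earlier sketches, assuming both devices nominally run at the same sample rate:

```python
def align(frames_a, frames_b, fps=25):
    """Estimate how device B's recording relates to device A's, from each
    device's decoded LTC frames. Returns (offset, drift): B's first sample
    is `offset` timecode-seconds after A's first sample, and `drift` is
    the ratio of the two recorders' clock rates (1.0 = perfectly matched)."""
    clock_a, res_a = fit_clock(frames_a, fps)
    clock_b, res_b = fit_clock(frames_b, fps)
    offset = clock_b(0) - clock_a(0)
    drift = (clock_b(1) - clock_b(0)) / (clock_a(1) - clock_a(0))  # slope ratio
    if max(res_a, res_b) > 0.5 / fps:  # residual worse than half a frame
        print("warning: timecode fit is poor; check for misdecoded frames")
    return offset, drift
```

In principle editing software could do exactly this comparison; whether any current NLE bridges LTC gaps this way, rather than just reading whatever frame is nearest, is part of what I'm asking.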
My question to those who might respond to this:
Am I barking up entirely the wrong tree, or does this make sense...
and why?