Well, moving them is out of the question, since, you know, motion will change the clocks’ time. If you re-sync them, you bake the “error” into your framework. If you try a timer, the timer is offset. If you try to propagate a signal, the signal is offset. And eventually, you have to compare the two times, which muddies the waters by introducing a third clock.
Basically, there is no way to sync two clocks without checking both clocks, ergo, no way of proving or disproving. That’s the premise.
In practice, I assume it is constant, but it’s like P = NP: you can’t prove it within the framework, even if you really, really want to believe one thing.
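The “no way of proving or disproving” point can be made concrete with a quick numerical sketch. In the usual parametrization (the same κ anisotropy parameter that appears later in this thread, between −1 and +1), let the outbound one-way speed be c/(1 − κ) and the return speed c/(1 + κ). Every round trip then takes exactly 2L/c regardless of κ, so a single clock, which can only ever time round trips, cannot distinguish any of these cases. The distance is a made-up number:

```python
# Round-trip light travel time over a distance L when the one-way speeds
# differ by direction: c_out = c / (1 - kappa), c_back = c / (1 + kappa).
# The trip takes L/c_out + L/c_back = L(1-kappa)/c + L(1+kappa)/c = 2L/c
# for EVERY kappa, so a one-clock (round-trip) measurement is blind to kappa.

c = 299_792_458.0  # two-way speed of light, m/s

def round_trip_time(L, kappa):
    """Round-trip time (seconds) over distance L with anisotropy kappa."""
    c_out = c / (1 - kappa)   # one-way speed on the outbound leg
    c_back = c / (1 + kappa)  # one-way speed on the return leg
    return L / c_out + L / c_back

L = 1000.0  # hypothetical distance, metres
for kappa in (0.0, 0.3, 0.9):
    print(f"kappa = {kappa:.1f} -> round trip = {round_trip_time(L, kappa):.15e} s")
# All three agree (up to float rounding) with 2 * L / c.
```

That is why the argument above keeps circling back to needing two clocks: only a one-way timing, with a clock at each end, could ever see κ.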
If you move one clock very slowly away from the other, the error is minimised, perhaps even to a degree that allows for statistically significant measurements.
To cite the Wikipedia entry that one of the other commenters linked:
“The clocks can remain synchronized to an arbitrary accuracy by moving them sufficiently slowly. If it is taken that, if moved slowly, the clocks remain synchronized at all times, even when separated, this method can be used to synchronize two spatially separated clocks.”
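The “arbitrary accuracy by moving sufficiently slowly” claim in the quote can be sanity-checked numerically: in the isotropic case the transported clock’s rate differs by only a second-order term, v²/(2c²), so the offset accumulated over a fixed distance L is (v²/(2c²))·(L/v) = vL/(2c²), which shrinks linearly as the transport slows. A minimal sketch, with a made-up lab distance:

```python
# Accumulated desynchronization of a clock moved a distance L at speed v,
# assuming an isotropic speed of light: the time dilation factor is
# 1 - v^2/(2 c^2) (to second order), so the offset after the trip is
# (v^2 / (2 c^2)) * (L / v) = v * L / (2 * c^2)  ->  0 as v -> 0.

c = 299_792_458.0  # speed of light, m/s

def transport_offset_isotropic(L, v):
    """Clock offset (seconds) after moving distance L at speed v."""
    travel_time = L / v
    return (v**2 / (2 * c**2)) * travel_time

L = 1000.0  # metres, hypothetical distance between the two clocks
for v in (10.0, 1.0, 0.1):  # slower and slower transport
    print(f"v = {v:4.1f} m/s -> offset = {transport_offset_isotropic(L, v):.3e} s")
```

Halving the transport speed halves the residual error, so in the isotropic picture the two clocks really can be kept as synchronized as you like.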
One-Way Speed of Light
Except if you continue reading beyond your quote, it goes on to explain why that actually doesn’t help:
“Unfortunately, if the one-way speed of light is anisotropic, the correct time dilation factor becomes 1 − κv/c − v²/(2c²), with the anisotropy parameter κ between −1 and +1.[17] This introduces a new linear term (here −κv/c), meaning time dilation can no longer be ignored at small velocities, and slow clock-transport will fail to detect this anisotropy.”
And further down:
“Thus it is equivalent to Einstein synchronization.”
Yes, I understand that part, but it doesn’t disprove that such an experiment could show isotropy. Instead, it says that the experiment would always indicate isotropy, which is, of course, not entirely useful either. I’ll dig deeper into the publication behind that section when I have the time. Nonetheless, my original point still stands: with highly synchronised clocks, you could measure the (an)isotropy of the one-way speed of light. To determine whether the time dilation issue is surmountable, I’ll have to look at the actual research behind it.
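For what it’s worth, the quoted linear term is exactly why slowing down doesn’t help in the anisotropic case: −κv/c accumulated over the travel time L/v gives a fixed offset −κL/c, with the transport speed v cancelling out entirely. A sketch of that cancellation, using made-up values for the distance and κ:

```python
# With an anisotropic one-way speed, the dilation factor picks up a linear
# term -kappa * v / c. Accumulated over the travel time L / v, this term
# contributes (-kappa * v / c) * (L / v) = -kappa * L / c: the transport
# speed v drops out, so going slower never shrinks this part of the offset.

c = 299_792_458.0  # two-way speed of light, m/s

def anisotropic_linear_offset(L, v, kappa):
    """Offset (seconds) contributed by the linear term alone."""
    travel_time = L / v
    return -kappa * (v / c) * travel_time

L, kappa = 1000.0, 0.5  # hypothetical distance and anisotropy parameter
for v in (10.0, 1.0, 0.1):
    print(f"v = {v:4.1f} m/s -> linear-term offset = {anisotropic_linear_offset(L, v, kappa):.3e} s")
# The offset is the same for every v: slow transport cannot detect kappa.
```

And that v-independent offset is precisely the discrepancy an Einstein-synchronization convention would absorb, which is the sense in which the article calls the two methods equivalent.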