Best Practices · 8 min read · by syncopio Team

The NFSv4 Timestamp Bug: Why Your File Dates Don't Survive Migration

NFSv4 silently overwrites file timestamps during write-back processing, affecting rsync, cp -p, and every migration tool. Here's what's going on and what to do about it.

You copy 500 files to an NFS share. Sizes match. Hashes match. Your tool says timestamps were preserved. You move on.

Except the timestamps are wrong. Every single one. And nothing told you.

We found this during a routine migration test. The client-side stat showed the correct dates, the ones we’d carefully preserved from the source. But when we checked the same files directly on the NFS server, every modification time had been quietly replaced with the time of the transfer.

This isn’t a bug in rsync. It isn’t a bug in cp, tar, or any other tool. It’s baked into how NFSv4 handles file writes. Every copy tool on earth produces the same result when writing to an NFSv4 destination.

Who's affected?

Anyone copying files to an NFSv4 share where original file dates matter: backups, migrations, compliance archives, incremental syncs. If your destination mount uses NFSv4, your timestamps may already be wrong.


The one-line fix

Switch your destination mount to NFSv3:

mount -t nfs -o vers=3,rw,hard,noatime 192.168.1.100:/export /mnt/dest

That’s it. NFSv3 doesn’t have the mechanism that causes this. Problem gone.

If you’re running rsync, just remount the destination with vers=3.
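To make the v3 pin survive reboots, the matching /etc/fstab entry would look something like this (the server address and mount point are the same placeholders as in the mount example above; adjust them to your environment):

```shell
# /etc/fstab — destination mount pinned to NFSv3 for timestamp preservation
192.168.1.100:/export  /mnt/dest  nfs  vers=3,rw,hard,noatime  0  0
```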

syncopio advantage

syncopio handles this automatically. When timestamp preservation is on and the destination is NFS, it forces NFSv3 for the destination mount. Source mounts stay on NFSv4. You don’t touch a thing.

How to check if you’re affected

1. What NFS version is your destination using?

mount | grep nfs

If you see vers=4 or nfs4 on the destination, keep reading.

2. Are your timestamps actually correct?

Don’t check from the client. The client lies. It caches the old value and shows you what you expect to see.

Check from the server directly, or remount with noac to bypass the cache:

mount -o remount,noac /mnt/nfs-dest
stat /mnt/nfs-dest/somefile.txt

If the modification time shows the time of your last copy instead of the original date, you’ve been hit.


That’s everything you need to fix it. The rest of this post is the rabbit hole: how we found it, why it happens, and why even Linux 6.17 doesn’t help.


How we found it

We were testing an NFS-to-NFS migration: 505 files across 26 directories with timestamp preservation enabled. Transfer completed. Checksums matched. Client-side stat showed correct timestamps. Everything green.

Then we checked the NFS server. Every file had the wrong date. The modification times we’d set were gone, replaced with the exact time of the transfer.

Our first thought: we’re doing something wrong. Maybe we’re not flushing writes correctly before setting the timestamp. We fixed a real issue there (syncing the wrong file descriptor), but it didn’t solve the timestamp problem.

So we tried other tools:

rsync -a source.txt /mnt/nfs-dest/
cp -p source.txt /mnt/nfs-dest/
touch -r source.txt /mnt/nfs-dest/source.txt

Same result. Every tool. Correct timestamps on the client, wrong timestamps on the server.

That ruled out our code. This was protocol-level.

Why it happens

NFSv4 introduced a feature called delegations. When you open a file for writing, the server can hand your client exclusive control. “Here, you deal with it, just give it back when you’re done.” The client buffers writes locally instead of sending them to the server immediately. This is great for performance.

Here’s where it breaks:

  1. You write data to the file, which stays in client memory
  2. You close the file, and the client starts returning the delegation
  3. You set the timestamp to the original source time. The client sends SETATTR to the server, which sets the correct date
  4. The server processes the close and flushes those buffered writes
  5. Those write operations update the modification time to now

Step 5 undoes step 3. The server set the timestamp you asked for, then immediately overwrote it because the delayed writes arrived after your timestamp change.
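Minus the NFS layer, the sequence reduces to the same pattern every copy tool uses: write, close, then set the timestamp. This sketch uses a throwaway path under /tmp; on a local filesystem the explicit timestamp sticks, while on an NFSv4 mount the delayed write-back arriving after the last step is what bumps it back to "now":

```shell
# Write data, close, then set the mtime — the pattern behind cp -p and rsync.
# On a local filesystem the explicit timestamp survives; over NFSv4, buffered
# writes flushed after this point would overwrite it with the current time.
echo "payload" > /tmp/ts-demo.txt
touch -d "2020-01-01 00:00:00 UTC" /tmp/ts-demo.txt
stat -c %Y /tmp/ts-demo.txt   # epoch seconds for 2020-01-01 UTC
```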

This isn't a tool bug

rsync, cp, tar, Go, Python, the shell’s touch: they all hit the same wall. The NFSv4 protocol itself overwrites your timestamp as a side effect of delayed write processing. No amount of application-level code can prevent it.

Why you don’t notice

This is the worst part. NFS clients cache file attributes aggressively. When your tool sets a timestamp, the client updates its local cache with the value it just sent. Every subsequent stat call reads from cache, not from the server. So your verification passes. Your tool reports success. Everything looks fine.

The server has already overwritten the timestamp, but the client won’t know until the cache expires, typically 30 to 60 seconds later. By then, your tool has moved on to the next file.

Your verification step passes even though the data is wrong.

Why directories survive

Directories don’t have pending writes. No buffered data means no delayed flush means no timestamp overwrite. That’s why directory dates come through fine while file dates don’t, and why you might miss this entirely if you only spot-check folders.

Why NFSv3 works

NFSv3 has no delegations. Writes go straight to the server. When you set a timestamp after closing a file, there’s no deferred write-back racing to overwrite it. What you set is what you get.

We ran the same 505-file test suite over NFSv3:

  • 505 of 505 files: correct at server level
  • 26 of 26 directories: correct at server level

Zero failures. Same files, same code, same server. The only difference was the protocol version.

“Just upgrade the kernel”

You’d think a kernel update would fix this. There is a relevant patchset. Jeff Layton submitted an 8-patch series in July 2025 addressing delegated timestamp handling in the NFS server daemon:

nfsd: freeze c/mtime updates with outstanding WRITE_ATTRS delegation

But here’s the catch: that fix targets NFSv4.2+ attribute delegations, a newer feature where the server explicitly hands timestamp management to the client. That’s a different mechanism than the regular write delegations that cause our problem.

We tested on Linux 6.17 (Proxmox PVE kernel), with both client and server on the same host. The bug persists.

No kernel fixes this

The timestamp overwrite happens during regular NFSv4 write delegations, which have been the default since NFSv4 was introduced. The kernel patch fixes the newer NFSv4.2 attribute delegations, which are a different thing entirely. For regular write delegations, updating mtime on write is the intended protocol behavior. It’s not a bug. No kernel version changes it.

This isn’t going away. Every kernel ever shipped has this behavior, and every future kernel will too, because it’s how the protocol was designed. NFSv3 isn’t a temporary workaround. It’s the correct choice for write destinations where timestamps matter.

What to do

  1. Destination mounts: use NFSv3. Add vers=3 to your mount options. This is the fix. It’s reliable, it’s simple, and it works on every kernel.
  2. Source mounts: NFSv4 is fine. The bug only affects writes. Reading from NFSv4 works perfectly.
  3. Don’t trust client-side stat. If you need to verify timestamps, check the server directly or mount with noac.
  4. NFSv4 is still great for reads. Kerberos support, single-port firewall rules, delegation performance for read-heavy workloads. Just don’t use it for write destinations in metadata-sensitive workflows.
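For step 3, the spot check can be scripted. The helper below is a hypothetical sketch, not part of any tool mentioned here; run it against the source tree and a server-side (or noac-remounted) view of the destination, and it prints every file whose modification time differs:

```shell
# check_mtimes SRC DST — print files whose modification times differ
# between two directory trees. Hypothetical helper; compare the source
# against a server-side or noac-mounted view of the destination.
check_mtimes() {
  src=$1; dst=$2
  ( cd "$src" && find . -type f ) | while read -r f; do
    s=$(stat -c %Y "$src/$f")
    d=$(stat -c %Y "$dst/$f")
    [ "$s" = "$d" ] || echo "mismatch: $f (src=$s dst=$d)"
  done
}
```

Empty output means every file’s mtime matches; each printed line is a file that needs re-stamping.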

