Issue mounting worker PVC

I sometimes have issues spinning up new workers in AWS. The cluster creates the PVC, but when the worker starts I get the following error:

Events:
  Type     Reason                  Age   From                     Message
  ----     ------                  ----  ----                     -------
  Normal   Scheduled               54s   default-scheduler        Successfully assigned concourse/concourse-worker-4 to ip-10-41-93-202.ec2.internal
  Normal   SuccessfulAttachVolume  51s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-e57d1b14-e12c-4c2c-bf51-471a6c2f5f78"
  Warning  FailedMount             45s   kubelet                  MountVolume.MountDevice failed for volume "pvc-e57d1b14-e12c-4c2c-bf51-471a6c2f5f78" : failed to mount the volume as "ext4", it already contains unknown data, probably partitions. Mount error: mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1b/vol-0a0c3ecd94d835e08 --scope -- mount -t ext4 -o defaults /dev/nvme1n1 /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1b/vol-0a0c3ecd94d835e08
Output: Running scope as unit: run-r861069349e344adbbfa915567b3d8981.scope
mount: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/us-east-1b/vol-0a0c3ecd94d835e08: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.
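
Would it be worth inspecting the device directly on the node to see what's actually on it? I assume something like this (device name taken from the mount output above; I haven't run it yet) would show any stray filesystem signature or partition table:

  # On the node, inspect the attached EBS device (sketch, not yet run)
  lsblk -f /dev/nvme1n1      # show filesystem type and any partitions on the device
  sudo file -s /dev/nvme1n1  # identify the data at the start of the device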

Normally this goes away after a few rounds of deleting and recreating the PVC, but this time it's been several hours and the volume still can't be mounted. Any ideas on what I could try to resolve this? Running Concourse v6.4.1.
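
In case it matters, this is roughly the workaround I've been using (the PVC name is my assumption based on the chart's volumeClaimTemplates; pod name and namespace are from the events above):

  # Mark the PVC for deletion; it stays Terminating while the pod still holds it
  kubectl delete pvc concourse-work-dir-concourse-worker-4 -n concourse --wait=false
  # Delete the pod; the PVC is then removed, and the StatefulSet recreates both
  kubectl delete pod concourse-worker-4 -n concourse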