Geekzone: technology news, blogs, forums


Yoban

453 posts

Ultimate Geek
+1 received by user: 86


#316193 24-Sep-2024 12:20

Hi all,

 

Been running Unraid on bare metal for a number of years now (thanks GZers) and am now wanting to use the current hardware for my son's uni work and self-learning (Kubernetes etc.) to help with the job market.
So I have installed Proxmox and set up a VM for Unraid, which in general has been successful: it boots fine from USB and can see all the disks (HBA card with 8x HDD, 2x NVMe) via the PCIe passthrough option.
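For context, PCI passthrough in Proxmox ends up as hostpci lines in the VM config file; a rough sketch of what that looks like (the PCI addresses below are illustrative placeholders, not my actual ones):

```sh
# /etc/pve/qemu-server/<vmid>.conf - illustrative addresses only
hostpci0: 01:00.0    # HBA card
hostpci1: 02:00.0    # NVMe disk (Samsung)
hostpci2: 03:00.0    # NVMe disk (WD)
```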

 

My issue is with the three disks that are ZFS (2x NVMe as cache, 1x HDD in the array): all show the error message "ZFS Drives Unmountable: Unsupported or no file system". The issue does not seem to happen on the XFS disks. It appears the partition table on these disks is getting corrupted by the passthrough process, as I have seen the table type become "msdos".
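Roughly, that corruption can be checked from the Unraid console with something like the following (device names are examples only; substitute your own):

```sh
# show the partition table type (gpt vs msdos) - sdX / nvme0n1 are example names
parted /dev/sdX print

# check whether ZFS labels are still present on the partition
zdb -l /dev/sdX1
```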

 

I have read a number of posts on both the Proxmox and Unraid forums with little success in preventing this when I restart the Unraid VM. An Unraid forum post said it should not happen on 6.12.5 and higher... but I am running 6.12.6.

I have also tried different BIOSes too, but with no success.

I am wondering if I am passing through correctly - should I do a physical disk passthrough (Passthrough Physical Disk to Virtual Machine (VM) - Proxmox VE) for these disks instead? (That could be an issue for the HDD, which is on the HBA card, as I would need to pass through all of its disks separately.)
How can I prevent this? I have already had to reformat a couple of times.
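(For context on the disk-level option: the Proxmox wiki method maps a whole disk into the VM by its /dev/disk/by-id path, something like the sketch below - the VM ID and disk serial here are placeholders, not real values.)

```sh
# illustrative only: attach a whole physical disk to VM 100 as virtual device scsi1
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_MODEL_EXAMPLE_SERIAL
```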

Thanks in advance.

Paul

 

PCI Device 0 = HBA card, PCI Device 1 = NVMe disk (Samsung), PCI Device 2 = NVMe disk (WD)

 


GARBAGE
48 posts

Geek
+1 received by user: 13

ID Verified

  #3285800 24-Sep-2024 13:37

Can you show the output of `lspci` for me?
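Running it as `lspci -nnk` also shows the numeric vendor/device IDs and which kernel driver each device is bound to; on the Proxmox host, a device that has been claimed for passthrough would typically show vfio-pci:

```sh
lspci -nnk
# for a device claimed for passthrough you would expect a line like:
#   Kernel driver in use: vfio-pci
```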




Yoban

453 posts

Ultimate Geek
+1 received by user: 86


  #3285812 24-Sep-2024 13:48

GARBAGE:

 

Can you show the output of `lspci` for me?

 


From Proxmox:

 

From Unraid - seems I have "lost" the NVMe too now...



Yoban

453 posts

Ultimate Geek
+1 received by user: 86


  #3285817 24-Sep-2024 14:00

Well, I may have found the culprit - me being the noobie in this space.
I tweaked the VM settings in Proxmox to tick the "PCI-Express" box when attaching/passing the NVMe drives, and as you can see they are all present with ZFS and no errors. I also ticked the "All Functions" box too, in case that has helped.
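If I understand the mapping correctly (please double-check this), those two checkboxes just end up as extra options on the hostpci line in the VM config:

```sh
# /etc/pve/qemu-server/<vmid>.conf - illustrative
hostpci1: 0000:02:00,pcie=1
# "PCI-Express" checkbox  -> pcie=1 (requires the q35 machine type)
# "All Functions" checkbox -> address without a .0 function suffix,
#                             so all functions of the device are passed through
```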


 

Unraid view


My test will be to format the disk in the array back to ZFS and see what happens - likely I will need to do the same there.

Thoughts from the community? Should it be passed through at disk level?

