Home
  • ready-to-use and comfortable ZFS storage appliance for iSCSI/FC, NFS and SMB
  • Active Directory support with snaps as Previous Versions
  • user-friendly Web-GUI that includes all functions for a sophisticated NAS or SAN appliance
  • commercial use allowed
  • no capacity limit
  • free download for end users


  • individual support and consulting
  • increased GUI performance/ background agents
  • bugfixes/ updates/ access to bugfixes
  • extensions like comfortable ACL handling, disk and realtime monitoring or remote replication
  • appliance diskmap, security and tuning (Pro complete)
  • redistribution/bundling/setup on customer demand (optional)
Please request a quotation.
Details: Featuresheet.pdf

SSD as read cache (L2ARC)


Reading data from disk is slow, as disks allow only a few hundred up to a few thousand IO operations per second.
ZFS uses a very sophisticated read cache called ARC to improve performance for repeated reads. On large arrays or with many users, the cache hit rate may not be good enough (this can be checked with arcstat).
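Whether the ARC is large enough can be checked like this (a sketch; arcstat ships with OmniOS/OpenZFS, and the exact column names can differ between versions):

```shell
# Print reads, ARC hits/misses and the hit percentage once per second.
# A persistently low "hit%" on a busy pool suggests the ARC is too small.
arcstat 1

# On illumos/OmniOS the current ARC size and its maximum can also be
# read directly from the kernel statistics:
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max
```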

In such a case, you can add a fast SSD as an additional L2ARC to extend the ARC in size. In some cases this can increase read performance. But the SSD is much slower than RAM, and you need some RAM for the L2ARC index. In many if not all cases, more RAM is the better solution. Think about L2ARC SSDs only if you have already maxed out RAM.
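Adding and removing an L2ARC device is uncritical, as it holds no pool data. A sketch, where "tank" and the device id c2t1d0 are placeholders for your own pool and SSD:

```shell
# Add an SSD as L2ARC cache device to pool "tank":
zpool add tank cache c2t1d0

# Verify: the SSD appears under a "cache" section in the pool layout:
zpool status tank

# A cache device can be removed again at any time without data loss:
zpool remove tank c2t1d0
```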

SSD as dedicated logdevice (ZIL)


This is a feature for secure (sync) write only. On a default ZFS filer, all write operations are committed and collected in RAM for a few seconds, then written to disk; this improves performance by converting many small random writes into one large sequential write. On a crash or power outage, these buffered write operations are lost. This does not harm a Copy-on-Write filesystem like ZFS (it always stays consistent), but it affects applications that need secure transactions or virtual disks that contain older filesystems (e.g. ext4 or ntfs). In these cases, each committed write must really be on disk and safe. To ensure this, you can use secure sync write. This setting is a ZFS property.
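As sync is a per-dataset ZFS property, it is set with zfs set. A sketch, where "tank/vm" is a placeholder dataset:

```shell
# Show the current sync setting:
zfs get sync tank/vm

# Force secure sync write for every write on this dataset:
zfs set sync=always tank/vm

# Default behaviour: the application decides (sync only when requested):
zfs set sync=standard tank/vm

# Never sync; fast, but unsafe on a crash or power outage:
zfs set sync=disabled tank/vm
```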

When you enable sync, every single write operation is logged to a ZIL device. In parallel, the regular performance-oriented write mechanism stays active. Without a dedicated ZIL, the pool itself is used for logging (onpool ZIL), which means that all data must be written to the pool twice: once securely and slowly, once fast. This is because a ZIL is not a write cache but a logging device that is only read after a crash to redo the last writes not yet on disk.

If you use a dedicated ZIL device that is much faster than your regular pool, you can combine regular write performance with fast sync logging. As a ZIL must log only about twice the data that are delivered over your network between two disk flushes, capacity is uncritical (min. 8-16 GB).
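A dedicated ZIL (slog) is added to the pool as a log vdev. A sketch, where "tank" and the device ids are placeholders:

```shell
# Add a fast SSD as dedicated log device (slog):
zpool add tank log c3t0d0

# Alternatively, mirror the slog so a device failure combined with a
# crash cannot lose the last committed writes:
zpool add tank log mirror c3t0d0 c3t1d0

# The slog appears under a "logs" section:
zpool status tank
```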

Criteria for a good ZIL
- very low latency and high write IOPS/performance
- robustness against power outage (should have a battery backup or a powercap/supercapacitor)
- endurance for many writes

Perfectly suited ZIL devices:
- HGST ZeusRAM (DRAM-based, 3.5" SAS with 8 GB)
- HGST S840Z (2.5" SAS, ZIL-optimized SSD, 16 GB)
- Intel S3700 (regular enterprise SSD but quite affordable for a ZIL)


Do not use cheap consumer SSDs, SSDs that are not write optimized, or SSDs without a supercap.

Partitioning an SSD for ZIL and L2ARC is possible.
With regular SSDs, overprovision new SSDs, e.g. use only 40% of the capacity and block the rest with a host protected area or a partition.
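The partitioning approach can be sketched as follows (assumptions: a Linux-style parted example on a hypothetical 100 GB SSD /dev/sdb; on Solaris/OmniOS you would use format/fdisk instead, and all sizes and names are placeholders):

```shell
# Create a GPT label and two partitions; leave ~60% of the SSD
# unpartitioned as manual overprovisioning:
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart zil 1MiB 16GiB      # ~16 GB for the ZIL
parted /dev/sdb mkpart l2arc 16GiB 40GiB   # ~24 GB for the L2ARC

# Use the partitions as slog and cache device of pool "tank":
zpool add tank log /dev/sdb1
zpool add tank cache /dev/sdb2
```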

napp-it 27.12.2023