VM-Aware Storage for the Cloud

Managing HTFS takes a fraction of the time and cost of conventional storage, and allows organizations to guarantee the performance of their most critical applications.

Virtualization and Cloud Storage

Since businesses began relying on virtualized servers, they have needed a storage platform that can guarantee the performance of their infrastructure.

HTFS manages each virtual machine disk as its own unit of management, so Quality of Service can be adjusted individually to match performance requirements and expectations.

With a scale-out model, HTFS lets organizations start small and grow to petabytes of data while serving multiple hypervisors.

Benefits of VM-Aware Storage

HTFS manages each virtual machine disk individually, eliminating contention for resources across your network. As a result, system administrators can set the exact quality of service for each VM.

While performance is handled by QoS, HTFS also eliminates the need to manage LUNs and volumes: virtual disks become the base unit of management for your virtual machines.

As your infrastructure grows and more storage is required, you can simply add disks and the storage pool expands automatically, without adding any complexity.

Some Highlights

  • Manage multiple VM disks from a single pane of glass
  • Snapshots and clones are native functionality
  • Replicate any specific VM directly from the dashboard
  • Easy to scale infrastructure without adding complexity
  • Complete and detailed statistics of your storage layer

HTFS at Work

HTFS is a distributed, scalable, fault-tolerant, and highly available file system. It allows users to combine disk space located on many servers into a single namespace. HTFS keeps files safe by storing multiple replicas of all data spread over the available servers.
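To illustrate the idea of spreading replicas over available servers, here is a minimal sketch (not HTFS code; the `place_replicas` function and server records are hypothetical) that picks distinct servers for each replica, preferring those with the most free space:

```python
# Illustrative sketch, not HTFS internals: place N replicas of a chunk on
# distinct servers so the data survives the loss of any single server.
def place_replicas(servers, chunk_size, n_replicas=3):
    """Pick n_replicas distinct servers, preferring those with the most free space."""
    candidates = sorted(servers, key=lambda s: s["free"], reverse=True)
    chosen = [s for s in candidates if s["free"] >= chunk_size][:n_replicas]
    if len(chosen) < n_replicas:
        raise RuntimeError("not enough servers with free space for all replicas")
    for s in chosen:
        s["free"] -= chunk_size  # account for the space the new replica uses
    return [s["name"] for s in chosen]

servers = [{"name": "srv-a", "free": 500}, {"name": "srv-b", "free": 800},
           {"name": "srv-c", "free": 300}, {"name": "srv-d", "free": 700}]
print(place_replicas(servers, chunk_size=100))  # ['srv-b', 'srv-d', 'srv-a']
```

A real system also weighs factors such as rack placement and current load, but the principle is the same: no two replicas of the same data land on the same server.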

HTFS is used as the Software-Defined Storage (SDS) layer for HyperTask, allowing organizations to build affordable storage, because it runs on commodity hardware.

Disk and server failures are handled transparently, without any downtime or loss of data. If storage requirements grow, you can easily scale an existing HTFS installation just by adding new servers or individual disks, at any time and without any downtime. The system automatically moves some data to the newly added servers or disks, because it continuously balances disk usage across all connected nodes. Removing servers or disks is as easy as adding new ones.
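The balancing behavior can be sketched in a few lines. This is a simplified model (not the HTFS algorithm; node names and chunk counts are invented): when a new, empty node joins, chunks migrate from the fullest node to the emptiest until usage evens out.

```python
# Illustrative sketch, not HTFS internals: even out disk usage after a
# new (empty) node joins the pool, one chunk move at a time.
def rebalance(usage, tolerance=1):
    """Move chunks from the fullest to the emptiest node until the
    spread between nodes is within `tolerance` chunks."""
    moves = []
    while max(usage.values()) - min(usage.values()) > tolerance:
        src = max(usage, key=usage.get)   # fullest node
        dst = min(usage, key=usage.get)   # emptiest node
        usage[src] -= 1
        usage[dst] += 1
        moves.append((src, dst))
    return moves

usage = {"node1": 10, "node2": 9, "node3": 0}  # node3 was just added
moves = rebalance(usage)
print(usage)  # each node now holds within one chunk of the others
```

In production this runs continuously in the background, so capacity added at any time is absorbed without downtime.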

Performance of the system scales linearly with the number of disks, so adding a new data server not only increases available capacity but also the overall performance of the storage.


Copying large files and directories (e.g., virtual machines) can be done efficiently with the HTFS snapshot feature.
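The reason snapshots make large copies cheap is copy-on-write: the "copy" initially shares the original's data blocks and only diverges when blocks are rewritten. A minimal sketch of the idea (not the HTFS implementation; the `CowFile` class is hypothetical):

```python
# Illustrative sketch, not HTFS internals: a copy-on-write snapshot copies
# only the block *map* (metadata); the data blocks themselves stay shared
# until one side overwrites them.
class CowFile:
    def __init__(self, blocks):
        self.blocks = blocks  # block index -> data bytes

    def snapshot(self):
        # Cheap: duplicates the small map of block references,
        # not the (potentially huge) data blocks.
        return CowFile(dict(self.blocks))

    def write(self, index, data):
        # Rebinds only this file's entry; any snapshot keeps the old block.
        self.blocks[index] = data

vm_disk = CowFile({0: b"boot", 1: b"data"})
snap = vm_disk.snapshot()       # instant "copy" of a large VM disk
vm_disk.write(1, b"new-data")   # only now does block 1 diverge
print(snap.blocks[1], vm_disk.blocks[1])  # b'data' b'new-data'
```

This is why snapshotting a multi-gigabyte virtual machine completes in an instant: no data is duplicated until it actually changes.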

Quality of Service

HTFS offers mechanisms that allow system administrators to set limits on read/write operations, bandwidth, and more for all the traffic generated by a given VM.
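Bandwidth limits of this kind are commonly enforced with a token bucket: each VM earns "tokens" at its configured rate and each I/O spends them, which caps sustained throughput while still allowing short bursts. A minimal sketch under those assumptions (not the HTFS QoS engine; names and parameters are invented):

```python
# Illustrative sketch, not HTFS internals: a token bucket capping the
# bandwidth a single VM's traffic may consume.
class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s   # sustained limit
        self.capacity = burst_bytes    # maximum burst size
        self.tokens = burst_bytes      # start with a full bucket
        self.last = 0.0

    def allow(self, nbytes, now):
        # Refill tokens for the elapsed time, then spend them if available.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # caller should queue or throttle this I/O

vm_limit = TokenBucket(rate_bytes_per_s=100, burst_bytes=200)
print(vm_limit.allow(150, now=0.0))  # True  (within the burst allowance)
print(vm_limit.allow(100, now=0.0))  # False (only 50 tokens remain)
print(vm_limit.allow(100, now=1.0))  # True  (1 s refills 100 tokens -> 150)
```

One bucket per VM disk gives exactly the per-VM isolation described above: a noisy neighbor exhausts its own tokens without touching anyone else's budget.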

Watch a Demo Now!

See for yourself how OCH and HyperTask can transform your IT infrastructure.