
Article ID: 118711, created on Nov 18, 2013, last review on May 6, 2014


The pstorage mount point exists, but any attempt to access it hangs.

Commands such as df or ls /pstorage/<clustername> hang indefinitely, and the terminal must be reset before it can be used again.
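
A hung FUSE mount blocks even a simple stat() call, so probing it with a plain ls freezes the shell. One safer way to check responsiveness (a sketch; substitute the actual cluster name for <clustername>) is to wrap the probe in timeout(1):

```shell
# Probe the mount point without risking a hung shell: timeout(1) kills
# the ls if the mount does not answer within 5 seconds.
timeout 5 ls "/pstorage/<clustername>" || echo "mount is not responding"
```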

I/O errors similar to the following appear in dmesg:

    [1173444.675118] fuse_aio_complete_req: request (rw=WRITE fh=0x7f52cc01bfa0 pos=163258368 size=32768) completed with err=-103
    [1173444.675125] kaio_rw_aio_complete: kaio failed with err=-103 (rw=WRITE; state=1/0x0; clu=7; iblk=155; aux=-1)
    [1173444.675128]  bio=ffff881001dbb680: bi_sector=15760 bi_size=4096
    [1173444.675155] Buffer I/O error on device ploop45253p1, logical block 1714
    [1173444.675158] lost page write due to I/O error on ploop45253p1

The node has many processes in uninterruptible sleep (D state).
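
Processes blocked on I/O against the dead mount sit in uninterruptible sleep and cannot be killed until the mount recovers. A quick way to list them (a generic sketch, not specific to pstorage):

```shell
# Print the header plus every process whose state starts with D
# (uninterruptible sleep); wchan shows the kernel function it waits in.
ps -eo pid,stat,wchan:32,comm | awk 'NR == 1 || $2 ~ /^D/'
```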

Containers log ext4 errors, and their filesystems are remounted read-only.

How do I bring the mount back to life?


The pstorage-mount process has crashed.

The root cause varies; check the exact errors recorded in /var/log/pstorage/<clustername>/pstorage-mount.log.gz.


The following procedure restores the mount point while preserving the consistency of the filesystems inside the containers:

1) Stop the Parallels Cloud Server services:

    # service vz stop
    # service parallels-server stop

2) Find the processes that use the pstorage mount point and try to kill them gracefully:

    # for pid in `lsof 2>/dev/null | grep ' /pstorage/<clustername>/' | awk '{print $2}'` ; do kill -SIGINT $pid ; done

If some processes do not react to graceful termination, kill them forcibly:

    # for pid in `lsof 2>/dev/null | grep ' /pstorage/<clustername>/' | awk '{print $2}'` ; do kill -SIGKILL $pid ; done
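
Before proceeding, confirm that nothing still holds files under the mount point; the following check (same <clustername> placeholder as above) should print no PIDs:

```shell
# List any remaining PIDs with open files under the pstorage mount point;
# empty output means it is safe to restart pstorage-fs.
lsof 2>/dev/null | grep ' /pstorage/<clustername>/' | awk '{print $2}' | sort -u
```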

3) Restart the pstorage-fs service:

    # service pstorage-fs restart
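
After the restart, confirm that the mount point answers again before starting the containers; wrapping the check in timeout(1) keeps the shell usable if the mount is still dead (substitute the real cluster name for <clustername>):

```shell
# df issues a statfs() against the mount; if it returns within the timeout,
# the FUSE daemon is serving requests again.
timeout 10 df -h "/pstorage/<clustername>" && echo "mount is responsive"
```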

4) Start the Parallels Cloud Server services again:

    # service vz start
    # service parallels-server start
