I just upgraded from 17.2.6 to 18.2.1 and have some issues with mds. After

  handle_mds_map state change up:clientreplay -> up:active

the mds crashed with an assertion:

  In function 'CInode* Server::prepare_new_inode(MDRequestRef&, CDir*, inodeno_t, unsigned int, const file_layout_t*)' thread 7f7176d7f6c0 time ...
  3441: FAILED ceph_assert(_inode->gid != (unsigned)-1)

and I could not bring it back again. Then I started the 18 mds again, which soon after startup finds this corruption. There are a few corrupted files in some other directories (leftovers from several releases before that I never managed to fix), and if I start an mds scrub there, the mds crashes again, maybe because of corrupted lost+found. If I try to remove lost+found, the mds crashes again. Do you have any hint how to recover from this?

Andrej Filipcic
Department of Experimental High Energy Physics - F9
Jozef Stefan Institute, Jamova 39, P.O. Box 3000
E-mail: Andrej.Filipcic(a)ijs.si

---

We are running rook-ceph deployed as an operator in Kubernetes with Rook. It is working fine, but we are seeing frequent OSD daemon crashes every 3-4 days, after which the OSDs restart without any problem. We are also seeing flapping OSDs, i.e. OSDs repeatedly marked down and coming back up. Recently a daemon crash happened for 2 OSDs at the same time on different nodes.
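For both of the reports above, a first triage step is to pull the recorded crash reports and any MDS damage entries out of the cluster before attempting repairs. The following is a sketch only, using standard Ceph CLI tooling: "cephfs" is a placeholder filesystem name, and <crash-id> must be substituted from the "ceph crash ls" output.

```shell
# Sketch: first-pass triage for recurring OSD/MDS daemon crashes.
# Assumes the ceph CLI can reach the cluster (e.g. via the rook-ceph
# toolbox pod in a Rook deployment).

ceph crash ls                      # list archived daemon crash reports
ceph crash info <crash-id>         # full backtrace for one crash report

# CephFS-specific: what damage has rank 0 already recorded?
ceph tell mds.cephfs:0 damage ls

# Read-only recursive scrub from the root (omit "repair" while the
# cause of the crashes is still unknown).
ceph tell mds.cephfs:0 scrub start / recursive
```

Inspecting the backtraces first helps distinguish a single recurring assertion (as in the prepare_new_inode crash) from unrelated failures across daemons.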