Leapp can fail when there are too many LV partitions
Environment
- Red Hat Enterprise Linux 7.9
- Red Hat Satellite 6.11
- In-place upgrade (IPU) from RHEL 7 to RHEL 8
- leapp
Issue
Leapp fails with the following traceback:
File "/usr/lib/python2.7/site-packages/leapp/utils/audit/__init__.py", line 60, in create_connection
return _initialize_database(sqlite3.connect(path))
OperationalError: unable to open database file
Resolution
- Check the Diagnostic Steps section first.
- Clean up the system:
  # for mp in `mount | awk '/leapp.scratch/ {print $1}' | grep scratch`; do umount -vl $mp; done
  # rm -rf /var/lib/leapp/*
- Try to comment out in /etc/fstab any partition that is not required for the in-place upgrade (typically /opt, /app, ...). You can re-activate those entries afterwards, once the OS has been upgraded.
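The fstab edit above can be sketched as follows. This is a minimal illustration run against a generated sample file: the device names and the /opt entry are made up, and on a real host you would back up /etc/fstab and apply the same sed to it after reviewing the change.

```shell
# Build a sample fstab (illustrative entries only).
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/rhel-root /     xfs defaults 0 0
/dev/mapper/rhel-opt  /opt  xfs defaults 0 0
EOF

# Prefix the /opt entry with '#' so it is skipped during the in-place upgrade.
# Adjust the '/opt' pattern for each mount point you want to disable.
sed -i 's|^\([^#].*[[:space:]]/opt[[:space:]]\)|#\1|' "$fstab"
cat "$fstab"
```

After the upgrade, removing the leading `#` restores the entry.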
- If the issue persists, check the solution article "Why does leapp preupgrade fail with sqlite3.OperationalError: unable to open database file traceback error?"
Root Cause
This issue has been observed with more than 30 logical volumes (LVs) activated at the same time. It is tracked in the following bugzilla: Bug 2143277 - Leapp can fail when there are too many LV partitions.
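To relate a system to this root cause, the number of active LVs can be counted from the `lvs` attribute string, whose 5th character is 'a' when the LV is active. The sketch below feeds sample attribute strings so it is self-contained; on a real host you would pipe `lvs --noheadings -o lv_attr` into the same awk filter.

```shell
# Count active LVs: the 5th character of lv_attr is 'a' when active.
# Real-host form: lvs --noheadings -o lv_attr | awk 'substr($1,5,1)=="a"' | wc -l
# The printf below stands in for real `lvs` output (sample data).
active=$(printf '%s\n' '-wi-ao----' '-wi-ao----' '-wi-------' \
  | awk 'substr($1,5,1)=="a"' | wc -l)
echo "active LVs: $active"
```

More than 30 active LVs at upgrade time matches the failure condition described above.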
Diagnostic Steps
- Check that you have the same traceback as the one mentioned in the "Issue" section.
- Check whether you have more than 30 LVs in /etc/fstab.
- Check whether the partitions in /var/lib/leapp/scratch are still mounted, using mount.
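The fstab count above can be sketched as follows. It runs against a generated sample fstab so it is self-contained; on the real host, point the `fstab` variable at /etc/fstab instead. The threshold of 30 comes from the Root Cause section.

```shell
# Generate a sample fstab with 32 illustrative entries plus a comment.
fstab=$(mktemp)
for i in $(seq 1 32); do
  echo "/dev/vg0/lv$i /mnt/lv$i xfs defaults 0 0" >> "$fstab"
done
echo '# a comment line, ignored by the count' >> "$fstab"

# Count non-comment, non-blank entries.
lv_count=$(grep -cEv '^[[:space:]]*(#|$)' "$fstab")
echo "fstab entries: $lv_count"
if [ "$lv_count" -gt 30 ]; then
  echo "more than 30 entries: matches the known failure condition"
fi

# On the real host, also confirm whether leapp scratch partitions remain mounted:
#   mount | grep leapp.scratch
```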
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.