Upgrade to Red Hat Satellite 6.10 fails with message 'ERROR: at least one Erratum record has migrated_pulp3_href NULL value'
Environment
- Red Hat Satellite 6.9
Issue
- Red Hat Satellite upgrade to 6.10 from Satellite 6.9.8 or below fails at the content-switchover state with the following error message:

```
Switch support for certain content from Pulp 2 to Pulp 3:
Performing final content migration before switching content
Checking for valid Katello configuraton.
Starting task.
2022-01-04 10:40:39 -0700: Importing migrated yum repositories: 141/224
Content Migration completed successfully
Performing a check to verify everything that is needed has been migrated
Switching specified content over to pulp 3                            [FAIL]
Failed executing foreman-rake katello:pulp3_content_switchover, exit status 1:
ERROR: at least one Erratum record has migrated_pulp3_href NULL value
Checking for valid Katello configuraton.
--------------------------------------------------------------------------------
Scenario [Migration scripts to Satellite 6.10] failed.
The following steps ended up in failing state:

  [content-switchover]

Resolve the failed steps and rerun the command.
In case the failures are false positives, use --whitelist="content-switchover"
```
- Alternatively, the content migration attempt in Red Hat Satellite 6.9.9 sometimes fails with the following error:

```
Starting task.
2022-05-26 11:38:05 +0200: Importing migrated yum repositories: 981/984
...
Repositories with the following IDs and names have unmigrated errata:
4748, Red Hat Enterprise Linux 7 Server - Extras RPMs x86_64
...
Resync these repositories and then run 'reimport_all=true foreman-maintain content prepare'.
```
Resolution
This issue has been resolved via Errata RHSA-2022:5498.
To install the bug fixes, it is recommended to upgrade to the latest minor release of Red Hat Satellite 6.9 and then retry the content migration process.
If some errata are still not migrated afterwards, either of the following solutions should help complete the migration.
Solution 1:
- Identify the repository IDs related to the non-migrated errata content. An example is given below:

```
# echo "Katello::RepositoryErratum.where(erratum_pulp3_href: nil).pluck(:repository_id).uniq" | foreman-rake console
...
...
[40, 409, 390]
```
- Execute the following steps in sequence, as displayed below, to manually reimport the errata content.

```
# systemctl start pulpcore-resource-manager pulpcore-api pulpcore-content pulpcore-worker@1 pulpcore-worker@2 pulpcore-worker@3 pulpcore-worker@4
# foreman-rake console
```

  Let the console load and then execute the following commands one by one:

```
repos = Katello::Repository.where(id: Katello::RepositoryErratum.where(erratum_pulp3_href: nil).pluck(:repository_id).uniq)
default_view_repos, other_repos = repos.partition { |repo| repo.library_instance? }
ccv_repos, cv_repos = other_repos.partition { |repo| repo.content_view.composite? }

default_view_repos.each { |repo| repo.index_content }
cv_repos.sort_by { |repo| repo.archive? ? 0 : 1 }.each { |repo| repo.index_content }
ccv_repos.sort_by { |repo| repo.archive? ? 0 : 1 }.each { |repo| repo.index_content }

Katello::Pulp3::Migration.class_eval do
  def update_import_status(message, index = nil)
    if index.nil? || index % 20 == 0
      p message
    end
  end
end

migration = Katello::Pulp3::Migration.new(SmartProxy.pulp_primary, :reimport_all => true)
migration.import_errata
pp Katello::RepositoryErratum.where(erratum_pulp3_href: nil).size
exit
```

  Finally, stop the Pulp services again:

```
# systemctl stop pulpcore-resource-manager pulpcore-api pulpcore-content pulpcore-worker@1 pulpcore-worker@2 pulpcore-worker@3 pulpcore-worker@4
```

  The output of `Katello::RepositoryErratum.where(erratum_pulp3_href: nil).size` is expected to come back `0`, which indicates that all errata were migrated and imported.
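The ordering of those `index_content` calls matters: library repositories are indexed first, then plain content-view repositories, then composite content-view repositories, with archived versions indexed before published ones within each group. A minimal pure-Python sketch of that partition-and-sort logic, using a hypothetical `Repo` stand-in rather than the real `Katello::Repository` model:

```python
from dataclasses import dataclass

@dataclass
class Repo:  # hypothetical stand-in for Katello::Repository
    name: str
    library_instance: bool  # belongs to the Library (default view)?
    composite: bool         # belongs to a composite content view?
    archive: bool           # archived content view version?

repos = [
    Repo("ccv-pub", library_instance=False, composite=True,  archive=False),
    Repo("lib",     library_instance=True,  composite=False, archive=False),
    Repo("cv-arch", library_instance=False, composite=False, archive=True),
    Repo("cv-pub",  library_instance=False, composite=False, archive=False),
]

# Partition: library repos first, then plain CVs, then composite CVs
default_view = [r for r in repos if r.library_instance]
others = [r for r in repos if not r.library_instance]
ccv = [r for r in others if r.composite]
cv = [r for r in others if not r.composite]

# Within each content-view group, archived versions come before published ones
order = (default_view
         + sorted(cv, key=lambda r: 0 if r.archive else 1)
         + sorted(ccv, key=lambda r: 0 if r.archive else 1))
print([r.name for r in order])  # -> ['lib', 'cv-arch', 'cv-pub', 'ccv-pub']
```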
Retry the content migration and re-verify the status.
```
# satellite-maintain prep-6.10-upgrade
# satellite-maintain content prepare
# satellite-maintain content migration-stats
```
Solution 2: [To be attempted if Solution 1 fails to completely migrate all errata content]
- Run the following script on the Satellite to find the partially pre-migrated content and fix it:

```
# PULP_SETTINGS=/etc/pulp/settings.py pulpcore-manager shell

from collections import namedtuple

from pulp_2to3_migration.app.models import Pulp2Content, MigrationPlan
from pulp_2to3_migration.pulp2 import connection

connection.initialize()
plan = MigrationPlan.objects.all().first()
plugins = list(pg for pg in plan.get_plugin_plans() if pg.type == 'rpm')
plugin = plugins[0]

for content_type in plugin.migrator.pulp2_content_models:
    ContentModel = namedtuple('ContentModel', ['pulp2', 'pulp_2to3_detail'])
    pulp2_content_model = plugin.migrator.pulp2_content_models[content_type]
    pulp_2to3_detail_model = plugin.migrator.content_models[content_type]
    content_model = ContentModel(pulp2=pulp2_content_model, pulp_2to3_detail=pulp_2to3_detail_model)
    saved_pulp2content_ids = set(content_model.pulp_2to3_detail.objects.only('pulp2content_id').values_list('pulp2content_id', flat=True))
    unmigrated_pulp2contents = Pulp2Content.objects.filter(pulp3_content_id=None, pulp2_content_type_id=content_type)
    missing_pulp2contents = []
    for unmigrated_pulp2content in unmigrated_pulp2contents:
        if unmigrated_pulp2content.pulp_id not in saved_pulp2content_ids:
            missing_pulp2contents.append(unmigrated_pulp2content)
    total_missing = len(missing_pulp2contents)
    print("Found unsaved %s: %s" % (content_type, total_missing))
    if total_missing > 0:
        print("Saving content type: %s" % content_type)
        content_model.pulp_2to3_detail.pre_migrate_content_detail(missing_pulp2contents)
        print("Done")

exit()
```

  Example output of the script:

```
Found unsaved rpm: 0
Found unsaved srpm: 0
Found unsaved distribution: 0
Found unsaved erratum: 87757
### Saving content type: erratum ###
Found unsaved modulemd: 0
Found unsaved modulemd_defaults: 0
Found unsaved yum_repo_metadata_file: 0
Found unsaved package_langpacks: 0
Found unsaved package_group: 0
Found unsaved package_category: 0
Found unsaved package_environment: 0
```

  As visible in the example, the script was able to find some unsaved errata.
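Stripped of the Django ORM, the core of the script above is a set difference: a content unit is "unsaved" when it is unmigrated in `Pulp2Content` and has no pre-migrated detail record. An illustrative pure-Python sketch with made-up IDs:

```python
# IDs already pre-migrated into the pulp_2to3 detail table (made-up sample)
saved_ids = {"a1", "a2", "a3"}

# IDs of Pulp 2 units that have no Pulp 3 counterpart yet (made-up sample)
unmigrated_ids = ["a2", "a3", "a4", "a5"]

# "Unsaved" units are unmigrated AND missing a detail record
missing = [pid for pid in unmigrated_ids if pid not in saved_ids]
print("Found unsaved erratum: %s" % len(missing))  # -> Found unsaved erratum: 2
```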
- If the script output shows that only "Found unsaved erratum" is greater than 0, as displayed in the example, then run the following to reimport only the errata.

```
# foreman-rake console

Katello::Pulp3::Migration.class_eval do
  def update_import_status(message, index = nil)
    if index.nil? || index % 20 == 0
      p message
    end
  end
end

conf.echo = false
migration = Katello::Pulp3::Migration.new(SmartProxy.pulp_primary, :reimport_all => true)
migration.import_errata
pp Katello::RepositoryErratum.where(erratum_pulp3_href: nil).size
exit
```

  The output of `Katello::RepositoryErratum.where(erratum_pulp3_href: nil).size` is expected to come back `0`, which indicates that all errata were migrated and imported.
- If the script displayed some other unsaved content types as well, apart from errata, then run the following command to migrate and reimport all content.

```
# reimport_all=true satellite-maintain content prepare
```
- Verify the status of the migration and confirm whether all errata content has now been migrated.

```
# satellite-maintain content migration-stats

# echo "select katello_repositories.id, katello_repositories.relative_path, count(*) from katello_repository_errata \
left join katello_repositories on katello_repositories.id = katello_repository_errata.repository_id \
where erratum_pulp3_href is null group by katello_repositories.id;" | su - postgres -c "psql foreman"

# echo "Katello::RepositoryErratum.where(erratum_pulp3_href: nil).pluck(:repository_id).uniq" | foreman-rake console
```
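Comparing large Migrated/Total counts by eye is error-prone. The per-type shortfall can be computed from the stats output with a small helper; this is an unofficial sketch that assumes the `Migrated/Total <type>: X/Y` line format shown in the Diagnostic Steps section of this article:

```python
import re

def unmigrated_counts(stats_output):
    """Parse 'Migrated/Total <type>: X/Y' lines and return the
    per-type shortfall (Total - Migrated)."""
    result = {}
    for m in re.finditer(r"Migrated/Total (\w+): (\d+)/(\d+)", stats_output):
        ctype, migrated, total = m.group(1), int(m.group(2)), int(m.group(3))
        result[ctype] = total - migrated
    return result

# Sample taken from the Diagnostic Steps example below
sample = """Migrated/Total RPMs: 350834/350837
Migrated/Total errata: 1550596/1550599
Migrated/Total repositories: 2171/2171"""

print(unmigrated_counts(sample))  # -> {'RPMs': 3, 'errata': 3, 'repositories': 0}
```

Any non-zero value means that content type still needs reimporting before the switchover can succeed.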
- If the command output of `satellite-maintain content migration-stats` reflects that the Migrated errata count is less than the Total errata count, then:

  - Open up a support case with Red Hat Technical Support.
  - Collect a fresh sosreport from the affected Satellite server and upload it on the support case.
  - Attach the output of all the commands executed earlier for Red Hat Support to review.

- Otherwise, if everything is properly migrated, proceed with the upgrade of the Red Hat Satellite server to 6.10.
Solution 3: [To be attempted if Solution 2 didn't find any unsaved errata]
- Run the following command to investigate the unmigrated errata:

```
# foreman-rake console << EORAKE | tee unmigrated_errata_repo_map.txt
conf.echo = false;
pp Katello::RepositoryErratum.where(erratum_pulp3_href: nil).each_with_object(Hash.new { |k,v| k[v] = [] }) { |re,o| o[re.erratum.errata_id] << re.repository.version_href };
exit;
EORAKE
```
- Open the `unmigrated_errata_repo_map.txt` file created by the script above. If you see `/rpm/repo_uuid/versions/0/` (a repository with version zero, i.e. an empty repository) similar to the output below, it means that for some reason many content units were not migrated, or they were migrated but not associated with the Pulp 3 repositories.

```
"RHBA-2016:1214"=>
  ["/pulp/api/v3/repositories/rpm/rpm/6c54f4c5-4b31-4adb-9406-03eea58ea708/versions/0/",
   "/pulp/api/v3/repositories/rpm/rpm/49a40023-4e3a-4826-bf6b-9cea591e4ca2/versions/0/",
   "/pulp/api/v3/repositories/rpm/rpm/9fb0224b-4046-4a42-8edf-287eaca3595c/versions/0/"],
"RHBA-2017:2856"=>
  ["/pulp/api/v3/repositories/rpm/rpm/a50a292c-aa6b-46b6-91b8-3865988edb58/versions/0/",
   "/pulp/api/v3/repositories/rpm/rpm/a50a292c-aa6b-46b6-91b8-3865988edb58/versions/0/",
   "/pulp/api/v3/repositories/rpm/rpm/a50a292c-aa6b-46b6-91b8-3865988edb58/versions/0/",
   "/pulp/api/v3/repositories/rpm/rpm/a50a292c-aa6b-46b6-91b8-3865988edb58/versions/0/"],
"RHEA-2017:3210"=>
  ["/pulp/api/v3/repositories/rpm/rpm/a50a292c-aa6b-46b6-91b8-3865988edb58/versions/0/",
   "/pulp/api/v3/repositories/rpm/rpm/a50a292c-aa6b-46b6-91b8-3865988edb58/versions/0/",
   "/pulp/api/v3/repositories/rpm/rpm/a50a292c-aa6b-46b6-91b8-3865988edb58/versions/0/",
   "/pulp/api/v3/repositories/rpm/rpm/9fc0ceb5-ee5a-4e4e-830b-1c8045a7dbcc/versions/0/"]
```
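Scanning a large map by eye is tedious; since an empty repository always has a version href ending in `/versions/0/`, the affected entries can be filtered programmatically once the map is parsed. An illustrative sketch (the data is adapted from the example above; the second href is hypothetical and non-empty):

```python
# Parsed copy of the erratum -> version_href map (sample data only)
errata_map = {
    "RHBA-2016:1214": [
        "/pulp/api/v3/repositories/rpm/rpm/6c54f4c5-4b31-4adb-9406-03eea58ea708/versions/0/",
        "/pulp/api/v3/repositories/rpm/rpm/49a40023-4e3a-4826-bf6b-9cea591e4ca2/versions/3/",
    ],
}

# Keep only hrefs that point at a zero-version (empty) repository
zero_version = {
    errata_id: [href for href in hrefs if href.endswith("/versions/0/")]
    for errata_id, hrefs in errata_map.items()
}
print(zero_version)
```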
- If you are hitting this issue, the best way to solve it is to reset the migration and start from the beginning, in order to ensure the integrity of the migrated data.

```
# satellite-maintain content migration-reset
# reimport_all=true satellite-maintain content prepare
```
- It is still unknown exactly how this issue occurs. One possible reason is that the migration process was interrupted several times in the middle, due to errors or user intervention, leaving the migration data in a bad state.
NOTE: There could be multiple underlying reasons behind this issue and it's best to proceed further with the guidance from Red Hat Technical Support.
For more KB articles/solutions related to Red Hat Satellite 6.x Installation/Upgrade/Update Issues, please refer to the Red Hat Satellite Consolidated Troubleshooting Article for Red Hat Satellite 6.x Installation/Upgrade/Update Issues.
For more KB articles/solutions related to Red Hat Satellite 6.x Pulp 3.0 Issues, please refer to the Consolidated Troubleshooting Article for Red Hat Satellite 6.x Pulp 3.0-related Issues.
Root Cause
- The errata which were not migrated will have no `pulp3_href` value set either, and because of this the content-switchover step fails during the upgrade process.

- The Postgres queries mentioned in this article help determine which Repo/CV/LCE are related to those un-migrated errata.
Diagnostic Steps
- Identify whether all errata and RPMs have been migrated or not.

```
# satellite-maintain content migration-stats | grep ^Migrated
Migrated/Total RPMs: 350834/350837           ---> Not all RPMs are migrated; could be related to a MISSING/CORRUPTED RPMs issue
Migrated/Total errata: 1550596/1550599       ---> Exactly 3 errata are not migrated
Migrated/Total repositories: 2171/2171
```
- Verify that the count of non-migrated errata matches the count of errata with no `pulp3_href` value.

```
# echo "select count(*) from katello_repository_errata where erratum_pulp3_href is null;" | su - postgres -c "psql foreman"
 count
-------
     3
(1 row)
```
- Identify the source (Repo/CV/LCE) of those un-migrated errata.

```
# echo "select katello_repositories.id, katello_repositories.relative_path, count(*) from katello_repository_errata \
left join katello_repositories on katello_repositories.id = katello_repository_errata.repository_id \
where erratum_pulp3_href is null group by katello_repositories.id;" | su - postgres -c "psql foreman"

  id  |                                 relative_path                                  | count
------+--------------------------------------------------------------------------------+-------
  105 | Egaming/Library/custom/Extra_Packages_for_Enterprise_Linux/EPEL_7Server_x86_64 |     2
 6777 | Egaming/Library/custom/Extra_Packages_for_Enterprise_Linux/EPEL_8Server_x86_64 |     1
(2 rows)
```
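The SQL `GROUP BY` above can be mirrored in plain Python when the `(repository_id, erratum_pulp3_href)` rows are already in hand, for example after exporting the join to CSV. An illustrative sketch with made-up rows:

```python
from collections import Counter

# (repository_id, erratum_pulp3_href) rows, as returned by the join above
# (made-up sample; the non-NULL href is hypothetical)
rows = [
    (105, None),
    (105, None),
    (6777, None),
    (42, "/pulp/api/v3/content/rpm/advisories/0001/"),
]

# Equivalent of: WHERE erratum_pulp3_href IS NULL GROUP BY repository_id
per_repo = Counter(repo_id for repo_id, href in rows if href is None)
print(dict(per_repo))  # -> {105: 2, 6777: 1}
```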
This solution is part of Red Hat’s fast-track publication program, providing a huge library of solutions that Red Hat engineers have created while supporting our customers. To give you the knowledge you need the instant it becomes available, these articles may be presented in a raw and unedited form.