Commit f5d5e816 authored by David Hildenbrand, committed by Greg Kroah-Hartman

nouveau/svm: fix missing folio unlock + put after make_device_exclusive_range()


[ Upstream commit b3fefbb30a1691533cb905006b69b2a474660744 ]

In case we have to retry the loop, we fail to unlock and put the
folio. Subsequent calls to make_device_exclusive_range() then keep
failing because the folio lock can no longer be taken, and we even
return from the function with the folio still locked and referenced,
so make_device_exclusive_range() can effectively never succeed.
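
To make the retry contract concrete, here is a minimal sketch of the
fixed loop shape. It is simplified from nouveau_atomic_range_fault();
the timeout check and the mmap/notifier bookkeeping around
make_device_exclusive_range() are elided, so treat it as illustrative
rather than verbatim driver code:

	while (true) {
		/* Returns the page locked and referenced on success. */
		ret = make_device_exclusive_range(mm, start, start + PAGE_SIZE,
						  &page, drm->dev);
		if (ret <= 0 || !page) {
			ret = -EINVAL;
			goto out;
		}
		folio = page_folio(page);

		mutex_lock(&svmm->mutex);
		if (!mmu_interval_read_retry(&notifier->notifier,
					     notifier_seq))
			break;	/* Success: exit with folio locked + referenced. */
		mutex_unlock(&svmm->mutex);

		/*
		 * Retry path: drop the folio lock and reference taken by
		 * make_device_exclusive_range(), otherwise the next attempt
		 * can never take the folio lock again (the bug fixed here).
		 */
		folio_unlock(folio);
		folio_put(folio);
	}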

While at it, convert the other unlock+put to use a folio as well.

This was found by code inspection.

Fixes: 8f187163 ("nouveau/svm: implement atomic SVM access")
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Alistair Popple <apopple@nvidia.com>
Tested-by: Alistair Popple <apopple@nvidia.com>
Signed-off-by: Danilo Krummrich <dakr@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20250124181524.3584236-2-david@redhat.com


Signed-off-by: Sasha Levin <sashal@kernel.org>
parent 179831a6
@@ -590,6 +590,7 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
 	unsigned long timeout =
 		jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
 	struct mm_struct *mm = svmm->notifier.mm;
+	struct folio *folio;
 	struct page *page;
 	unsigned long start = args->p.addr;
 	unsigned long notifier_seq;
@@ -616,12 +617,16 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
 			ret = -EINVAL;
 			goto out;
 		}
+		folio = page_folio(page);
 
 		mutex_lock(&svmm->mutex);
 		if (!mmu_interval_read_retry(&notifier->notifier,
 					     notifier_seq))
 			break;
 		mutex_unlock(&svmm->mutex);
+
+		folio_unlock(folio);
+		folio_put(folio);
 	}
 
 	/* Map the page on the GPU. */
@@ -637,8 +642,8 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
 	ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args, size, NULL);
 	mutex_unlock(&svmm->mutex);
 
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 
 out:
 	mmu_interval_notifier_remove(&notifier->notifier);