Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

dma-buf: Add dma_buf_attach_revocable()

Some exporters need a flow to synchronously revoke importer access to the
DMA-buf. Once revoke completes, importers are no longer permitted to
touch the memory; otherwise they may trigger IOMMU faults, AERs, or worse.

DMA-buf today defines a revoke flow, for both pinned and dynamic
importers, which is broadly:

dma_resv_lock(dmabuf->resv, NULL);
// Prevent new mappings from being established
priv->revoked = true;

// Tell all importers to eventually unmap
dma_buf_invalidate_mappings(dmabuf);

// Wait for any in-progress fences on the old mapping
dma_resv_wait_timeout(dmabuf->resv,
                      DMA_RESV_USAGE_BOOKKEEP, false,
                      MAX_SCHEDULE_TIMEOUT);
dma_resv_unlock(dmabuf->resv);

// Wait for all importers to complete unmap
wait_for_completion(&priv->unmapped_comp);

This works well, and an importer that continues to access the DMA-buf
after unmapping it is very buggy.

However, the final wait for unmap is effectively unbounded. Several
importers do not support invalidate_mappings() at all and won't unmap
until userspace triggers it.

This unbounded wait is not suitable for exporters like VFIO and RDMA that
need to issue revoke as part of their normal operations.

Add dma_buf_attach_revocable() to allow exporters to determine the
difference between importers that can complete the above in bounded time,
and those that can't. It can be called inside the exporter's attach op to
reject incompatible importers.

Document these details about how dma_buf_invalidate_mappings() works and
what the required sequence is to achieve a full revocation.

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20260131-dmabuf-revoke-v7-6-463d956bd527@nvidia.com

Authored by Leon Romanovsky, committed by Christian König
be6d4c9e 575157b1

+50 -7
+47 -1
drivers/dma-buf/dma-buf.c
···
 EXPORT_SYMBOL_NS_GPL(dma_buf_unmap_attachment_unlocked, "DMA_BUF");

 /**
+ * dma_buf_attach_revocable - check if a DMA-buf importer implements
+ * revoke semantics.
+ * @attach: the DMA-buf attachment to check
+ *
+ * Returns true if the DMA-buf importer can support the revoke sequence
+ * explained in dma_buf_invalidate_mappings() within bounded time, meaning
+ * the importer implements invalidate_mappings() and ensures that unmap is
+ * called as a result.
+ */
+bool dma_buf_attach_revocable(struct dma_buf_attachment *attach)
+{
+	return attach->importer_ops &&
+	       attach->importer_ops->invalidate_mappings;
+}
+EXPORT_SYMBOL_NS_GPL(dma_buf_attach_revocable, "DMA_BUF");
+
+/**
  * dma_buf_invalidate_mappings - notify attachments that DMA-buf is moving
  *
  * @dmabuf: [in] buffer which is moving
  *
  * Informs all attachments that they need to destroy and recreate all their
- * mappings.
+ * mappings. If the attachment is dynamic then the dynamic importer is
+ * expected to invalidate any caches it has of the mapping result and perform
+ * a new mapping request before allowing HW to do any further DMA.
+ *
+ * If the attachment is pinned then this informs the pinned importer that the
+ * underlying mapping is no longer available. Pinned importers may take this
+ * as a permanent revocation and never establish new mappings, so exporters
+ * should not trigger it lightly.
+ *
+ * Upon return importers may continue to access the DMA-buf memory. The
+ * caller must do two additional waits to ensure that the memory is no longer
+ * being accessed:
+ * 1) Until dma_resv_wait_timeout() retires fences the importer is allowed to
+ *    fully access the memory.
+ * 2) Until the importer calls unmap it is allowed to speculatively
+ *    read-and-discard the memory. It must not write to the memory.
+ *
+ * A caller wishing to use dma_buf_invalidate_mappings() to fully stop access
+ * to the DMA-buf must wait for both. Dynamic callers can often use just the
+ * first.
+ *
+ * All importers providing an invalidate_mappings() op must ensure that unmap
+ * is called within bounded time after the op.
+ *
+ * Pinned importers that do not support an invalidate_mappings() op will
+ * eventually perform unmap when they are done with the buffer, which may be
+ * an unbounded time from calling this function. dma_buf_attach_revocable()
+ * can be used to prevent such importers from attaching.
+ *
+ * Importers are free to request a new mapping in parallel as this function
+ * returns.
  */
 void dma_buf_invalidate_mappings(struct dma_buf *dmabuf)
 {
+3 -6
include/linux/dma-buf.h
···
 	 * called with this lock held as well. This makes sure that no mapping
 	 * is created concurrently with an ongoing move operation.
 	 *
-	 * Mappings stay valid and are not directly affected by this callback.
-	 * But the DMA-buf can now be in a different physical location, so all
-	 * mappings should be destroyed and re-created as soon as possible.
-	 *
-	 * New mappings can be created after this callback returns, and will
-	 * point to the new location of the DMA-buf.
+	 * See the kdoc for dma_buf_invalidate_mappings() for details on the
+	 * required behavior.
 	 */
 	void (*invalidate_mappings)(struct dma_buf_attachment *attach);
 };
···
 void dma_buf_unmap_attachment(struct dma_buf_attachment *, struct sg_table *,
 			      enum dma_data_direction);
 void dma_buf_invalidate_mappings(struct dma_buf *dma_buf);
+bool dma_buf_attach_revocable(struct dma_buf_attachment *attach);
 int dma_buf_begin_cpu_access(struct dma_buf *dma_buf,
 			     enum dma_data_direction dir);
 int dma_buf_end_cpu_access(struct dma_buf *dma_buf,