# drivers/iommu/Kconfig
# IOMMU_API always gets selected by whoever wants it.
config IOMMU_API
bool
menuconfig IOMMU_SUPPORT
bool "IOMMU Hardware Support"
depends on MMU
default y
---help---
Say Y here if you want to compile device drivers for IO Memory
Management Units into the kernel. These devices usually allow
remapping of DMA requests and/or interrupts from other devices on
the system.
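#
# The options below build on the kernel's IOMMU API (include/linux/iommu.h).
# As a rough illustration of what "remapping DMA requests" means at the API
# level, a consumer allocates a domain, attaches a device to it and maps an
# I/O virtual address onto a physical address. A minimal sketch, not taken
# from any driver in this directory; example_map_one_page, my_dev and
# my_phys are made-up names and error handling is trimmed:
#
#	#include <linux/iommu.h>
#	#include <linux/platform_device.h>
#
#	static int example_map_one_page(struct device *my_dev, phys_addr_t my_phys)
#	{
#		struct iommu_domain *dom;
#		int ret;
#
#		/* one domain == one I/O address space */
#		dom = iommu_domain_alloc(&platform_bus_type);
#		if (!dom)
#			return -ENOMEM;
#
#		/* route my_dev's DMA through this domain ... */
#		ret = iommu_attach_device(dom, my_dev);
#		if (!ret)
#			/* ... and make IOVA 0x10000000 reach my_phys */
#			ret = iommu_map(dom, 0x10000000, my_phys, PAGE_SIZE,
#					IOMMU_READ | IOMMU_WRITE);
#		return ret;
#	}
#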
if IOMMU_SUPPORT
menu "Generic IOMMU Pagetable Support"
# Selected by the actual pagetable implementations
config IOMMU_IO_PGTABLE
bool
config IOMMU_IO_PGTABLE_LPAE
bool "ARMv7/v8 Long Descriptor Format"
select IOMMU_IO_PGTABLE
depends on ARM || ARM64 || COMPILE_TEST
help
Enable support for the ARM long descriptor pagetable format.
This allocator supports 4K/2M/1G, 16K/32M and 64K/512M page
sizes at both stage-1 and stage-2, as well as address spaces
up to 48-bits in size.
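#
# For reference, an IOMMU driver picks this allocator up through the private
# drivers/iommu/io-pgtable.h interface. A hedged sketch of stage-1 use of the
# 64-bit format; example_alloc_lpae is a made-up name and my_tlb_ops /
# my_cookie stand in for the driver's TLB callbacks and private data:
#
#	#include <linux/sizes.h>
#	#include "io-pgtable.h"		/* private header in drivers/iommu/ */
#
#	static struct io_pgtable_ops *
#	example_alloc_lpae(const struct iommu_gather_ops *my_tlb_ops, void *my_cookie)
#	{
#		struct io_pgtable_cfg cfg = {
#			.pgsize_bitmap	= SZ_4K | SZ_2M | SZ_1G, /* one supported set */
#			.ias		= 48,	/* input (IOVA) address bits */
#			.oas		= 48,	/* output (physical) address bits */
#			.tlb		= my_tlb_ops,
#		};
#
#		/*
#		 * A real driver keeps cfg around afterwards: the allocator
#		 * fills in the values needed to program the hardware TTBRs.
#		 */
#		return alloc_io_pgtable_ops(ARM_64_LPAE_S1, &cfg, my_cookie);
#	}
#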
config IOMMU_IO_PGTABLE_LPAE_SELFTEST
bool "LPAE selftests"
depends on IOMMU_IO_PGTABLE_LPAE
help
Enable self-tests for the LPAE page table allocator. This performs
a series of page-table consistency checks during boot.
If unsure, say N here.
endmenu
config IOMMU_IOVA
bool
config OF_IOMMU
def_bool y
depends on OF && IOMMU_API
config FSL_PAMU
bool "Freescale IOMMU support"
depends on PPC32
depends on PPC_E500MC || COMPILE_TEST
select IOMMU_API
select GENERIC_ALLOCATOR
help
Freescale PAMU support. PAMU is the IOMMU present on Freescale QorIQ platforms.
PAMU can authorize memory access, remap the memory address, and remap I/O
transaction types.
# MSM IOMMU support
config MSM_IOMMU
bool "MSM IOMMU Support"
depends on ARM
depends on ARCH_MSM8X60 || ARCH_MSM8960 || COMPILE_TEST
depends on BROKEN
select IOMMU_API
help
Support for the IOMMUs found on certain Qualcomm SOCs.
These IOMMUs allow virtualization of the address space used by most
cores within the multimedia subsystem.
If unsure, say N here.
config IOMMU_PGTABLES_L2
def_bool y
depends on MSM_IOMMU && MMU && SMP && CPU_DCACHE_DISABLE=n
# AMD IOMMU support
config AMD_IOMMU
bool "AMD IOMMU support"
select SWIOTLB
select PCI_MSI
select PCI_ATS
select PCI_PRI
select PCI_PASID
select IOMMU_API
depends on X86_64 && PCI && ACPI
---help---
With this option you can enable support for AMD IOMMU hardware in
your system. An IOMMU is a hardware component which provides
remapping of DMA memory accesses from devices. With an AMD IOMMU you
can isolate the DMA memory of different devices and protect the
system from misbehaving device drivers or hardware.
You can find out if your system has an AMD IOMMU if you look into
your BIOS for an option to enable it or if you have an IVRS ACPI
table.
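#
# On a running system the IVRS table mentioned above is also exported through
# sysfs, so a quick userspace check is possible. A small sketch; the path is
# the standard ACPI table location, nothing AMD-specific is assumed:
#
#	#include <stdio.h>
#	#include <unistd.h>
#
#	int main(void)
#	{
#		/* firmware ACPI tables are exposed under /sys/firmware/acpi/tables/ */
#		if (access("/sys/firmware/acpi/tables/IVRS", F_OK) == 0)
#			printf("IVRS table present - firmware advertises an AMD IOMMU\n");
#		else
#			printf("no IVRS table found\n");
#		return 0;
#	}
#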
config AMD_IOMMU_STATS
bool "Export AMD IOMMU statistics to debugfs"
depends on AMD_IOMMU
select DEBUG_FS
---help---
This option enables code in the AMD IOMMU driver to collect various
statistics about what's happening in the driver and exports that
information to userspace via debugfs.
If unsure, say N.
config AMD_IOMMU_V2
tristate "AMD IOMMU Version 2 driver"
depends on AMD_IOMMU
select MMU_NOTIFIER
---help---
This option enables support for the AMD IOMMUv2 features of the IOMMU
hardware. Select this option if you want to use devices that support
the PCI PRI and PASID interface.
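#
# Device drivers reach these features through the helpers exported by this
# module and declared in include/linux/amd-iommu.h. A hedged sketch of the
# usual call sequence; example_bind_current is a made-up name, the PASID
# count is an arbitrary example, and signatures may differ between versions:
#
#	#include <linux/pci.h>
#	#include <linux/sched.h>
#	#include <linux/amd-iommu.h>
#
#	static int example_bind_current(struct pci_dev *my_pdev, int pasid)
#	{
#		int ret;
#
#		/* enable the IOMMUv2 (ATS/PRI/PASID) state for the device */
#		ret = amd_iommu_init_device(my_pdev, 16 /* max PASIDs */);
#		if (ret)
#			return ret;
#
#		/* let the device fault in pages of the current process */
#		return amd_iommu_bind_pasid(my_pdev, pasid, current);
#	}
#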
# Intel IOMMU support
config DMAR_TABLE
bool
config INTEL_IOMMU
bool "Support for Intel IOMMU using DMA Remapping Devices"
depends on PCI_MSI && ACPI && (X86 || IA64_GENERIC)
select IOMMU_API
select IOMMU_IOVA
select DMAR_TABLE
help
DMA remapping (DMAR) device support enables independent address
translations for Direct Memory Access (DMA) from devices.
These DMA remapping devices are reported via ACPI tables, along
with the PCI device scope covered by each of them.
config INTEL_IOMMU_DEFAULT_ON
def_bool y
prompt "Enable Intel DMA Remapping Devices by default"
depends on INTEL_IOMMU
help
Selecting this option will enable a DMAR device at boot time if
one is found. If this option is not selected, DMAR support can
be enabled by passing intel_iommu=on to the kernel.
config INTEL_IOMMU_BROKEN_GFX_WA
bool "Workaround broken graphics drivers (going away soon)"
depends on INTEL_IOMMU && BROKEN && X86
---help---
Current graphics drivers tend to use physical addresses
for DMA and avoid using DMA APIs. Setting this config
option permits the IOMMU driver to set a unity map for
all the OS-visible memory. Hence the driver can continue
to use physical addresses for DMA, at least until this
option is removed in the 2.6.32 kernel.
config INTEL_IOMMU_FLOPPY_WA
def_bool y
depends on INTEL_IOMMU && X86
---help---
Floppy disk drivers are known to bypass DMA API calls
thereby failing to work when IOMMU is enabled. This
workaround will setup a 1:1 mapping for the first
16MiB to make floppy (an ISA device) work.
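#
# A "1:1 mapping" here means the I/O virtual address equals the physical
# address, so DMA that bypasses the DMA API still reaches the intended
# memory. Conceptually (an illustration only, not the driver's actual code;
# 'domain' stands for an already initialised struct iommu_domain pointer):
#
#	unsigned long addr;
#
#	/* identity-map the first 16 MiB in 4 KiB steps */
#	for (addr = 0; addr < SZ_16M; addr += SZ_4K)
#		iommu_map(domain, addr, addr, SZ_4K, IOMMU_READ | IOMMU_WRITE);
#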
config IRQ_REMAP
bool "Support for Interrupt Remapping"
depends on X86_64 && X86_IO_APIC && PCI_MSI && ACPI
select DMAR_TABLE
---help---
Supports Interrupt remapping for IO-APIC and MSI devices.
To use x2apic mode in CPUs which support x2APIC enhancements, or
to support platforms with CPUs having > 8-bit APIC IDs, say Y.
# OMAP IOMMU support
config OMAP_IOMMU
bool "OMAP IOMMU Support"
depends on ARM && MMU
depends on ARCH_OMAP2PLUS || COMPILE_TEST
select IOMMU_API
config OMAP_IOMMU_DEBUG
bool "Export OMAP IOMMU internals in DebugFS"
depends on OMAP_IOMMU && DEBUG_FS
---help---
Select this to see extensive information about
the internal state of OMAP IOMMU in debugfs.
Say N unless you know you need this.
config ROCKCHIP_IOMMU
bool "Rockchip IOMMU Support"
depends on ARM
depends on ARCH_ROCKCHIP || COMPILE_TEST
select IOMMU_API
select ARM_DMA_USE_IOMMU
help
Support for IOMMUs found on Rockchip rk32xx SOCs.
These IOMMUs allow virtualization of the address space used by most
cores within the multimedia subsystem.
Say Y here if you are using a Rockchip SoC that includes an IOMMU
device.
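#
# Because this option selects ARM_DMA_USE_IOMMU, master devices are normally
# wired up through the ARM DMA-mapping glue rather than the raw IOMMU API.
# A hedged sketch of that path; example_setup_iova_space and my_master_dev
# are made-up names, and the base address and size are arbitrary examples:
#
#	#include <linux/err.h>
#	#include <linux/sizes.h>
#	#include <linux/platform_device.h>
#	#include <asm/dma-iommu.h>
#
#	static int example_setup_iova_space(struct device *my_master_dev)
#	{
#		struct dma_iommu_mapping *mapping;
#
#		/* a 256 MiB I/O virtual address space starting at 0x10000000 ... */
#		mapping = arm_iommu_create_mapping(&platform_bus_type,
#						   0x10000000, SZ_256M);
#		if (IS_ERR(mapping))
#			return PTR_ERR(mapping);
#
#		/* ... through which my_master_dev's dma_map_*() calls are routed */
#		return arm_iommu_attach_device(my_master_dev, mapping);
#	}
#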
config TEGRA_IOMMU_GART
bool "Tegra GART IOMMU Support"
depends on ARCH_TEGRA_2x_SOC
select IOMMU_API
help
Enables support for remapping discontiguous physical memory
shared with the operating system into contiguous I/O virtual
space through the GART (Graphics Address Relocation Table)
hardware included on Tegra SoCs.
config TEGRA_IOMMU_SMMU
bool "NVIDIA Tegra SMMU Support"
depends on ARCH_TEGRA
depends on TEGRA_AHB
depends on TEGRA_MC
select IOMMU_API
help
This driver supports the IOMMU hardware (SMMU) found on NVIDIA Tegra
SoCs (Tegra30 up to Tegra124).
config EXYNOS_IOMMU
bool "Exynos IOMMU Support"
depends on ARCH_EXYNOS && ARM && MMU
select IOMMU_API
select ARM_DMA_USE_IOMMU
help
Support for the IOMMU (System MMU) of Samsung Exynos application
processor family. This enables H/W multimedia accelerators to see
non-linear physical memory chunks as linear memory in their
address space.
If unsure, say N here.
config EXYNOS_IOMMU_DEBUG
bool "Debugging log for Exynos IOMMU"
depends on EXYNOS_IOMMU
help
Select this to see the detailed log message that shows what
happens in the IOMMU driver.
Say N unless you need kernel log messages for IOMMU debugging.
config SHMOBILE_IPMMU
bool
config SHMOBILE_IPMMU_TLB
bool
config SHMOBILE_IOMMU
bool "IOMMU for Renesas IPMMU/IPMMUI"
default n
depends on ARM && MMU
depends on ARCH_SHMOBILE || COMPILE_TEST
select IOMMU_API
select ARM_DMA_USE_IOMMU
select SHMOBILE_IPMMU
select SHMOBILE_IPMMU_TLB
help
Support for Renesas IPMMU/IPMMUI. This option enables
remapping of DMA memory accesses from all of the IP blocks
on the ICB.
Warning: Drivers (including userspace drivers of UIO
devices) of the IP blocks on the ICB *must* use addresses
allocated from the IPMMU (iova) for DMA with this option
enabled.
If unsure, say N.
choice
prompt "IPMMU/IPMMUI address space size"
default SHMOBILE_IOMMU_ADDRSIZE_2048MB
depends on SHMOBILE_IOMMU
help
This option sets IPMMU/IPMMUI address space size by
adjusting the 1st level page table size. The page table size
is calculated as follows:
page table size = number of page table entries * 4 bytes
number of page table entries = address space size / 1 MiB
For example, when the address space size is 2048 MiB, the
1st level page table size is 8192 bytes.
config SHMOBILE_IOMMU_ADDRSIZE_2048MB
bool "2 GiB"
config SHMOBILE_IOMMU_ADDRSIZE_1024MB
bool "1 GiB"
config SHMOBILE_IOMMU_ADDRSIZE_512MB
bool "512 MiB"
config SHMOBILE_IOMMU_ADDRSIZE_256MB
bool "256 MiB"
config SHMOBILE_IOMMU_ADDRSIZE_128MB
bool "128 MiB"
config SHMOBILE_IOMMU_ADDRSIZE_64MB
bool "64 MiB"
config SHMOBILE_IOMMU_ADDRSIZE_32MB
bool "32 MiB"
endchoice
config SHMOBILE_IOMMU_L1SIZE
int
default 8192 if SHMOBILE_IOMMU_ADDRSIZE_2048MB
default 4096 if SHMOBILE_IOMMU_ADDRSIZE_1024MB
default 2048 if SHMOBILE_IOMMU_ADDRSIZE_512MB
default 1024 if SHMOBILE_IOMMU_ADDRSIZE_256MB
default 512 if SHMOBILE_IOMMU_ADDRSIZE_128MB
default 256 if SHMOBILE_IOMMU_ADDRSIZE_64MB
default 128 if SHMOBILE_IOMMU_ADDRSIZE_32MB
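#
# The defaults above follow directly from the formula in the choice help:
# one 4-byte entry per MiB of address space. A quick, standalone check of
# that arithmetic (plain userspace C, nothing kernel-specific assumed):
#
#	#include <stdio.h>
#
#	int main(void)
#	{
#		unsigned int sizes_mib[] = { 2048, 1024, 512, 256, 128, 64, 32 };
#		unsigned int i;
#
#		for (i = 0; i < sizeof(sizes_mib) / sizeof(sizes_mib[0]); i++)
#			/* entries = size / 1 MiB, table = entries * 4 bytes */
#			printf("%4u MiB -> %5u bytes of L1 table\n",
#			       sizes_mib[i], sizes_mib[i] * 4);
#		return 0;
#	}
#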
config IPMMU_VMSA
bool "Renesas VMSA-compatible IPMMU"
depends on ARM_LPAE
depends on ARCH_SHMOBILE || COMPILE_TEST
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU
help
Support for the Renesas VMSA-compatible IPMMU found in the
R-Mobile APE6 and R-Car H2/M2 SoCs.
If unsure, say N.
config SPAPR_TCE_IOMMU
bool "sPAPR TCE IOMMU Support"
depends on PPC_POWERNV || PPC_PSERIES
select IOMMU_API
help
Enables the bits of the IOMMU API required by VFIO. The iommu_ops
callbacks are not implemented, as they are not necessary for VFIO.
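#
# The userspace counterpart is the VFIO_SPAPR_TCE_IOMMU backend. A hedged
# sketch that only checks whether that backend is available, using the
# standard VFIO ioctls from <linux/vfio.h>:
#
#	#include <fcntl.h>
#	#include <stdio.h>
#	#include <sys/ioctl.h>
#	#include <linux/vfio.h>
#
#	int main(void)
#	{
#		int container = open("/dev/vfio/vfio", O_RDWR);
#
#		if (container < 0)
#			return 1;
#		if (ioctl(container, VFIO_GET_API_VERSION) == VFIO_API_VERSION &&
#		    ioctl(container, VFIO_CHECK_EXTENSION, VFIO_SPAPR_TCE_IOMMU))
#			printf("sPAPR TCE IOMMU backend available\n");
#		return 0;
#	}
#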
config ARM_SMMU
bool "ARM Ltd. System MMU (SMMU) Support"
depends on (ARM64 || ARM) && MMU
select IOMMU_API
select IOMMU_IO_PGTABLE_LPAE
select ARM_DMA_USE_IOMMU if ARM
help
Support for implementations of the ARM System MMU architecture
versions 1 and 2.
Say Y here if your SoC includes an IOMMU device implementing
the ARM SMMU architecture.
endif # IOMMU_SUPPORT