Pull irq domain updates from Thomas Gleixner:
"The real interesting irq updates:
- Support for hierarchical irq domains:
For complex interrupt routing scenarios where more than one
interrupt related chip is involved we had no proper representation
in the generic interrupt infrastructure so far. That made people
implement rather ugly constructs in their nested irq chip
implementations. The main offenders are x86 and arm/gic.
To disentangle that mess we now have hierarchical irqdomains which
separate the various interrupt chips and connect them via the
hierarchical domains. That keeps the domain-specific details internal
to the particular hierarchy level and removes the criss-cross
referencing of chip internals. The resulting hierarchy for a complex
x86 system will look like this:
vector                mapped: 74
  msi-0               mapped: 2
  dmar-ir-1           mapped: 69
    ioapic-1          mapped: 4
    ioapic-0          mapped: 20
    pci-msi-2         mapped: 45
  dmar-ir-0           mapped: 3
    ioapic-2          mapped: 1
    pci-msi-1         mapped: 2
  htirq               mapped: 0
Neither ioapic nor pci-msi know about the dmar interrupt remapping
between themselves and the vector domain. If interrupt remapping is
disabled, ioapic and pci-msi become direct children of the vector
domain. (A minimal allocation sketch of the new hierarchy API follows
this summary.)
In hindsight we should have done that years ago, but in hindsight
we always know better :)
- Support for generic MSI interrupt domain handling
We have more and more non-PCI-related MSI interrupts, so providing a
generic infrastructure for this is better than having all affected
architectures implement their own private hacks.
- Support for PCI-MSI interrupt domain handling, based on the generic
MSI support.
This part carries the pci/msi branch from Bjorn Helgaas' pci tree to
avoid a massive conflict. The PCI/MSI parts are acked by Bjorn.
I have two more branches on top of this: the full conversion of x86
to hierarchical domains and a partial conversion of arm/gic"
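To make the stacking described above concrete, here is a minimal sketch of a
child level built on the new helpers mentioned in the shortlog below
(irq_domain_add_hierarchy(), irq_domain_alloc_irqs_parent() and the
irq_chip_*_parent() callbacks). The chip, ops and hwirq handling are
illustrative, not code from the tree:

#include <linux/irq.h>
#include <linux/irqdomain.h>

/* Illustrative chip: mask/unmask/eoi are simply forwarded to the parent. */
static struct irq_chip my_chip = {
        .name           = "my-chip",
        .irq_mask       = irq_chip_mask_parent,
        .irq_unmask     = irq_chip_unmask_parent,
        .irq_eoi        = irq_chip_eoi_parent,
};

static int my_domain_alloc(struct irq_domain *domain, unsigned int virq,
                           unsigned int nr_irqs, void *arg)
{
        int i, ret;

        /* Let the parent level (e.g. the vector domain) allocate first. */
        ret = irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, arg);
        if (ret < 0)
                return ret;

        /* Then install this level's chip. Deriving the hwirq is chip
         * specific; the linear index is used here only for brevity. */
        for (i = 0; i < nr_irqs; i++)
                irq_domain_set_hwirq_and_chip(domain, virq + i, i,
                                              &my_chip, NULL);
        return 0;
}

static const struct irq_domain_ops my_domain_ops = {
        .alloc  = my_domain_alloc,
        .free   = irq_domain_free_irqs_common,
};

/* Hook this level in underneath 'parent'. */
static struct irq_domain *my_domain_create(struct device_node *node,
                                           struct irq_domain *parent)
{
        return irq_domain_add_hierarchy(parent, 0, 0, node,
                                        &my_domain_ops, NULL);
}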
* 'irq-irqdomain-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (41 commits)
genirq: Move irq_chip_write_msi_msg() helper to core
PCI/MSI: Allow an msi_controller to be associated to an irq domain
PCI/MSI: Provide mechanism to alloc/free MSI/MSIX interrupt from irqdomain
PCI/MSI: Enhance core to support hierarchy irqdomain
PCI/MSI: Move cached entry functions to irq core
genirq: Provide default callbacks for msi_domain_ops
genirq: Introduce msi_domain_alloc/free_irqs()
asm-generic: Add msi.h
genirq: Add generic msi irq domain support
genirq: Introduce callback irq_chip.irq_write_msi_msg
genirq: Work around __irq_set_handler vs stacked domains ordering issues
irqdomain: Introduce helper function irq_domain_add_hierarchy()
irqdomain: Implement a method to automatically call parent domains alloc/free
genirq: Introduce helper irq_domain_set_info() to reduce duplicated code
genirq: Split out flow handler typedefs into seperate header file
genirq: Add IRQ_SET_MASK_OK_DONE to support stacked irqchip
genirq: Introduce irq_chip.irq_compose_msi_msg() to support stacked irqchip
genirq: Add more helper functions to support stacked irq_chip
genirq: Introduce helper functions to support stacked irq_chip
irqdomain: Do irq_find_mapping and set_type for hierarchy irqdomain in case OF
...
The memory controller on NVIDIA Tegra exposes various knobs that can be
used to tune the behaviour of the clients attached to it.
Currently this driver sets up the latency allowance registers to the HW
defaults. Eventually an API should be exported by this driver (via a
custom API or a generic subsystem) to allow clients to register latency
requirements.
This driver also registers an IOMMU (SMMU) that's implemented by the
memory controller. It is supported on Tegra30, Tegra114 and Tegra124
currently. Tegra20 has a GART instead.
The Tegra SMMU operates on memory clients and SWGROUPs. A memory client
is a unidirectional, special-purpose DMA master. A SWGROUP represents a
set of memory clients that form a logical functional unit corresponding
to a single device. Typically a device has two clients: one for read
transactions and one for write transactions, but there are also
devices that have only read clients, though possibly many of them
(such as the display controllers).
Because there is no 1:1 relationship between memory clients and
devices, the driver keeps a per-SoC table of memory clients and the
SWGROUPs that they belong to. Note that this is an exception, owing to
the fact that the SMMU is tightly integrated with the rest of the
Tegra SoC. The use of such tables is discouraged in drivers for
generic IOMMU devices such as the ARM SMMU, because the same IOMMU
could be used in any number of SoCs and keeping such tables for each
SoC would not scale.
Acked-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Thierry Reding <treding@nvidia.com>
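Purely as an illustration of such a per-SoC table (the structure, names and
values below are hypothetical, not the tegra-mc driver's actual definitions):

/* Hypothetical shape of a per-SoC client table; illustrative only. */
enum swgroup_id {
        SWGROUP_DC,             /* display controller */
        SWGROUP_SDMMC,          /* SD/MMC controller */
        /* ... */
};

struct mc_client_entry {
        unsigned int id;        /* hardware memory client ID */
        const char *name;       /* read or write client of a device */
        enum swgroup_id swgroup;/* logical unit the client belongs to */
};

static const struct mc_client_entry tegra124_mc_clients[] = {
        { .id = 1, .name = "display-read", .swgroup = SWGROUP_DC },
        { .id = 2, .name = "sdmmc-read",   .swgroup = SWGROUP_SDMMC },
        { .id = 3, .name = "sdmmc-write",  .swgroup = SWGROUP_SDMMC },
        /* ... one entry per memory client on this SoC ... */
};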
When some part of bus_set_iommu() fails, it should undo any changes
made and not simply leave everything as is.
This includes unregistering the bus notifier in iommu_bus_init when
add_iommu_group fails and also setting the bus->iommu_ops back to NULL.
Signed-off-by: Heiko Stuebner <heiko@sntech.de>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
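A simplified sketch of the intended unwind behaviour (not the exact iommu
core code):

#include <linux/device.h>
#include <linux/iommu.h>

static int iommu_bus_init(struct bus_type *bus, const struct iommu_ops *ops);

int bus_set_iommu_sketch(struct bus_type *bus, const struct iommu_ops *ops)
{
        int err;

        if (bus->iommu_ops)
                return -EBUSY;

        bus->iommu_ops = ops;

        /* iommu_bus_init() registers the bus notifier and adds existing
         * devices to groups; on failure it must unregister the notifier
         * again before returning. */
        err = iommu_bus_init(bus, ops);
        if (err)
                bus->iommu_ops = NULL;  /* don't leave a half-configured bus */

        return err;
}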
The IOMMU-API works on page boundaries, unlike the DMA-API which can
work with sub-page buffers. The sg->offset field does not make sense
at the IOMMU level, so force it to be 0. Do some error-path
consolidation while at it.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Mapping and unmapping are more often than not in the critical path.
map_sg allows IOMMU driver implementations to optimize the process
of mapping buffers into the IOMMU page tables.
Instead of mapping a buffer one page at a time and requiring potentially
expensive TLB operations for each page, this function allows the driver
to map all pages in one go and defer TLB maintenance until after all
pages have been mapped.
Additionally, the mapping operation would generally be faster, since
clients do not have to keep calling the map API over and over again
for each physically contiguous chunk of memory that needs to be
mapped to a virtually contiguous region.
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
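Roughly, a fallback map_sg implementation (a simplified sketch, not the
literal core helper, which additionally validates alignment against the
domain's minimum page size) maps each scatterlist entry in turn and tears
everything down on error:

#include <linux/io.h>
#include <linux/iommu.h>
#include <linux/scatterlist.h>

static size_t map_sg_sketch(struct iommu_domain *domain, unsigned long iova,
                            struct scatterlist *sg, unsigned int nents,
                            int prot)
{
        struct scatterlist *s;
        size_t mapped = 0;
        unsigned int i;

        for_each_sg(sg, s, nents, i) {
                /* sg->offset is ignored: the IOMMU level is page granular */
                phys_addr_t phys = page_to_phys(sg_page(s));

                if (iommu_map(domain, iova + mapped, phys, s->length, prot))
                        goto undo;
                mapped += s->length;
        }

        return mapped;

undo:
        /* Unmap everything mapped so far and report failure */
        iommu_unmap(domain, iova, mapped);
        return 0;
}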
Merge tag 'iommu-updates-v3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU updates from Joerg Roedel:
"This pull-request includes:
- change in the IOMMU-API to convert the former iommu_domain_capable
function to just iommu_capable
- various fixes in handling RMRR ranges for the VT-d driver (one fix
requires a device driver core change which was acked by Greg KH)
- the AMD IOMMU driver now assigns and deassigns complete alias
groups to fix issues with devices using the wrong PCI request-id
- MMU-401 support for the ARM SMMU driver
- multi-master IOMMU group support for the ARM SMMU driver
- various other small fixes all over the place"
* tag 'iommu-updates-v3.18' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (41 commits)
iommu/vt-d: Work around broken RMRR firmware entries
iommu/vt-d: Store bus information in RMRR PCI device path
iommu/vt-d: Only remove domain when device is removed
driver core: Add BUS_NOTIFY_REMOVED_DEVICE event
iommu/amd: Fix devid mapping for ivrs_ioapic override
iommu/irq_remapping: Fix the regression of hpet irq remapping
iommu: Fix bus notifier breakage
iommu/amd: Split init_iommu_group() from iommu_init_device()
iommu: Rework iommu_group_get_for_pci_dev()
iommu: Make of_device_id array const
amd_iommu: do not dereference a NULL pointer address.
iommu/omap: Remove omap_iommu unused owner field
iommu: Remove iommu_domain_has_cap() API function
IB/usnic: Convert to use new iommu_capable() API function
vfio: Convert to use new iommu_capable() API function
kvm: iommu: Convert to use new iommu_capable() API function
iommu/tegra: Convert to iommu_capable() API function
iommu/msm: Convert to iommu_capable() API function
iommu/vt-d: Convert to iommu_capable() API function
iommu/fsl: Convert to iommu_capable() API function
...
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM updates from Paolo Bonzini:
"Fixes and features for 3.18.
Apart from the usual cleanups, here is the summary of new features:
- s390 moves closer towards host large page support
- PowerPC has improved support for debugging (both inside the guest
and via gdbstub) and support for e6500 processors
- ARM/ARM64 support read-only memory (which is necessary to put
firmware in emulated NOR flash)
- x86 has the usual emulator fixes and nested virtualization
improvements (including improved Windows support on Intel and
Jailhouse hypervisor support on AMD), adaptive PLE which helps
overcommitting of huge guests. Also included are some patches that
make KVM more friendly to memory hot-unplug, and fixes for rare
caching bugs.
Two patches have trivial mm/ parts that were acked by Rik and Andrew.
Note: I will soon switch to a subkey for signing purposes"
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (157 commits)
kvm: do not handle APIC access page if in-kernel irqchip is not in use
KVM: s390: count vcpu wakeups in stat.halt_wakeup
KVM: s390/facilities: allow TOD-CLOCK steering facility bit
KVM: PPC: BOOK3S: HV: CMA: Reserve cma region only in hypervisor mode
arm/arm64: KVM: Report correct FSC for unsupported fault types
arm/arm64: KVM: Fix VTTBR_BADDR_MASK and pgd alloc
kvm: Fix kvm_get_page_retry_io __gup retval check
arm/arm64: KVM: Fix set_clear_sgi_pend_reg offset
kvm: x86: Unpin and remove kvm_arch->apic_access_page
kvm: vmx: Implement set_apic_access_page_addr
kvm: x86: Add request bit to reload APIC access page address
kvm: Add arch specific mmu notifier for page invalidation
kvm: Rename make_all_cpus_request() to kvm_make_all_cpus_request() and make it non-static
kvm: Fix page ageing bugs
kvm/x86/mmu: Pass gfn and level to rmapp callback.
x86: kvm: use alternatives for VMCALL vs. VMMCALL if kernel text is read-only
kvm: x86: use macros to compute bank MSRs
KVM: x86: Remove debug assertion of non-PAE reserved bits
kvm: don't take vcpu mutex for obviously invalid vcpu ioctls
kvm: Faults which trigger IO release the mmap_sem
...
The VT-d specification states that an RMRR entry in the DMAR
table needs to specify the full path to the device. This is
also how newer Linux kernels implement it.
Unfortunately, older drivers just match for the target device and not
the full path to the device, so BIOS vendors implemented that behavior
in their BIOSes to make them work with older Linux kernels. But those
RMRR entries break on newer Linux kernels.
Work around this issue by adding a fall-back into the RMRR
matching code to match those old RMRR entries too.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
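The shape of that fallback, purely as an illustration (the struct and the
match_full_path() helper below are hypothetical; the real logic lives in the
DMAR device-scope matching code):

#include <linux/pci.h>

struct rmrr_path_elem {                 /* hypothetical path element */
        u8 device;
        u8 function;
};

static bool match_full_path(const struct rmrr_path_elem *path, int count,
                            struct pci_dev *dev);

static bool rmrr_matches_device(const struct rmrr_path_elem *path, int count,
                                struct pci_dev *dev)
{
        /* Preferred: the entry spells out the full path to the device */
        if (match_full_path(path, count, dev))
                return true;

        /* Fallback for broken firmware: accept entries that only name
         * the target device itself, as older kernels effectively did. */
        return path[count - 1].device == PCI_SLOT(dev->devfn) &&
               path[count - 1].function == PCI_FUNC(dev->devfn);
}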
This makes sure any RMRR mappings stay in place when the
driver is unbound from the device.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Tested-by: Jerry Hoemann <jerry.hoemann@hp.com>
When the device id for an IOAPIC is overridden on the kernel
command line, the iommu driver has to make sure it sets up a
DTE for this device id.
Reported-by: Su Friendy <friendy.su@sony.com.cn>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Commit 71054d8841 ("x86, hpet: Introduce x86_msi_ops.setup_hpet_msi")
introduced x86_msi_ops.setup_hpet_msi to setup hpet MSI irq
when irq remapping enabled. This caused a regression of
hpet MSI irq remapping.
Original code flow before commit 71054d8841:
hpet_setup_msi_irq()
  arch_setup_hpet_msi()
    setup_hpet_msi_remapped()
      remap_ops->setup_hpet_msi()
        alloc_irte()
    msi_compose_msg()
    hpet_msi_write()
    ...
Current code flow after commit 71054d8841:
hpet_setup_msi_irq()
  x86_msi.setup_hpet_msi()
    setup_hpet_msi_remapped()
      intel_setup_hpet_msi()
        alloc_irte()
Currently, we only call alloc_irte() for the hpet MSI, but do not
compose and write its msg...
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
iommu_bus_init() registers a bus notifier on the given bus by using
a statically defined notifier block:
static struct notifier_block iommu_bus_nb = {
        .notifier_call = iommu_bus_notifier,
};
This same notifier block is used for all buses. This causes a problem
for notifiers registered after the iommu core has registered this
callback on multiple buses: a notifier subsequently registered on one
such bus also ends up linked into the notifier chains of all the other
buses that share this iommu notifier block.
This patch fixes this by allocating the notifier_block at runtime.
Some error checking is also added to catch any allocation failure
or notifier registration error.
Signed-off-by: Mark Salter <msalter@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
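Sketched (and simplified from the actual fix), the per-bus allocation with
the added error handling looks like this:

#include <linux/device.h>
#include <linux/iommu.h>
#include <linux/slab.h>

static int iommu_bus_notifier(struct notifier_block *nb,
                              unsigned long action, void *data);
static int add_iommu_group(struct device *dev, void *data);

static int iommu_bus_init(struct bus_type *bus, const struct iommu_ops *ops)
{
        struct notifier_block *nb;
        int err;

        /* One notifier_block per bus, so chains never share list linkage */
        nb = kzalloc(sizeof(*nb), GFP_KERNEL);
        if (!nb)
                return -ENOMEM;

        nb->notifier_call = iommu_bus_notifier;

        err = bus_register_notifier(bus, nb);
        if (err) {
                kfree(nb);
                return err;
        }

        err = bus_for_each_dev(bus, NULL, NULL, add_iommu_group);
        if (err) {
                bus_unregister_notifier(bus, nb);
                kfree(nb);
                return err;
        }

        return 0;
}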
For a PCI device, aliases from the IVRS table won't be populated
into dma_alias_devfn until after iommu_init_device() is called on
each device. We therefore split out init_iommu_group() so that it is
called from a separate loop immediately following.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: stable@vger.kernel.org # 3.17
Signed-off-by: Joerg Roedel <jroedel@suse.de>
It turns out that our assumption that aliases are always to the same
slot isn't true. One particular platform reports an IVRS alias of the
SATA controller (00:11.0) for the legacy IDE controller (00:14.1).
When we hit this, we attempt to use a single IOMMU group for
everything on the same bus, which in this case is the root complex.
We already have multiple groups defined for the root complex by this
point, resulting in multiple WARN_ON hits.
This patch makes these sorts of aliases work again with IOMMU groups
by reworking how we search through the PCI address space to find
existing groups. This should also now handle looped dependencies and
all sorts of crazy inter-dependencies that we'll likely never see.
The recursion used here should never be very deep. It's unlikely to
have individual aliases and only theoretical that we'd ever see a
chain where one alias causes us to search through to yet another
alias. We're also only dealing with PCIe devices on a single bus,
which means we'll typically only see multiple slots in use on the root
complex. Loops are also a theoretical possibility, which I've tested
using fake DMA alias quirks and prevented from causing problems by
using a bitmap of the devfn space that's been visited.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: stable@vger.kernel.org # 3.17
Signed-off-by: Joerg Roedel <jroedel@suse.de>
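Conceptually, the search looks something like the sketch below (names are
illustrative and locking/reference counting are elided; the real rework
lives in iommu_group_get_for_pci_dev()):

#include <linux/bitmap.h>
#include <linux/iommu.h>
#include <linux/pci.h>

/* Hypothetical predicate: does 'from' alias to request-id 'devfn'? */
static bool aliases_to(struct pci_dev *from, u8 devfn)
{
        return (from->dev_flags & PCI_DEV_FLAGS_DMA_ALIAS_DEVFN) &&
               from->dma_alias_devfn == devfn;
}

static struct iommu_group *get_alias_group(struct pci_dev *pdev,
                                           unsigned long *devfns)
{
        struct pci_dev *tmp;
        struct iommu_group *group;

        /* The bitmap of visited devfns terminates looped alias chains */
        if (test_and_set_bit(pdev->devfn & 0xff, devfns))
                return NULL;

        list_for_each_entry(tmp, &pdev->bus->devices, bus_list) {
                if (tmp == pdev || !aliases_to(tmp, pdev->devfn))
                        continue;

                group = iommu_group_get(&tmp->dev);
                if (group)
                        return group;

                /* The alias may itself have an alias: recurse */
                group = get_alias_group(tmp, devfns);
                if (group)
                        return group;
        }

        return NULL;
}

static struct iommu_group *find_alias_group(struct pci_dev *pdev)
{
        DECLARE_BITMAP(devfns, 256);

        bitmap_zero(devfns, 256);
        return get_alias_group(pdev, devfns);
}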
Make of_device_id array const, because all OF functions handle it as const.
Signed-off-by: Kiran Padwal <kiran.padwal@smartplayin.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Under low-memory conditions, alloc_pte() may return a NULL pointer.
iommu_map_page() does not check for this and will panic the system.
Signed-off-by: Maurizio Lombardi <mlombard@redhat.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The owner field is never set. Remove it.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This function will replace the current iommu_domain_has_cap
function and clean up the interface while at it.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
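For reference, the new interface is asked per bus rather than per domain; a
typical call-site conversion (using one of the existing iommu_cap values)
looks roughly like this:

#include <linux/iommu.h>
#include <linux/pci.h>

static bool snooping_possible(void)
{
        /*
         * Old API: iommu_domain_has_cap(domain, IOMMU_CAP_CACHE_COHERENCY)
         * required an already-attached domain. The new API does not:
         */
        return iommu_capable(&pci_bus_type, IOMMU_CAP_CACHE_COHERENCY);
}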
1. We were calling clear_flush_young_notify in unmap_one, but we are
within an mmu notifier invalidate range scope. The spte exists no more
(due to range_start) and the accessed bit info has already been
propagated (due to kvm_pfn_set_accessed). Simply call
clear_flush_young.
2. We clear_flush_young on a primary MMU PMD, but this may be mapped
as a collection of PTEs by the secondary MMU (e.g. during log-dirty).
This required expanding the interface of the clear_flush_young mmu
notifier, so a lot of code has been trivially touched.
3. In the absence of shadow_accessed_mask (e.g. EPT A bit), we emulate
the access bit by blowing the spte. This requires proper synchronizing
with MMU notifier consumers, like every other removal of spte's does.
Signed-off-by: Andres Lagar-Cavilla <andreslc@google.com>
Acked-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We are using the same pfn for every pte we create while constructing the
pmd. Fix this by actually updating the pfn on each iteration of the pmd
construction loop.
It's not clear if we can actually hit this bug right now since iommu_map
splits up the calls to .map based on the page size, so we only ever seem to
iterate this loop once. However, future changes might cause us to hit
this.
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
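The gist of the fix, sketched (not the driver's literal PTE construction
code):

#include <asm/pgtable.h>

static void fill_ptes_sketch(pte_t *pte, unsigned long pfn,
                             unsigned long addr, unsigned long end,
                             pgprot_t prot)
{
        do {
                *pte = pfn_pte(pfn, prot);      /* write one entry */
                pfn++;                          /* the fix: step the pfn too */
        } while (pte++, addr += PAGE_SIZE, addr != end);
}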
MMU-401 is similar to MMU-400, but updated with limited ARMv8 support.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The SMMU driver was relying on a quirk of MMU-500 r2px to identify
the correct architecture version. Since this does not apply to other
implementations, make the architecture version for each supported
implementation explicit.
While we're at it, remove the unnecessary #ifdef since the dependencies
for CONFIG_ARM_SMMU already imply CONFIG_OF.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
In order for nested translation to work correctly, we need to ensure
that the maximum output address size from stage-1 is <= the maximum
supported input address size to stage-2. The latter is currently defined
by VA_BITS, since we make use of the CPU page table functions for
allocating out tables and so the driver currently enforces this
restriction by truncating the stage-1 output size during probe.
In reality, this doesn't make a lot of sense; the guest OS is responsible
for managing the stage-1 page tables, so we actually just need to ensure
that the ID registers of the virtual SMMU interface only advertise the
supported stage-2 input size.
This patch fixes the problem by treating the stage-1 and stage-2 input
address sizes separately.
Reported-by: Tirumalesh Chalamarla <tchalamarla@cavium.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Arbitrary integer division is not available in all ARM CPUs, so GCC
may emit calls to helper functions which are not implemented in the
kernel.
This patch avoids these problems in the SMMU driver by using page shift
instead of page size, so that divisions by the page size (as required
by the vSMMU code) can be expressed as a simple right shift.
Signed-off-by: Will Deacon <will.deacon@arm.com>
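The idea, sketched with illustrative names:

#include <linux/types.h>

/* Deriving an index with the page shift avoids emitting a 64-bit
 * division helper call (e.g. __aeabi_uldivmod) on ARM. */
static unsigned long smmu_page_index(u64 iova, unsigned int pgshift)
{
        return (unsigned long)(iova >> pgshift);        /* was: iova / pgsize */
}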
In preparation for nested translation support, stick a pointer to the
iommu_domain in dev->archdata.iommu. This makes it much easier to grab
hold of the physical group configuration (e.g. cbndx) when dealing with
vSMMU accesses from a guest.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Whilst the driver currently creates one IOMMU group per device, this
will soon change when we start supporting non-transparent PCI bridges
which require all upstream masters to be assigned to the same address
space.
This patch reworks our IOMMU group code so that we can easily support
multi-master groups. The master configuration (streamids and smrs) is
stored as private iommudata on the group, whilst the low-level attach/detach
code is updated to avoid double alloc/free when dealing with multiple
masters sharing the same SMMU configuration. This unifies device
handling, regardless of whether the device sits on the platform or pci
bus.
Signed-off-by: Will Deacon <will.deacon@arm.com>
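The pattern, sketched (struct master_cfg and its fields are illustrative
names): the shared stream configuration is attached to the group as
iommudata so that every master in the group uses the same data.

#include <linux/iommu.h>
#include <linux/slab.h>

struct master_cfg {                     /* illustrative group-wide config */
        int num_streamids;
        u16 streamids[8];
};

static void master_cfg_release(void *data)
{
        kfree(data);                    /* called when the group goes away */
}

static void attach_cfg_to_group(struct iommu_group *group,
                                struct master_cfg *cfg)
{
        iommu_group_set_iommudata(group, cfg, master_cfg_release);
        /* later: retrieve it with iommu_group_get_iommudata(group) */
}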
When debugging and testing code on an SMMU that supports nested
translation, it can be useful to restrict the driver to a particular
stage of translation.
This patch adds a module parameter to the ARM SMMU driver to allow this
by restricting the ability of the probe() code to detect support for
only the specified stage.
Signed-off-by: Will Deacon <will.deacon@arm.com>
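A sketch of such a knob (treat the exact parameter name and values as
illustrative):

#include <linux/module.h>
#include <linux/stat.h>

/* 0 = no restriction, 1 = stage-1 only, 2 = stage-2 only */
static int force_stage;
module_param_named(force_stage, force_stage, int, S_IRUGO);
MODULE_PARM_DESC(force_stage,
        "Force SMMU mappings to be installed at a particular stage of translation (0 = no restriction)");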
A device is tied to an iommu through its archdata field. The archdata
is allocated on the fly for DT-based devices automatically through the
.add_device iommu ops. The current logic incorrectly assigned the name
of the IOMMU user device, instead of the name of the IOMMU device as
required by the attach logic. Fix this issue so that DT-based devices
can attach successfully to an IOMMU domain.
Signed-off-by: Suman Anna <s-anna@ti.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Any device that needs to be attached to an iommu_domain must have
valid archdata containing the necessary iommu information, which
is SoC-specific. Add a check in the omap_iommu_attach_dev to make
sure that the device has valid archdata before accessing
different SoC-specific fields of the archdata. This prevents a
NULL pointer dereference on any misconfigured devices.
Signed-off-by: Suman Anna <s-anna@ti.com>
Acked-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Working out the usable address sizes for the SMMU is surprisingly tricky.
We must take into account not only the limitations of the hardware for VA,
IPA and PA sizes but also any restrictions imposed by the Linux page
table code, particularly when dealing with nested translation (where the
IPA size is limited by the input address size at stage-2).
This patch fixes a few corner cases in our address size handling so that
we correctly deal with 40-bit addresses in TTBCR2 and restrict the IPA
size differently depending on whether or not we have support for nested
translation.
Signed-off-by: Will Deacon <will.deacon@arm.com>
The prefix suggests the number should be printed in hex, so use
the %x specifier to do that.
Found using a regex suggested by Joe Perches.
Signed-off-by: Hans Wennborg <hans@hanshq.net>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The number of S2CR registers is not properly set when stream
matching is not supported. Fix this and add a check that we do not try
to access beyond the number of S2CR registers.
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
[will: added missing NUMSIDB_* definitions]
Signed-off-by: Will Deacon <will.deacon@arm.com>
When we attach a device to a domain, we configure the SMRs (if we have
any) to match the Stream IDs for the corresponding SMMU master and
program the s2crs accordingly. However, on detach we tear down the s2crs
assuming stream-indexing (as opposed to stream-matching) and SMRs
assuming they are present.
This patch fixes the device detach code so that it operates as a
converse of the attach code.
Cc: <stable@vger.kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
If the split page table lock for PTE tables is enabled
(CONFIG_SPLIT_PTLOCK_CPUS <= NR_CPUS), pgtable_page_ctor() leads to a
non-atomic allocation for the ptlock with a spinlock held, resulting in:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 466 at kernel/locking/lockdep.c:2742 lockdep_trace_alloc+0xd8/0xf4()
DEBUG_LOCKS_WARN_ON(irqs_disabled_flags(flags))
Modules linked in:
CPU: 0 PID: 466 Comm: dma0chan0-copy0 Not tainted 3.16.0-3d47efb-clean-pl330-dma_test-ve-a15-a32-slr-mc-on-3+ #55
[<80014748>] (unwind_backtrace) from [<80011640>] (show_stack+0x10/0x14)
[<80011640>] (show_stack) from [<802bf864>] (dump_stack+0x80/0xb4)
[<802bf864>] (dump_stack) from [<8002385c>] (warn_slowpath_common+0x64/0x88)
[<8002385c>] (warn_slowpath_common) from [<80023914>] (warn_slowpath_fmt+0x30/0x40)
[<80023914>] (warn_slowpath_fmt) from [<8005d818>] (lockdep_trace_alloc+0xd8/0xf4)
[<8005d818>] (lockdep_trace_alloc) from [<800d3d78>] (kmem_cache_alloc+0x24/0x144)
[<800d3d78>] (kmem_cache_alloc) from [<800bfae4>] (ptlock_alloc+0x18/0x2c)
[<800bfae4>] (ptlock_alloc) from [<802b1ec0>] (arm_smmu_handle_mapping+0x4c0/0x690)
[<802b1ec0>] (arm_smmu_handle_mapping) from [<802b0cd8>] (iommu_map+0xe0/0x148)
[<802b0cd8>] (iommu_map) from [<80019098>] (arm_coherent_iommu_map_page+0x160/0x278)
[<80019098>] (arm_coherent_iommu_map_page) from [<801f4d78>] (dmatest_func+0x60c/0x1098)
[<801f4d78>] (dmatest_func) from [<8003f8ac>] (kthread+0xcc/0xe8)
[<8003f8ac>] (kthread) from [<8000e868>] (ret_from_fork+0x14/0x2c)
---[ end trace ce0d27e6f434acf8 ]--
The split page table lock is not used in the driver; in fact, the page
tables are guarded by the domain lock, so remove the calls to
pgtable_page_{c,d}tor().
Cc: <stable@vger.kernel.org>
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Stage-1 context banks do not have the SMMU_CBn_TCR[SL0] field since it
is only applicable to stage-2 context banks.
This patch ensures that we don't set the reserved TCR bits for stage-1
translations.
Cc: <stable@vger.kernel.org>
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
request_irq shouldn't be called from atomic context since it might
sleep, but we're calling it with a spinlock held, resulting in:
[ 9.172202] BUG: sleeping function called from invalid context at kernel/mm/slub.c:926
[ 9.182989] in_atomic(): 1, irqs_disabled(): 128, pid: 1, name: swapper/0
[ 9.189762] CPU: 1 PID: 1 Comm: swapper/0 Tainted: G W 3.10.40-gbc1b510b-38437-g55831d3bd9-dirty #97
[ 9.199757] [<c020c448>] (unwind_backtrace+0x0/0x11c) from [<c02097d0>] (show_stack+0x10/0x14)
[ 9.208346] [<c02097d0>] (show_stack+0x10/0x14) from [<c0301d74>] (kmem_cache_alloc_trace+0x3c/0x210)
[ 9.217543] [<c0301d74>] (kmem_cache_alloc_trace+0x3c/0x210) from [<c0276a48>] (request_threaded_irq+0x88/0x11c)
[ 9.227702] [<c0276a48>] (request_threaded_irq+0x88/0x11c) from [<c0931ca4>] (arm_smmu_attach_dev+0x188/0x858)
[ 9.237686] [<c0931ca4>] (arm_smmu_attach_dev+0x188/0x858) from [<c0212cd8>] (arm_iommu_attach_device+0x18/0xd0)
[ 9.247837] [<c0212cd8>] (arm_iommu_attach_device+0x18/0xd0) from [<c093314c>] (arm_smmu_test_probe+0x68/0xd4)
[ 9.257823] [<c093314c>] (arm_smmu_test_probe+0x68/0xd4) from [<c05aadd0>] (driver_probe_device+0x12c/0x330)
[ 9.267629] [<c05aadd0>] (driver_probe_device+0x12c/0x330) from [<c05ab080>] (__driver_attach+0x68/0x8c)
[ 9.277090] [<c05ab080>] (__driver_attach+0x68/0x8c) from [<c05a92d4>] (bus_for_each_dev+0x70/0x84)
[ 9.286118] [<c05a92d4>] (bus_for_each_dev+0x70/0x84) from [<c05aa3b0>] (bus_add_driver+0x100/0x244)
[ 9.295233] [<c05aa3b0>] (bus_add_driver+0x100/0x244) from [<c05ab5d0>] (driver_register+0x9c/0x124)
[ 9.304347] [<c05ab5d0>] (driver_register+0x9c/0x124) from [<c0933088>] (arm_smmu_test_init+0x14/0x38)
[ 9.313635] [<c0933088>] (arm_smmu_test_init+0x14/0x38) from [<c0200618>] (do_one_initcall+0xb8/0x160)
[ 9.322926] [<c0200618>] (do_one_initcall+0xb8/0x160) from [<c1200b7c>] (kernel_init_freeable+0x108/0x1cc)
[ 9.332564] [<c1200b7c>] (kernel_init_freeable+0x108/0x1cc) from [<c0b924b0>] (kernel_init+0xc/0xe4)
[ 9.341675] [<c0b924b0>] (kernel_init+0xc/0xe4) from [<c0205e38>] (ret_from_fork+0x14/0x3c)
Fix this by moving the request_irq out of the critical section. This
should be okay since smmu_domain->smmu is still being protected by the
critical section. Also, we still don't program the Stream Match Register
until after registering our interrupt handler so we shouldn't be missing
any interrupts.
Cc: <stable@vger.kernel.org>
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
[will: code cleanup and fixed request_irq token parameter]
Signed-off-by: Will Deacon <will.deacon@arm.com>
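The shape of the fix, sketched (the domain structure, lock and handler below
are illustrative stand-ins, not the driver's actual definitions):

#include <linux/interrupt.h>
#include <linux/spinlock.h>

struct ctx_domain {
        spinlock_t lock;
        int irq;
        /* ... context bank configuration ... */
};

static irqreturn_t context_fault_sketch(int irq, void *token)
{
        return IRQ_HANDLED;             /* stub for the sketch */
}

static int init_domain_context_sketch(struct ctx_domain *dom)
{
        spin_lock(&dom->lock);
        /* ... pick a context bank and fill in the configuration ... */
        spin_unlock(&dom->lock);

        /*
         * request_irq() may sleep, so call it only after dropping the lock.
         * The stream match registers are programmed later, once the handler
         * is in place, so no fault interrupt can be missed in between.
         */
        return request_irq(dom->irq, context_fault_sketch, IRQF_SHARED,
                           "context-fault", dom);
}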
This reference count is not used anymore, as all devices in
an alias group are now attached and detached together.
Signed-off-by: Joerg Roedel <jroedel@suse.de>