Commit Graph

149 Commits

David Woodhouse
9051aa0268 intel-iommu: Combine domain_pfn_mapping() and domain_sg_mapping()
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-30 03:53:31 +01:00
David Woodhouse
e1605495c7 intel-iommu: Introduce domain_sg_mapping() to speed up intel_map_sg()
Instead of calling domain_pfn_mapping() repeatedly with single or
small numbers of pages, just pass the sglist in. It can optimise the
number of cache flushes like domain_pfn_mapping() does, and gives a huge
speedup for large scatterlists.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-30 03:51:30 +01:00
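
A minimal user-space model of the batching idea above, assuming made-up stand-ins for the driver's types (sg_ent, flush_range and the hard-coded 4KiB page size are illustrative, not the real API): the scatterlist is walked in a single pass and one cache flush covers the whole run of PTEs.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical stand-ins for the kernel's scatterlist and cache flush. */
    struct sg_ent { uint64_t phys; unsigned int len; };
    static void flush_range(void *addr, size_t size) { (void)addr; (void)size; }

    /* Write PTEs for the whole scatterlist in one pass, flushing once for
     * the full run instead of once per element (or per page). */
    static void sg_mapping_model(uint64_t *pte, const struct sg_ent *sg, int n)
    {
        uint64_t *start = pte;

        for (int i = 0; i < n; i++) {
            unsigned long pages = (sg[i].len + 4095) >> 12;
            uint64_t val = (sg[i].phys & ~0xfffULL) | 3; /* READ|WRITE */

            while (pages--) {
                *pte++ = val;
                val += 4096;
            }
        }
        flush_range(start, (size_t)(pte - start) * sizeof(*pte));
    }
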
David Woodhouse
875764de6f intel-iommu: Simplify __intel_alloc_iova()
There's no need for the separate iommu_alloc_iova() function, and
certainly not for it to be global. Remove the underscores while we're at
it.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:39:53 +01:00
David Woodhouse
6f6a00e40a intel-iommu: Performance improvement for domain_pfn_mapping()
As with dma_pte_clear_range(), don't keep flushing a single PTE at a
time. And also micro-optimise the setting of PTE values rather than
using the helper functions to do all the masking.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:39:45 +01:00
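
A sketch of the micro-optimisation under simplified assumptions (the struct and constants are stand-ins, not the driver's exact definitions): compute the PTE value once, then just advance the pointer and the embedded address for each page, instead of calling the masking helpers per PTE.

    #include <stdint.h>

    #define VTD_PAGE_SIZE 4096ULL
    #define DMA_PTE_READ  (1ULL << 0)
    #define DMA_PTE_WRITE (1ULL << 1)

    struct dma_pte { uint64_t val; };

    static void map_pfn_range_model(struct dma_pte *pte, uint64_t phys,
                                    unsigned long nr_pages)
    {
        /* One mask, one OR -- then just increment for each following page. */
        uint64_t pteval = (phys & ~(VTD_PAGE_SIZE - 1)) |
                          DMA_PTE_READ | DMA_PTE_WRITE;

        while (nr_pages--) {
            (pte++)->val = pteval;
            pteval += VTD_PAGE_SIZE;
        }
    }
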
David Woodhouse
310a5ab93c intel-iommu: Performance improvement for dma_pte_clear_range()
It's a bit silly to repeatedly call domain_flush_cache() for each PTE
individually, as we clear it. Instead, batch them up and flush a whole
range at a time. We might as well refrain from recalculating the PTE
address from scratch each time round the loop too.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:39:17 +01:00
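
The same idea, sketched for the clearing path (illustrative stand-ins again): zero a contiguous run of PTEs by advancing a pointer, then flush the run once at the end.

    #include <stddef.h>
    #include <stdint.h>

    static void flush_range(void *addr, size_t size) { (void)addr; (void)size; }

    static void clear_range_model(uint64_t *pte, unsigned long start_pfn,
                                  unsigned long last_pfn)
    {
        uint64_t *first = pte;

        /* No per-PTE flush, and no recomputing the PTE address per page. */
        while (start_pfn <= last_pfn) {
            *pte++ = 0;
            start_pfn++;
        }
        flush_range(first, (size_t)(pte - first) * sizeof(*pte));
    }
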
David Woodhouse
c5395d5c4a intel-iommu: Clean up iommu_domain_identity_map()
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:39:12 +01:00
David Woodhouse
1a4a45516d intel-iommu: Remove last use of PHYSICAL_PAGE_MASK, for reserving PCI BARs
This is fairly broken anyway -- it doesn't take hotplug into account.
We should probably be checking page_is_ram() instead.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:39:05 +01:00
David Woodhouse
03d6a2461a intel-iommu: Make iommu_flush_iotlb_psi() take pfn as argument
Most of its callers are having to shift for themselves anyway, so we might
as well do it in iommu_flush_iotlb_psi().

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:38:11 +01:00
David Woodhouse
88cb6a7424 intel-iommu: Change aligned_size() to aligned_nrpages()
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:38:04 +01:00
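
A sketch of what a helper with that name plausibly computes (illustrative; the in-tree version may differ): the number of VT-d pages a buffer spans, counting the offset of the start address within its first page.

    #include <stddef.h>
    #include <stdint.h>

    #define VTD_PAGE_SHIFT 12
    #define VTD_PAGE_SIZE  (1ULL << VTD_PAGE_SHIFT)

    static unsigned long aligned_nrpages(uint64_t host_addr, size_t size)
    {
        host_addr &= VTD_PAGE_SIZE - 1;  /* offset into the first page */
        return (unsigned long)((host_addr + size + VTD_PAGE_SIZE - 1)
                               >> VTD_PAGE_SHIFT);
    }

For example, aligned_nrpages(0x1fff, 2) returns 2, since a 2-byte buffer starting one byte before a page boundary straddles two pages.
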
David Woodhouse
b536d24d21 intel-iommu: Clean up intel_map_sg(), remove domain_page_mapping()
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:35:06 +01:00
David Woodhouse
ad05122162 intel-iommu: Use domain_pfn_mapping() in intel_iommu_map_range()
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:35:00 +01:00
David Woodhouse
0ab36de274 intel-iommu: Use domain_pfn_mapping() in __intel_map_single()
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:34:24 +01:00
David Woodhouse
61df744314 intel-iommu: Introduce domain_pfn_mapping()
... and use it in the trivial cases; the other callers want individual
(and bisectable) attention, since I screwed them up the first time...

Make the BUG_ON() happen on too-large virtual address rather than
physical address, too. That's the one we care about.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:33:59 +01:00
David Woodhouse
1c5a46ed49 intel-iommu: Clean up address handling in domain_page_mapping()
No more masking and alignment; just use pfns.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:33:11 +01:00
David Woodhouse
b026fd28ea intel-iommu: Change addr_to_dma_pte() to pfn_to_dma_pte()
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:32:26 +01:00
David Woodhouse
163cc52ccd intel-iommu: Clean up intel_iommu_unmap_range()
Use unaligned address for domain->max_addr. That algorithm isn't ideal
anyway -- we should probably just look at the last iova in the tree.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:31:12 +01:00
David Woodhouse
d794dc9b30 intel-iommu: Make dma_pte_free_pagetable() take pfns as argument
With some cleanup of intel_unmap_page(), intel_unmap_sg() and
vm_domain_exit() to no longer play with 64-bit addresses.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:30:45 +01:00
David Woodhouse
6660c63a79 intel-iommu: Make dma_pte_free_pagetable() use pfns
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:30:35 +01:00
David Woodhouse
595badf5d6 intel-iommu: Make dma_pte_clear_range() take pfns as argument
Noting that this is now an _inclusive_ range.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:28:10 +01:00
David Woodhouse
04b18e65dd intel-iommu: Make dma_pte_clear_range() use pfns
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 13:26:36 +01:00
David Woodhouse
66eae8469e intel-iommu: Don't just mask out too-big physical addresses; BUG() instead
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 12:38:42 +01:00
David Woodhouse
a75f7cf94f intel-iommu: Make dma_pte_clear_one() take pfn not address
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 12:38:40 +01:00
David Woodhouse
90dcfb5eb2 intel-iommu: Change dma_addr_level_pte() to dma_pfn_level_pte()
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 12:38:38 +01:00
David Woodhouse
77dfa56c94 intel-iommu: Change address_level_offset() to pfn_level_offset()
We're shifting the inputs for now, but that'll change...

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 12:38:32 +01:00
David Woodhouse
dd4e831960 intel-iommu: Change dma_set_pte_addr() to dma_set_pte_pfn()
Add some helpers for converting between VT-d and normal system pfns,
since system pages can be larger than VT-d pages.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 12:38:11 +01:00
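
VT-d pages are always 4KiB, while system pages can be larger (64KiB on some ia64 configurations, for example). A hedged sketch of conversion helpers in that spirit -- the names and the example PAGE_SHIFT here are assumptions, not necessarily what the commit adds:

    #include <stdint.h>

    #define VTD_PAGE_SHIFT 12
    #define PAGE_SHIFT     16   /* example: an architecture with 64KiB pages */

    /* One system pfn covers 2^(PAGE_SHIFT - VTD_PAGE_SHIFT) VT-d pfns. */
    static inline uint64_t mm_to_dma_pfn(uint64_t mm_pfn)
    {
        return mm_pfn << (PAGE_SHIFT - VTD_PAGE_SHIFT);
    }

    static inline uint64_t dma_to_mm_pfn(uint64_t dma_pfn)
    {
        return dma_pfn >> (PAGE_SHIFT - VTD_PAGE_SHIFT);
    }
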
David Woodhouse
c7ab48d2ac intel-iommu: Clean up identity mapping code, remove CONFIG_DMAR_GFX_WA
There's no need for the GFX workaround now we have 'iommu=pt' for the
cases where people really care about performance. There's no need to
have a special case for just one type of device.

This also speeds up the iommu=pt path and reduces memory usage by
setting up the si_domain _once_ and then using it for all devices,
rather than giving each device its own private page tables.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 12:37:44 +01:00
David Woodhouse
b213203e47 intel-iommu: Create new iommu_domain_identity_map() function
We'll want to do this to a _domain_ (the si_domain) rather than a PCI device.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 12:37:42 +01:00
Yu Zhao
bf92df30df intel-iommu: Only avoid flushing device IOTLB for domain ID 0 in caching mode
In caching mode, domain ID 0 is reserved for non-present to present
mapping flush. Device IOTLB doesn't need to be flushed in this case.

Previously we were avoiding the flush for domain zero, even if the IOMMU 
wasn't in caching mode and domain zero wasn't special.

Signed-off-by: Yu Zhao <yu.zhao@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-29 12:34:11 +01:00
Chris Wright
7e25a24229 intel-iommu: fix Identity Mapping to be arch independent
Drop the e820 scanning and use existing function for finding valid
RAM regions to add to 1:1 mapping.

Signed-off-by: Chris Wright <chrisw@redhat.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-26 11:26:27 +01:00
Fenghua Yu
2c2e2c389d IOMMU Identity Mapping Support (drivers/pci/intel_iommu.c)
Identity mapping for the IOMMU defines a single domain that maps all PCI
devices 1:1 to all usable memory.

This reduces map/unmap overhead in the DMA APIs and improves IOMMU
performance. On 10Gb network cards, Netperf shows no performance
degradation compared to non-IOMMU performance.

This method may lose some of the DMA remapping benefits, such as
isolation.

The patch sets up identity mapping for all PCI devices to all usable
memory. In the DMA API there is then no overhead for maintaining page
tables, invalidating the IOTLB, flushing caches, etc.

32-bit DMA devices don't use the identity mapping domain, so that they
can still access memory beyond 4GiB (through normal remapping).

With the kernel option iommu=pt, pass-through is tried first. If it
succeeds, the IOMMU operates in pass-through mode. If pass-through is
not supported in hardware, or fails for whatever reason, the IOMMU
falls back to identity mapping.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-06-23 22:07:54 +01:00
Linus Torvalds
687d680985 Merge git://git.infradead.org/~dwmw2/iommu-2.6.31
* git://git.infradead.org/~dwmw2/iommu-2.6.31:
  intel-iommu: Fix one last ia64 build problem in Pass Through Support
  VT-d: support the device IOTLB
  VT-d: cleanup iommu_flush_iotlb_psi and flush_unmaps
  VT-d: add device IOTLB invalidation support
  VT-d: parse ATSR in DMA Remapping Reporting Structure
  PCI: handle Virtual Function ATS enabling
  PCI: support the ATS capability
  intel-iommu: dmar_set_interrupt return error value
  intel-iommu: Tidy up iommu->gcmd handling
  intel-iommu: Fix tiny theoretical race in write-buffer flush.
  intel-iommu: Clean up handling of "caching mode" vs. IOTLB flushing.
  intel-iommu: Clean up handling of "caching mode" vs. context flushing.
  VT-d: fix invalid domain id for KVM context flush
  Fix !CONFIG_DMAR build failure introduced by Intel IOMMU Pass Through Support
  Intel IOMMU Pass Through Support

Fix up trivial conflicts in drivers/pci/{intel-iommu.c,intr_remapping.c}
2009-06-22 21:38:22 -07:00
Ingo Molnar
3d58f48ba0 Merge branch 'linus' into irq/numa
Conflicts:
	arch/mips/sibyte/bcm1480/irq.c
	arch/mips/sibyte/sb1250/irq.c

Merge reason: we gathered a few conflicts plus update to latest upstream fixes.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-06-01 21:06:21 +02:00
Yu Zhao
93a23a7271 VT-d: support the device IOTLB
Enable the device IOTLB (i.e. ATS) for both the bare metal and KVM
environments.

Signed-off-by: Yu Zhao <yu.zhao@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-05-18 14:46:26 +01:00
Yu Zhao
9dd2fe8906 VT-d: cleanup iommu_flush_iotlb_psi and flush_unmaps
Make iommu_flush_iotlb_psi() and flush_unmaps() more readable.

Signed-off-by: Yu Zhao <yu.zhao@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-05-18 14:46:00 +01:00
David Woodhouse
fd18de50b9 intel-iommu: PAE memory corruption fix
PAGE_MASK is 0xFFFFF000 on i386 -- even with PAE.

So it's not sufficient to ensure that you use phys_addr_t or uint64_t
everywhere you handle physical addresses -- you also have to avoid using
the construct 'addr & PAGE_MASK', because that will strip the high 32
bits of the address.

This patch avoids that problem by using PHYSICAL_PAGE_MASK instead of
PAGE_MASK where appropriate. It leaves '& PAGE_MASK' in a few instances
that don't matter -- where it's being used on the virtual bus addresses
we're dishing out, which are 32-bit anyway.

Since PHYSICAL_PAGE_MASK is not present on other architectures, we have
to define it (to PAGE_MASK) if it's not already defined.

Maybe it would be better just to fix PAGE_MASK for i386/PAE?

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-05-11 07:51:01 -07:00
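
A self-contained demonstration of the truncation; the uint32_t cast stands in for i386's 32-bit unsigned long, and the program is illustrative rather than kernel code:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define PAGE_MASK (~(PAGE_SIZE - 1))                    /* 0xFFFFF000 on i386 */
    #define PHYSICAL_PAGE_MASK (~((uint64_t)PAGE_SIZE - 1)) /* 64-bit safe */

    int main(void)
    {
        uint64_t phys = 0x123456789000ULL;  /* >4GiB physical address under PAE */

        /* The 32-bit mask zero-extends, silently stripping the top 32 bits: */
        printf("addr & PAGE_MASK          = %#llx\n",
               (unsigned long long)(phys & (uint32_t)PAGE_MASK));
        printf("addr & PHYSICAL_PAGE_MASK = %#llx\n",
               (unsigned long long)(phys & PHYSICAL_PAGE_MASK));
        return 0;
    }

The first line prints 0x56789000 -- the top 32 bits are gone -- while the second prints the full 0x123456789000.
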
David Woodhouse
c416daa98a intel-iommu: Tidy up iommu->gcmd handling
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-05-10 20:32:37 +01:00
David Woodhouse
462b60f6cc intel-iommu: Fix tiny theoretical race in write-buffer flush.
In iommu_flush_write_buffer() we read iommu->gcmd before taking the
register_lock, and then we mask in the WBF bit and write it to the
register.

There is a tiny chance that something else could have _changed_
iommu->gcmd before we take the lock, but after we read it. So we could
be undoing that change.

This would never actually have happened in practice, since nothing else
changes that register at runtime -- aside from the write-buffer flush,
it's only ever touched at startup for enabling translation, etc.

But it's worth fixing anyway.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-05-10 20:18:18 +01:00
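
A kernel-style sketch of the fixed ordering (approximate, not the verbatim patch): iommu->gcmd is only read once register_lock is held, so a concurrent update can't slip in between the read and the write.

    /* Sketch -- kernel context assumed; not the verbatim patch. */
    static void write_buffer_flush_fixed(struct intel_iommu *iommu)
    {
        unsigned long flags;

        spin_lock_irqsave(&iommu->register_lock, flags);
        /* Read gcmd under the lock, then write it with the WBF bit set. */
        writel(iommu->gcmd | DMA_GCMD_WBF, iommu->reg + DMAR_GCMD_REG);
        /* ... wait for the hardware to complete the flush ... */
        spin_unlock_irqrestore(&iommu->register_lock, flags);
    }
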
David Woodhouse
1f0ef2aa18 intel-iommu: Clean up handling of "caching mode" vs. IOTLB flushing.
As we just did for context cache flushing, clean up the logic around
whether we need to flush the iotlb or just the write-buffer, depending
on caching mode.

Fix the same bug in qi_flush_iotlb() that qi_flush_context() had -- it
isn't supposed to be returning an error; it's supposed to be returning a
flag which triggers a write-buffer flush.

Remove some superfluous conditional write-buffer flushes which could
never have happened because they weren't for non-present-to-present
mapping changes anyway.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-05-10 19:58:49 +01:00
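
The resulting decision, sketched approximately (kernel context assumed; the call shapes may not match the driver exactly): in caching mode the hardware may cache not-present entries, so even a non-present-to-present change needs a real IOTLB flush, while otherwise a write-buffer flush suffices for that case.

    /* Sketch -- signatures are approximate, not the driver's exact code. */
    if (cap_caching_mode(iommu->cap))
        iommu_flush_iotlb_psi(iommu, did, addr, npages);  /* real IOTLB flush */
    else
        iommu_flush_write_buffer(iommu);                  /* WBF is enough */
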
David Woodhouse
4c25a2c1b9 intel-iommu: Clean up handling of "caching mode" vs. context flushing.
It really doesn't make a lot of sense to have some of the logic to
handle caching vs. non-caching mode duplicated in qi_flush_context() and
__iommu_flush_context(), while the return value indicates whether the
caller should take other action which depends on the same thing.

Especially since qi_flush_context() thought it was returning something
entirely different anyway.

This patch makes qi_flush_context() and __iommu_flush_context() both
return void, removes the 'non_present_entry_flush' argument and makes
the only call site which _set_ that argument to 1 do the right thing.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-05-10 19:49:52 +01:00
Yu Zhao
fa3b6dcd52 VT-d: fix invalid domain id for KVM context flush
The domain->id is a sequence number associated with the KVM guest
and should not be used for the context flush. This patch replaces
the domain->id with a proper id value for both bare metal and KVM.

Signed-off-by: Yu Zhao <yu.zhao@intel.com>
Acked-by: Weidong Han <weidong.han@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-05-10 15:55:09 +01:00
Fenghua Yu
aed5d5f4c5 Fix !CONFIG_DMAR build failure introduced by Intel IOMMU Pass Through Support
This updated patch should fix the compile errors and remove the extern
iommu_pass_through declaration from drivers/pci/intel-iommu.c.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-05-01 16:44:47 +01:00
Fenghua Yu
4ed0d3e6c6 Intel IOMMU Pass Through Support
The patch adds the kernel parameter intel_iommu=pt to set up pass-through
mode in the context mapping entry. This disables DMAR in the Linux
kernel, but KVM still runs on VT-d and interrupt remapping still works.

In this mode, the kernel uses swiotlb for DMA API functions but other
VT-d functionality remains enabled for KVM. KVM always uses multi-level
translation page tables in VT-d. By default, pass-through mode is
disabled in the kernel.

This is useful when people don't want to enable VT-d DMAR in the kernel
but still want to use KVM and interrupt remapping, for reasons such as
DMAR performance concerns or debugging.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Acked-by: Weidong Han <weidong@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-04-29 06:54:34 +01:00
Weidong Han
937582382c x86, intr-remap: enable interrupt remapping early
Currently, when x2apic is not enabled, interrupt remapping
will be enabled in init_dmars(), which is too late to remap
ioapic interrupts; that is, ioapic interrupts are really in
compatibility mode, not remappable mode.

This patch always enables interrupt remapping before ioapic
setup, which guarantees that all interrupts will be remapped
when interrupt remapping is enabled. Thus it doesn't need to
set the compatibility interrupt bit.

[ Impact: refactor intr-remap init sequence, enable fuller remap mode ]

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Weidong Han <weidong.han@intel.com>
Acked-by: David Woodhouse <David.Woodhouse@intel.com>
Cc: iommu@lists.linux-foundation.org
Cc: allen.m.kay@intel.com
Cc: fenghua.yu@intel.com
LKML-Reference: <1239957736-6161-4-git-send-email-weidong.han@intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2009-04-19 10:21:43 +02:00
Linus Torvalds
7b11428d37 Merge git://git.infradead.org/iommu-2.6
* git://git.infradead.org/iommu-2.6:
  intel-iommu: Avoid panic() for DRHD at address zero.
  Intel-IOMMU Alignment Issue in dma_pte_clear_range()
2009-04-13 11:35:50 -07:00
Yang Hongyang
284901a90a dma-mapping: replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32)
Replace all uses of the DMA_32BIT_MASK macro with DMA_BIT_MASK(32).

Signed-off-by: Yang Hongyang<yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-07 08:31:11 -07:00
Yang Hongyang
6a35528a83 dma-mapping: replace all DMA_64BIT_MASK macro with DMA_BIT_MASK(64)
Replace all uses of the DMA_64BIT_MASK macro with DMA_BIT_MASK(64).

Signed-off-by: Yang Hongyang<yanghy@cn.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2009-04-07 08:31:10 -07:00
Fenghua Yu
31d3568dfe Intel-IOMMU Alignment Issue in dma_pte_clear_range()
This issue was pointed out by Linus.

In dma_pte_clear_range() in intel-iommu.c

start = PAGE_ALIGN(start);
end &= PAGE_MASK;
npages = (end - start) / VTD_PAGE_SIZE;

In the partial-page case, start can end up bigger than end, and npages
will be negative.

Currently the issue doesn't show up as a real bug, because start and
end have already been aligned to a page boundary by all callers, so the
problem has been hidden. But it is dangerous programming practice.

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-04-06 14:47:00 -07:00
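
A self-contained demonstration of the underflow, using illustrative constants (PAGE_ALIGN rounds up, the mask rounds down):

    #include <stdint.h>
    #include <stdio.h>

    #define VTD_PAGE_SIZE 4096ULL
    #define PAGE_ALIGN(a) (((a) + VTD_PAGE_SIZE - 1) & ~(VTD_PAGE_SIZE - 1))

    int main(void)
    {
        /* A sub-page range inside one page: start rounds up past end. */
        uint64_t start = 0x1100, end = 0x1f00;

        start = PAGE_ALIGN(start);          /* 0x2000 */
        end &= ~(VTD_PAGE_SIZE - 1);        /* 0x1000 */

        /* (end - start) wraps; interpreted as a count, npages goes negative. */
        long long npages = (long long)(end - start) / (long long)VTD_PAGE_SIZE;
        printf("npages = %lld\n", npages);  /* prints -1 */
        return 0;
    }
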
David Woodhouse
4958c5dc7b intel-iommu: Fix oops in device_to_iommu() when devices not found.
It's possible for a device in the drhd->devices[] array to be NULL if
it wasn't found at boot time, which means we have to check for that
case.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-04-06 13:30:01 -07:00
David Woodhouse
276dbf9970 intel-iommu: Handle PCI domains appropriately.
We were comparing {bus,devfn} and assuming that a match meant it was the
same device. It doesn't -- the same {bus,devfn} can exist in
multiple PCI domains. Include domain number in device identification
(and call it 'segment' in most places, because there's already a lot of
references to 'domain' which means something else, and this code is
infected with ACPI thinking already).

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-04-04 10:43:31 +01:00
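
A minimal sketch of segment-aware matching (hypothetical struct; the driver carries these fields differently): two references denote the same device only if segment, bus and devfn all match.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical device identity; {bus, devfn} alone is ambiguous
     * across PCI domains ("segments"). */
    struct dev_ident {
        uint16_t segment;
        uint8_t  bus;
        uint8_t  devfn;
    };

    static bool same_device(struct dev_ident a, struct dev_ident b)
    {
        return a.segment == b.segment && a.bus == b.bus && a.devfn == b.devfn;
    }
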
David Woodhouse
924b6231ed intel-iommu: Fix device-to-iommu mapping for PCI-PCI bridges.
When the DMAR table identifies that a PCI-PCI bridge belongs to a given
IOMMU, that means that the bridge and all devices behind it should be
associated with the IOMMU. Not just the bridge itself.

This fixes the device_to_iommu() function accordingly.

(It's broken if you have the same PCI bus numbers in multiple domains,
but this function was always broken in that way; I'll be dealing with
that later).

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
2009-04-04 10:43:29 +01:00