Add device managed APIs for regmap_add_irq_chip() and
regmap_del_irq_chip() so that the irq chip data they allocate can be
freed automatically by the device management framework.
This helps with the following:
1. Maintaining the sequence of resource allocation and deallocation.
With the unmanaged APIs a driver allocates in this order:
  regmap_add_irq_chip(&d);
  devm_request_threaded_irq(virq);
and on the free path it has to call:
  regmap_del_irq_chip(d);
and then remove the irq registration. In this case the regmap irq chip
is deleted before the irq is freed, which forces the driver to use
normal (non-devm) irq registration instead.
By using the devm APIs the sequence is maintained properly:
  devm_regmap_add_irq_chip(&d);
  devm_request_threaded_irq(virq);
and resource deallocation is done in reverse order by the device
management framework (see the probe sketch after this list).
2. There is no need to delete the regmap_irq_chip in the error path or
in the remove callback, so there is less code on those paths.
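A minimal probe-time sketch of this ordering, assuming a hypothetical
"foo" driver whose regmap, irq number and regmap_irq_chip description
already exist (all foo_* names are illustrative only):
  static int foo_probe(struct platform_device *pdev)
  {
          struct foo_chip *chip = dev_get_platdata(&pdev->dev);
          int ret, virq;

          /* Allocate the regmap irq chip first... */
          ret = devm_regmap_add_irq_chip(&pdev->dev, chip->regmap,
                                         chip->irq, IRQF_ONESHOT, 0,
                                         &foo_irq_chip, &chip->irq_data);
          if (ret)
                  return ret;

          /* ...then request the threaded irq against one of its virqs. */
          virq = regmap_irq_get_virq(chip->irq_data, FOO_IRQ_ALARM);
          ret = devm_request_threaded_irq(&pdev->dev, virq, NULL,
                                          foo_alarm_handler, IRQF_ONESHOT,
                                          "foo-alarm", chip);
          if (ret)
                  return ret;

          /*
           * On remove or probe failure the device framework unwinds in
           * reverse order: the irq is freed before the irq chip is
           * deleted, so no explicit cleanup is needed here.
           */
          return 0;
  }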
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
commit 23b92e4cf5fd ("regmap: remove regmap_write_bits()")
removed regmap_write_bits(), but an MFD driver was still using it.
So commit e30fccd6771d ("regmap: Keep regmap_write_bits()")
brought it back, but in its original open-coded style.
This patch implements regmap_write_bits() on top of
regmap_update_bits_base() instead.
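Assuming the base helper takes trailing (change, async, force)
arguments, the resulting wrapper could look roughly like:
  #define regmap_write_bits(map, reg, mask, val) \
          regmap_update_bits_base(map, reg, mask, val, NULL, false, true)
with the final 'true' being the force-write flag that distinguishes it
from regmap_update_bits().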
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
When nested interrupts are handled with the regmap irq framework,
pending interrupts need to be marked for resend when they are
re-enabled via enable_irq(). Otherwise events may be lost for nested
irqs.
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Tested-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
This patch merges regmap_fields_update_bits() into a macro
by using regmap_fields_update_bits_base().
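The resulting macro could read roughly as follows (the base helper's
trailing (change, async, force) arguments are assumed):
  #define regmap_fields_update_bits(field, id, mask, val) \
          regmap_fields_update_bits_base(field, id, mask, val, NULL, false, false)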
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
This patch merges regmap_fields_write() into a macro
by using regmap_fields_update_bits_base().
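A sketch of the wrapper, assuming a write is an update with an all-ones
mask and the base helper's (change, async, force) tail:
  #define regmap_fields_write(field, id, val) \
          regmap_fields_update_bits_base(field, id, ~0, val, NULL, false, false)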
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
This patch adds a new regmap_fields_update_bits_base() which is
implemented using regmap_update_bits_base().
The current regmap_fields_xxx() functions can then be merged into it as
macros.
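Its prototype could look roughly like regmap_update_bits_base() with
the field and id arguments added (assumed here for illustration):
  int regmap_fields_update_bits_base(struct regmap_field *field,
                                     unsigned int id, unsigned int mask,
                                     unsigned int val, bool *change,
                                     bool async, bool force);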
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
This patch merges regmap_field_update_bits() into a macro
by using regmap_field_update_bits_base().
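The macro could end up roughly as (base helper tail assumed):
  #define regmap_field_update_bits(field, mask, val) \
          regmap_field_update_bits_base(field, mask, val, NULL, false, false)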
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
This patch merges regmap_field_write() into a macro
by using regmap_field_update_bits_base().
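A sketch of the wrapper, writing the whole field via an all-ones mask
(base helper tail assumed):
  #define regmap_field_write(field, val) \
          regmap_field_update_bits_base(field, ~0, val, NULL, false, false)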
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
This patch adds a new regmap_field_update_bits_base() which is
implemented using regmap_update_bits_base().
The current regmap_field_xxx() functions can then be merged into it as
macros.
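Its prototype could look roughly like this (assumed here for
illustration):
  int regmap_field_update_bits_base(struct regmap_field *field,
                                    unsigned int mask, unsigned int val,
                                    bool *change, bool async, bool force);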
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Current regmap has many similar update functions, listed below, but
the differences between them are very small:
  regmap_update_bits()
  regmap_update_bits_async()
  regmap_update_bits_check()
  regmap_update_bits_check_async()
Furthermore, a *force* write option can be added in the future.
This patch merges regmap_update_bits_check_async() into a macro
by using regmap_update_bits_base().
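The resulting macro could read roughly as (assuming the base helper's
(change, async, force) trailing arguments):
  #define regmap_update_bits_check_async(map, reg, mask, val, change) \
          regmap_update_bits_base(map, reg, mask, val, change, true, false)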
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Current regmap has many similar update functions, listed below, but
the differences between them are very small:
  regmap_update_bits()
  regmap_update_bits_async()
  regmap_update_bits_check()
  regmap_update_bits_check_async()
Furthermore, a *force* write option can be added in the future.
This patch merges regmap_update_bits_check() into a macro
by using regmap_update_bits_base().
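A sketch of the wrapper (base helper tail assumed):
  #define regmap_update_bits_check(map, reg, mask, val, change) \
          regmap_update_bits_base(map, reg, mask, val, change, false, false)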
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Current regmap has many similar update functions, listed below, but
the differences between them are very small:
  regmap_update_bits()
  regmap_update_bits_async()
  regmap_update_bits_check()
  regmap_update_bits_check_async()
Furthermore, a *force* write option can be added in the future.
This patch merges regmap_update_bits_async() into a macro
by using regmap_update_bits_base().
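The macro could end up roughly as (base helper tail assumed):
  #define regmap_update_bits_async(map, reg, mask, val) \
          regmap_update_bits_base(map, reg, mask, val, NULL, true, false)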
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Current regmap has many similar update functions, listed below, but
the differences between them are very small:
  regmap_update_bits()
  regmap_update_bits_async()
  regmap_update_bits_check()
  regmap_update_bits_check_async()
Furthermore, a *force* write option can be added in the future.
This patch merges regmap_update_bits() into a macro
by using regmap_update_bits_base().
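A sketch of the wrapper (base helper tail assumed):
  #define regmap_update_bits(map, reg, mask, val) \
          regmap_update_bits_base(map, reg, mask, val, NULL, false, false)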
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Current regmap has many similar update functions, listed below, but
the differences between them are very small:
  regmap_update_bits()
  regmap_update_bits_async()
  regmap_update_bits_check()
  regmap_update_bits_check_async()
Furthermore, a *force* write option can be added in the future.
This patch adds a new regmap_update_bits_base() which merges all of
these features into one function. The functions above can then be
implemented on top of it as macros.
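The combined helper's signature could look roughly like this, with the
trailing arguments selecting the check/async/force behaviour:
  int regmap_update_bits_base(struct regmap *map, unsigned int reg,
                              unsigned int mask, unsigned int val,
                              bool *change, bool async, bool force);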
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Here we introduce regcache_flat_get_index(), which uses the register
stride order and bit shifting to index the flat cache. This saves some
memory for the flat cache, since the array only needs an entry per
stride step rather than per register address. It costs a little access
performance, because a shift is now needed to compute the index into
the cache array, but that is negligible compared with the memory I/O
access itself.
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
This introduces regcache_get_index_by_order() for the regmap cache,
which uses the register stride order and bit shifting to improve
performance.
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The register stride should always be equal to 2^N, and bit shifting is
much faster than multiplication and division. So introduce the stride
order and use bit shifting to convert between a register's offset and
its index, improving performance.
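For example, with stride == 1 << stride_order, the divisions and modulo
operations reduce to shifts and masks (illustrative names, not
necessarily the exact fields added by this patch):
  index  = reg >> map->reg_stride_order;    /* reg / stride   */
  offset = index << map->reg_stride_order;  /* index * stride */
  if (reg & (map->reg_stride - 1))          /* reg % stride   */
          return -EINVAL;                   /* unaligned register */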
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
It is required to dispose of all virtual irqs mapped for the chip's
hwirqs on the given irq domain before that irq domain is removed.
Hence, dispose of all mapped irqs before deleting the irq domain in
regmap_del_irq_chip().
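A sketch of the resulting teardown pattern (using the generic irqdomain
helpers; not necessarily the exact regmap code):
  for (hwirq = 0; hwirq < d->chip->num_irqs; hwirq++) {
          virq = irq_find_mapping(d->domain, hwirq);
          if (virq)
                  irq_dispose_mapping(virq);
  }
  irq_domain_remove(d->domain);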
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Reviewed-by: Javier Martinez Canillas <javier@osg.samsung.com>
Tested-by: Javier Martinez Canillas <javier@osg.samsung.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Commit 29bb45f25f (regmap-mmio: Use native endianness for read/write)
attempted to fix some long-standing bugs in the MMIO implementation for
big endian systems caused by duplicate byte swapping in both regmap and
readl()/writel(). This affected MIPS systems because, when running in
big endian mode, they flip the endianness of all registers in the
system, not just the CPU. MIPS systems had worked around this by
declaring regmap-using IPs as little endian, which is inaccurate;
unfortunately the issue had not been reported.
Sadly the fix made things worse rather than better. By changing the
behaviour to match the documentation it caused behaviour changes for
other IPs, breaking them, and by using the __raw I/O accessors to avoid
the endianness swapping in readl()/writel() it removed some memory
ordering guarantees and could potentially generate unvirtualisable
instructions on some architectures.
Unfortunately sorting out all this mess in any halfway sensible fashion
was far too invasive to go in during an -rc cycle, so instead let's go
back to the old broken behaviour for v4.5; the better fixes are already
queued for v4.6. This does mean that we keep the broken MIPS DTs for
another release, but that seems the least bad way of handling the
situation.
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Mark Brown <broonie@kernel.org>
If we are unable to read the cache defaults for a regmap then fall back
to attempting to read them register by register. This is going to be
painfully slow for large regmaps but might be adequate for smaller
ones.
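The idea, sketched roughly (not the exact regcache code; cache_bypass,
reg_stride and regmap_readable() are the internal regmap facilities
assumed here):
  map->cache_bypass = true;
  for (i = 0; i < map->num_reg_defaults_raw; i++) {
          unsigned int reg = i * map->reg_stride;
          unsigned int val;

          /* Skip registers we are not allowed to read. */
          if (!regmap_readable(map, reg))
                  continue;

          ret = regmap_read(map, reg, &val);
          if (ret)
                  break;
          /* store val as the cache default for reg */
  }
  map->cache_bypass = false;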
Signed-off-by: Mark Brown <broonie@kernel.org>
[maciej: Use cache_bypass around read and skipping of unreadable regs]
Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Signed-off-by: Fabio Estevam <fabio.estevam@nxp.com>
Tested-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Regmaps without raw I/O support can't implement raw I/O operations, so
return an error if someone tries to perform one rather than crashing.
Signed-off-by: Mark Brown <broonie@kernel.org>
Currently regmap-mmio uses the __raw accessors to read and write from
memory. This is not safe, as these interact poorly with spinlocks and
are not guaranteed to generate virtualisable instructions on at least
ARM, where regmap is commonly used. The safer accessor APIs all perform
some byte swapping, which makes them difficult to use with the current
regmap-mmio implementation, which attempts to use the regmap core byte
swapping.
We can fix this by modernising the MMIO implementation to use the
reg_read() and reg_write() operations, which were added after the
original implementation, and pass simple unsigned integers through to
the bus, making use of the formatting provided by the I/O accessors in
a pattern similar to that used by the core. This will be less efficient
for block I/O operations, since we now enable and disable any required
clocks per register, but it is not clear that any users of regmap-mmio
actually do block I/O and there is room to optimise later.
This removes support for big endian I/O on 64 bit registers since no I/O
accessors are provided, no current users were found and support can be
added easily once they are available.
In addition, make the default endianness little endian. This was the
behaviour prior to 29bb45f25f (regmap-mmio: Use native endianness
for read/write) and is the behaviour desired by most existing users;
the users have been audited and those that need native endianness have
been converted to request it explicitly. Previously native endianness
was documented as the default, but due to the byte swapping in the
accessors this was never correctly implemented.
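For illustration, a regmap_bus built around per-register callbacks
looks roughly like this (hypothetical "foo" names and context struct,
not the actual regmap-mmio code):
  struct foo_mmio_ctx {
          void __iomem *regs;
  };

  static int foo_mmio_reg_read(void *context, unsigned int reg,
                               unsigned int *val)
  {
          struct foo_mmio_ctx *ctx = context;

          *val = readl(ctx->regs + reg);
          return 0;
  }

  static int foo_mmio_reg_write(void *context, unsigned int reg,
                                unsigned int val)
  {
          struct foo_mmio_ctx *ctx = context;

          writel(val, ctx->regs + reg);
          return 0;
  }

  static const struct regmap_bus foo_mmio_bus = {
          .reg_read  = foo_mmio_reg_read,
          .reg_write = foo_mmio_reg_write,
  };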
Fixes: 29bb45f25f (regmap-mmio: Use native endianness for read/write)
Reported-by: Johannes Berg <johannes@sipsolutions.net>
Tested-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Mark Brown <broonie@kernel.org>
Currently the binding document says that if no endianness is
configured we use native endian, but this is not in fact true for all
binding types, and we do have some devices that really want native
endianness, such as Broadcom MIPS SoCs where switching the endianness
of the CPU also switches the endianness of external IPs.
Provide an explicit option for this.
Signed-off-by: Mark Brown <broonie@kernel.org>
regcache_hw_init() uses regmap_raw_read() to initialize the cache
when reg_defaults_raw isn't provided.
The last parameter to regmap_raw_read() is the buffer size in bytes;
however, regcache_hw_init() called it with the number of registers to
read instead, which causes problems if the cached registers aren't one
byte wide.
This wasn't triggered by any current in-tree drivers since they either
have one-byte registers or provide reg_defaults_raw explicitly.
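A sketch of the corrected call, assuming the cached register width in
bytes is map->cache_word_size:
  ret = regmap_raw_read(map, 0, tmp_buf,
                        map->num_reg_defaults_raw * map->cache_word_size);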
Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Signed-off-by: Mark Brown <broonie@kernel.org>
Unlike the registers file, we don't have any substantial performance
concerns about rendering the entire file (it involves no device
accesses), so just use seq_printf() to simplify the code.
Signed-off-by: Mark Brown <broonie@kernel.org>
Some devices support configuring the interrupt trigger type, such as
rising/falling edge, especially for GPIOs. The interrupt support of
such devices may be implemented using the generic regmap irq framework.
Add support for configuring the device's interrupt trigger type
registers via the regmap-irq framework. The regmap-irq framework
configures the trigger registers only if the details of the trigger
type registers are provided.
[Fixed use of ternary operator for legibility -- broonie]
Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The stride value should always be equal to 2^n, so we can use bit
operations instead of % to improve performance.
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
If the register defaults are provided by the driver but their number
is mistakenly left unset, just return an error together with a warning
message. This check should be done as early as possible, so there is
no need to verify the register defaults' stride or to run the other
code that follows.
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
If no cache is used by a driver, the register defaults and the raw
register defaults are not needed any more. This patch checks for this
and prints a warning.
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
This new code is unreachable. Presumably there was supposed to be a
case statement there similar to the earlier code.
Fixes: afcc00b91f ('regmap: add 64-bit mode support')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
We should cast these to 64bit so that we don't truncate away the high
bits.
Fixes: afcc00b91f ('regmap: add 64-bit mode support')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Since regmap-mmio now supports 64-bit registers on 64-bit platforms,
the regcache core should support them too.
Signed-off-by: Mark Brown <broonie@kernel.org>
checkpatch will emit a warning like the following for new patches that
touch the code near here:
"WARNING: Missing a blank line after declarations"
This patch suppresses that warning.
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The variable 'u64 *u64' should only be visible on 64-bit platforms.
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Since regmap-mmio now supports 64-bit registers on 64-bit platforms,
the regmap core should support them too.
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Split the minimal stride parsing out into a single function.
Signed-off-by: Xiubo Li <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Replace kmalloc() with the specialized function kmalloc_array() when
the size is a multiplication of the form number * size.
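For example (an illustrative buffer, not a specific call site from the
patch):
  /* open-coded multiplication can overflow silently */
  buf = kmalloc(num_regs * sizeof(*buf), GFP_KERNEL);
  /* kmalloc_array() performs the same allocation with overflow checking */
  buf = kmalloc_array(num_regs, sizeof(*buf), GFP_KERNEL);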
Signed-off-by: lixiubo <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Replace kzalloc() with the specialized function kcalloc() when the
size is a multiplication of the form number * sizeof(...).
Signed-off-by: lixiubo <lixiubo@cmss.chinamobile.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
A binary search is much more efficient than iterating over the rbtree
in ascending order, which is what the current code does.
During initialisation the reg defaults are written to the cache in one
large chunk, and these are always sorted in ascending order, so for
that situation we would ideally have iterated over the rbtree in
descending order.
But at runtime drivers may write into the cache in any random order, so
this patch uses a binary search to give optimal runtime performance;
and at initialisation time, when the reg defaults are written, a binary
search also performs much better than the ascending-order iteration the
current code was doing.
Signed-off-by: Nikesh Oswal <Nikesh.Oswal@wolfsonmicro.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
The regmap API has an endianness setting for formatting reads and writes.
This can be set by the usual DT "little-endian" and "big-endian" properties.
To work properly, the associated regmap_bus needs to read/write in
native endianness.
The "syscon" DT device binding creates an mmio-based regmap_bus which
performs all reads/writes as little-endian. These values are then converted
again by regmap, which means that all of the MIPS BCM boards (which are
big-endian) have been declared as "little-endian" to get regmap to convert
them back to big-endian.
Modify regmap-mmio to use the native-endian functions __raw_read*() and
__raw_write*() instead of the little-endian functions read*() and
write*().
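For illustration (not the regmap-mmio code itself), on a big-endian
CPU the two accessor families behave differently:
  u32 le  = readl(base + off);        /* assumes LE register, byte-swaps on BE CPUs */
  u32 raw = __raw_readl(base + off);  /* native endianness, no byte swap */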
Modify the big-endian MIPS BCM boards to use what will now be the correct
endianness instead of pretending that the devices are little-endian.
Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
Signed-off-by: Mark Brown <broonie@kernel.org>