Conversation

AshishNamdev
Owner

No description provided.

sshah-solarflare and others added 30 commits January 23, 2014 13:40
I will be taking over the work from Ben Hutchings.

Signed-off-by: Shradha Shah <[email protected]>
Acked-by: Ben Hutchings <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
As part of a workaround for a hardware erratum in the SFC9100 family
(SF bug 35388), the TX_DESC_UPD_DWORD register address is also used
for communicating with the event block, and only descriptor pointer
values < 2048 are valid.

If the TX DMA ring size is increased to 4096 descriptors (which the
firmware still allows) then we may write a descriptor pointer
value >= 2048, which has entirely different and undesirable effects!

Limit the TX DMA ring size correctly when this workaround is in
effect.
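
A minimal sketch of the kind of clamp involved, assuming the sfc
driver's EFX_WORKAROUND_35388() and EFX_MAX_DMAQ_SIZE names:

  /* Halve the permitted TX ring size while the erratum workaround is
   * active, keeping every descriptor pointer value below 2048.
   */
  #define EFX_TXQ_MAX_ENT(efx)  (EFX_WORKAROUND_35388(efx) ?     \
                                 EFX_MAX_DMAQ_SIZE / 2 :         \
                                 EFX_MAX_DMAQ_SIZE)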

Fixes: 8127d66 ('sfc: Add support for Solarflare SFC9100 family')
Signed-off-by: Ben Hutchings <[email protected]>
Signed-off-by: Shradha Shah <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
After the option conversion, downdelay and updelay divide a u64, and
on 32-bit builds this causes the following errors:
ERROR: "__udivdi3" [drivers/net/bonding/bonding.ko] undefined!
ERROR: "__umoddi3" [drivers/net/bonding/bonding.ko] undefined!

Fix it by using a normal int instead because newval->value is capped
at INT_MAX by the way the option is defined.
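
For context, a minimal sketch of why the type change is enough, assuming
the bonding option code: on 32-bit, 64-bit division by a runtime divisor
is compiled into libgcc helper calls that the kernel does not provide:

  u64 wide = newval->value;
  int narrow = newval->value;   /* safe: the option is capped at INT_MAX */
  int rem;

  rem = wide % bond->params.miimon;    /* links to __umoddi3 on 32-bit */
  rem = narrow % bond->params.miimon;  /* plain 32-bit division, no libgcc */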

Signed-off-by: Nikolay Aleksandrov <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
o Use boolean type instead of u8.

Signed-off-by: Sucheta Chakraborty <[email protected]>
Signed-off-by: Jitendra Kalsaria <[email protected]>
Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
o Dump each Tx queue's details with all descriptors, queue indices
  and Tx queue stats to improve data collection in situations
  where a Tx timeout occurs.

Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
o Added hardware ops for interrupt enable/disable functions

Signed-off-by: Manish Chopra <[email protected]>
Signed-off-by: Shahed Shaikh <[email protected]>
Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Add support for MSI/MSI-X mode in poll controller routine.

Signed-off-by: Manish Chopra <[email protected]>
Signed-off-by: Shahed Shaikh <[email protected]>
Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
o Refactor configuration of interrupt coalescing parameters for
  all supported adapters.

Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
o Refactored MSI-x vector calculation for all adapters.
  Decoupled logic in the code which was using the same call to
  request MSI-x vectors during the default driver load, as well as
  during the set_channel() operation for TSS/RSS. This refactoring
  simplifies the TSS/RSS code path as well as the probe path
  of the driver load for all adapters.

Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Himanshu Madhani <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Himanshu Madhani says:

====================
qlcnic: Refactoring and enhancements for all adapters.

This patch series contains the following patches:

o Enhanced debug data collection when we are in a Tx-timeout situation.
o Enhanced MSI-x vector calculation for the default load path as well
  as for the TSS/RSS ring change path.
o Refactored interrupt coalescing code for all adapters.
o Refactored interrupt handling as well as cleanup of the poll
  controller code path for all adapters.
o Changed rx_mac_learn type to boolean.

Please apply to net-next.
====================

Signed-off-by: Zoltan Kiss <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
This check is not needed because the same check is done before
fill_slave_info is used in rtnl_link_slave_info_fill().
Also, by removing this check, the kernel will fill in IFLA_INFO_SLAVE_KIND
even for slaves of masters which do not implement fill_slave_info.
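
A minimal sketch (an assumption, not the actual diff) of what the
caller-side test reduces to; rtnl_link_slave_info_fill() itself still
checks ops->fill_slave_info before emitting IFLA_INFO_SLAVE_DATA:

  static bool rtnl_have_link_slave_info(const struct net_device *dev)
  {
          struct net_device *master_dev;

          master_dev = netdev_master_upper_dev_get((struct net_device *)dev);
          /* No fill_slave_info check here: IFLA_INFO_SLAVE_KIND can be
           * filled for any slave whose master has rtnl_link_ops.
           */
          return master_dev && master_dev->rtnl_link_ops;
  }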

Signed-off-by: Jiri Pirko <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
  drivers/staging/comedi/drivers/das6402.c: In function 'intr_handler':
  drivers/staging/comedi/drivers/das6402.c:164:3: error: implicit declaration of function 'outw_p' [-Werror=implicit-function-declaration]
  drivers/staging/speakup/speakup_dtlk.c: In function 'synth_probe':
  drivers/staging/speakup/speakup_dtlk.c:362:2: error: implicit declaration of function 'inw_p' [-Werror=implicit-function-declaration]

Signed-off-by: Geert Uytterhoeven <[email protected]>
Cc: Mikael Starvik <[email protected]>
Cc: Jesper Nilsson <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Sort the exception table at build-time rather than during boot.

Microblaze is the same case as AARCH64; that's why the EM_MICROBLAZE
conditional check was added, to allow cross-compilation on machines which
are not running the latest libc-dev.

Inspired by AARCH64 commit adace89 ("arm64: extable: sort the
exception table at build time").

Signed-off-by: Michal Simek <[email protected]>
Acked-by: David Daney <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Documentation/vm/locking is a blast from the past.  In the entire git
history, it has had precisely three modifications.  Two of those look to
be pure renames, and the third was from 2005.

The doc contains such gems as:

> The page_table_lock is grabbed while holding the
> kernel_lock spinning monitor.

> Page stealers hold kernel_lock to protect against a bunch of
> races.

Or this which talks about mmap_sem:

> 4. The exception to this rule is expand_stack, which just
>    takes the read lock and the page_table_lock, this is ok
>    because it doesn't really modify fields anybody relies on.

expand_stack() doesn't take any locks any more directly, and the
mmap_sem acquisition was long ago moved up in to the page fault code
itself.

It could be argued that we need to rewrite this, but it is dangerous to
leave it as-is.  It will confuse more people than it helps.

Signed-off-by: Dave Hansen <[email protected]>
Cc: Hugh Dickins <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: Wanpeng Li <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
The "compressor" and "enabled" params are currently hidden, this changes
them to read-only, so userspace can tell if zswap is enabled or not and
see what compressor is in use.
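
A minimal sketch of the change, assuming the module_param_named()
declarations in mm/zswap.c; the permission bits go from 0 (hidden) to
0444 (read-only):

  /* was: perms of 0, i.e. not visible under /sys/module/zswap/parameters */
  module_param_named(enabled, zswap_enabled, bool, 0444);
  module_param_named(compressor, zswap_compressor, charp, 0444);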

Signed-off-by: Dan Streetman <[email protected]>
Cc: Vladimir Murzin <[email protected]>
Cc: Bob Liu <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Weijie Yang <[email protected]>
Acked-by: Seth Jennings <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
bad_page() is cool in that it prints out a bunch of data about the page.
But, I can never remember which page flags are good and which are bad,
or whether ->index or ->mapping is required to be NULL.

This patch allows bad/dump_page() callers to specify a string about why
they are dumping the page and adds explanation strings to a number of
places.  It also adds a 'bad_flags' argument to bad_page(), which it
then dumps out separately from the flags which are actually set.

This way, the messages will show specifically why the page was bad and,
if the problem was a page flag combination, *specifically* which flags
it is complaining about.
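
A minimal sketch of the interface as described; the exact signatures are
assumptions based on the text:

  void dump_page(struct page *page, const char *reason);
  static void bad_page(struct page *page, const char *reason,
                       unsigned long bad_flags);

  /* Example call site: say why the page is bad and which flag matters. */
  if (unlikely(page_mapcount(page)))
          bad_page(page, "nonzero mapcount", 0);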

[[email protected]: switch to pr_alert]
Signed-off-by: Dave Hansen <[email protected]>
Reviewed-by: Christoph Lameter <[email protected]>
Cc: Andi Kleen <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Since commit ff6a6da ("mm: accelerate munlock() treatment of THP
pages") munlock skips tail pages of a munlocked THP page.  There is some
attempt to prevent bad consequences of racing with a THP page split, but
code inspection indicates that there are two problems that may lead to a
non-fatal, yet wrong outcome.

First, __split_huge_page_refcount() copies flags including PageMlocked
from the head page to the tail pages.  Clearing PageMlocked by
munlock_vma_page() in the middle of this operation might result in some
tail pages being left with the PageMlocked flag set.  As the head page
still appears to be a THP page until all tail pages are processed,
munlock_vma_page() might think it munlocked the whole THP page and skip
all the former tail pages.  Before ff6a6da, those pages would be
cleared in further iterations of munlock_vma_pages_range(), but NR_MLOCK
would still become undercounted (related to the next point).

Second, NR_MLOCK accounting is based on a call to hpage_nr_pages() after
PageMlocked is cleared.  The accounting might also become
inconsistent due to a race with __split_huge_page_refcount():
- undercount when HUGE_PMD_NR is subtracted, but some tail pages are
  left with PageMlocked set and counted again (only possible before
  ff6a6da)

- overcount when hpage_nr_pages() sees a normal page (split has already
  finished), but the parallel split has meanwhile cleared PageMlocked from
  additional tail pages

This patch prevents both problems by extending the scope of lru_lock in
munlock_vma_page().  This is convenient because:

- __split_huge_page_refcount() takes lru_lock for its whole operation

- munlock_vma_page() typically takes lru_lock anyway for page isolation

As this becomes a second function where page isolation is done with
lru_lock already held, factor this out to a new
__munlock_isolate_lru_page() function and clean up the code around.
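
A minimal sketch of the factored-out helper described above; the caller
must already hold the zone's lru_lock, and the exact body is an
assumption:

  static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
  {
          if (PageLRU(page)) {
                  struct lruvec *lruvec;

                  lruvec = mem_cgroup_page_lruvec(page, page_zone(page));
                  if (getpage)
                          get_page(page);
                  ClearPageLRU(page);
                  del_page_from_lru_list(page, lruvec, page_lru(page));
                  return true;
          }
          return false;
  }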

[[email protected]: avoid a coding-style ugly]
Signed-off-by: Vlastimil Babka <[email protected]>
Cc: Sasha Levin <[email protected]>
Cc: Michel Lespinasse <[email protected]>
Cc: Andrea Arcangeli <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
The use of vmalloc was introduced by 3332794 ("memcgroup: use vmalloc for
mem_cgroup allocation"), because at that time MAX_NUMNODES was used for
defining the per-node array in the mem_cgroup structure, so that the
structure could be huge even if the system had only one NUMA node.

The situation was significantly improved by commit 45cf7eb ("memcg:
reduce the size of struct memcg 244-fold"), which made the size of the
mem_cgroup structure calculated dynamically depending on the real number
of NUMA nodes installed on the system (nr_node_ids), so now there is no
point in using vmalloc here: the structure is allocated rarely and on
most systems its size is about 1K.
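
A minimal sketch of what the allocation reduces to once vmalloc is
dropped (the exact code is an assumption):

  size_t size = sizeof(struct mem_cgroup);

  /* the per-node part now scales with nr_node_ids, not MAX_NUMNODES */
  size += nr_node_ids * sizeof(struct mem_cgroup_per_node *);
  memcg = kzalloc(size, GFP_KERNEL);   /* ~1K on most systems */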

Signed-off-by: Vladimir Davydov <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
stable_page_flags() checks !PageHuge && PageTransCompound && PageLRU to
know whether a specified page is a thp or not.  But sometimes it's not
enough and we fail to detect a thp when the thp is on a pagevec.  This
happens only for a few seconds after LRU list operations, but it makes
it difficult to control applications that depend on this flag.

So this patch adds another check, PageAnon, to detect thps on a pagevec.
It might not give future extensibility for thp pagecache, but it's OK
at least for now.
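
A minimal sketch of the extended test, assuming the stable_page_flags()
context in fs/proc/page.c, where u is the KPF bit mask being assembled:

  if (!PageHuge(page) && PageTransCompound(page)) {
          struct page *head = compound_head(page);

          /* a thp on a pagevec is not PageLRU yet, but it is PageAnon */
          if (PageLRU(head) || PageAnon(head))
                  u |= 1 << KPF_THP;
  }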

Signed-off-by: Naoya Horiguchi <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: KOSAKI Motohiro <[email protected]>
Cc: Wu Fengguang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Most of the VM_BUG_ON assertions are performed on a page.  Usually, when
one of these assertions fails we'll get a BUG_ON with a call stack and
the registers.

Based on recent requests to add a small piece of code that dumps the
page at various VM_BUG_ON sites, I've noticed that the page dump is
quite useful to people debugging issues in mm.

This patch adds a VM_BUG_ON_PAGE(cond, page) which, beyond doing what
VM_BUG_ON() does, also dumps the page before executing the actual
BUG_ON.
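
A minimal sketch of the macro for the CONFIG_DEBUG_VM case; the reason
string passed to dump_page() is an assumption:

  #define VM_BUG_ON_PAGE(cond, page)                                \
          do {                                                      \
                  if (unlikely(cond)) {                             \
                          dump_page(page, "VM_BUG_ON_PAGE");        \
                          BUG();                                    \
                  }                                                 \
          } while (0)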

[[email protected]: fix up includes]
Signed-off-by: Sasha Levin <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Currently kmem_cache_create_memcg() backs off on failure inside
conditionals, without using gotos.  This results in duplicated rollback
code, which makes the function look cumbersome even though on
error we should only free the allocated cache.  Since in the next patch
I am going to add yet another rollback function call on the error path
there, let's employ labels instead of conditionals for undoing any
changes on failure to keep things clean.
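
The label-based unwinding in question is the standard kernel goto idiom;
a minimal sketch, assuming the function and label names used in this
series:

  s = kmem_cache_zalloc(kmem_cache, GFP_KERNEL);
  if (!s)
          goto out_unlock;

  err = memcg_alloc_cache_params(memcg, s, parent_cache);
  if (err)
          goto out_free_cache;

  err = __kmem_cache_create(s, flags);
  if (err)
          goto out_free_cache;

  out_free_cache:
          kmem_cache_free(kmem_cache, s);
  out_unlock:
          mutex_unlock(&slab_mutex);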

Signed-off-by: Vladimir Davydov <[email protected]>
Reviewed-by: Pekka Enberg <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
We do not free the cache's memcg_params if __kmem_cache_create fails.
Fix this.

Plus, rename memcg_register_cache() to memcg_alloc_cache_params(),
because it actually does not register the cache anywhere, but simply
initializes kmem_cache::memcg_params.

[[email protected]: fix build]
Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Currently, we have rather a messy function set relating to per-memcg
kmem cache initialization/destruction.

Per-memcg caches are created in memcg_create_kmem_cache().  This
function calls kmem_cache_create_memcg() to allocate and initialize a
kmem cache and then "registers" the new cache in the
memcg_params::memcg_caches array of the parent cache.

During its work-flow, kmem_cache_create_memcg() executes the following
memcg-related functions:

 - memcg_alloc_cache_params(), to initialize memcg_params of the newly
   created cache;
 - memcg_cache_list_add(), to add the new cache to the memcg_slab_caches
   list.

On the other hand, kmem_cache_destroy() called on a cache destruction
only calls memcg_release_cache(), which does all the work: it cleans the
reference to the cache in its parent's memcg_params::memcg_caches,
removes the cache from the memcg_slab_caches list, and frees
memcg_params.

Such an inconsistency between the destruction and initialization paths
makes the code difficult to read, so let's clean this up a bit.

This patch moves all the code relating to registration of per-memcg
caches (adding to memcg list, setting the pointer to a cache from its
parent) to the newly created memcg_register_cache() and
memcg_unregister_cache() functions, making the initialization and
destruction paths look symmetrical.

Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Each root kmem_cache has pointers to per-memcg caches stored in its
memcg_params::memcg_caches array.  Whenever we want to allocate a slab
for a memcg, we access this array to get per-memcg cache to allocate
from (see memcg_kmem_get_cache()).  The access must be lock-free for
performance reasons, so we should use barriers to assert the kmem_cache
is up-to-date.

First, we should place a write barrier immediately before setting the
pointer to it in the memcg_caches array in order to make sure nobody
will see a partially initialized object.  Second, we should issue a read
barrier before dereferencing the pointer to conform to the write
barrier.

However, currently the barrier usage looks rather strange.  We have a
write barrier *after* setting the pointer and a read barrier *before*
reading the pointer, which is incorrect.  This patch fixes this.
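
A minimal sketch of the corrected pairing, assuming the publisher is
memcg_register_cache() and the reader is cache_from_memcg_idx():

  /* publisher: finish initializing the cache, then force those writes
   * to be visible before the pointer itself is published.
   */
  smp_wmb();
  root->memcg_params->memcg_caches[idx] = new_cachep;

  /* reader: load the pointer first, then order the dependent reads of
   * the cache against that load.
   */
  cachep = root->memcg_params->memcg_caches[idx];
  smp_read_barrier_depends();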

Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
All caches of the same memory cgroup are linked in the memcg_slab_caches
list via kmem_cache::memcg_params::list.  This list is traversed, for
example, when we read memory.kmem.slabinfo.

Since the list actually consists of memcg_cache_params objects, we have
to convert an element of the list to a kmem_cache object using
memcg_params_to_cache(), which obtains the pointer to the cache from the
memcg_params::memcg_caches array of the corresponding root cache.  This
means the pointer to a kmem_cache in its parent's memcg_params must be
initialized before adding the cache to the list, and cleared only after
it has been unlinked.  Currently it is the other way around, which can
result in a NULL pointer dereference while traversing the
memcg_slab_caches list.  This patch restores the correct order.
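
A minimal sketch of the restored ordering (names assumed from the text):

  /* register: publish the pointer before linking into the list ... */
  root->memcg_params->memcg_caches[id] = s;
  list_add(&s->memcg_params->list, &memcg->memcg_slab_caches);

  /* ... unregister: unlink first, clear the pointer last */
  list_del(&s->memcg_params->list);
  root->memcg_params->memcg_caches[id] = NULL;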

Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
We obtain a per-memcg cache from a root kmem_cache by dereferencing an
entry of the root cache's memcg_params::memcg_caches array.  If we find
no cache for a memcg there on allocation, we initiate the memcg cache
creation (see memcg_kmem_get_cache()).  The cache creation proceeds
asynchronously in memcg_create_kmem_cache() in order to avoid lock
clashes, so there can be several threads trying to create the same
kmem_cache concurrently, but only one of them may succeed.  However, due
to a race in the code, it is not always true.  The point is that the
memcg_caches array can be relocated when we activate kmem accounting for
a memcg (see memcg_update_all_caches(), memcg_update_cache_size()).  If
memcg_update_cache_size() and memcg_create_kmem_cache() proceed
concurrently as described below, we can leak a kmem_cache.

Assume two threads schedule creation of the same kmem_cache.  One of
them successfully creates it.  The other one should then fail, but if
memcg_create_kmem_cache() interleaves with memcg_update_cache_size() as
follows, it won't:

  memcg_create_kmem_cache()             memcg_update_cache_size()
  (called w/o mutexes held)             (called with slab_mutex,
                                         set_limit_mutex held)
  -------------------------             -------------------------

  mutex_lock(&memcg_cache_mutex)

                                        s->memcg_params=kzalloc(...)

  new_cachep=cache_from_memcg_idx(cachep,idx)
  // new_cachep==NULL => proceed to creation

                                        s->memcg_params->memcg_caches[i]
                                            =cur_params->memcg_caches[i]

  // kmem_cache_create_memcg takes slab_mutex
  // so we will hang around until
  // memcg_update_cache_size finishes, but
  // nothing will prevent it from succeeding so
  // memcg_caches[idx] will be overwritten in
  // memcg_register_cache!

  new_cachep = kmem_cache_create_memcg(...)
  mutex_unlock(&memcg_cache_mutex)

Let's fix this by moving the check for the existence of the memcg cache
into kmem_cache_create_memcg(), where it runs under the slab_mutex, and
making the function return NULL if the cache already exists.

A similar race is possible when destroying a memcg cache (see
kmem_cache_destroy()).  Since memcg_unregister_cache(), which clears the
pointer in the memcg_caches array, is called w/o protection, we can race
with memcg_update_cache_size() and omit clearing the pointer.  Therefore
memcg_unregister_cache() should be moved before we release the
slab_mutex.
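
A minimal sketch of the relocated existence check, assuming it now runs
with slab_mutex held inside kmem_cache_create_memcg():

  mutex_lock(&slab_mutex);

  /* Another thread may have created the cache while we waited for the
   * mutex; bail out rather than overwrite memcg_caches[idx].
   */
  if (memcg && cache_from_memcg_idx(parent_cache, memcg_cache_id(memcg)))
          goto out_unlock;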

Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
kmem_cache_dup() is only called from memcg_create_kmem_cache().  The
latter, in fact, does nothing besides this, so let's fold
kmem_cache_dup() into memcg_create_kmem_cache().

This patch also makes the memcg_cache_mutex private to
memcg_create_kmem_cache(), because it is not used anywhere else.

Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
There is no point in flooding logs with warnings or especially crashing
the system if we fail to create a cache for a memcg.  In this case we
will be accounting the memcg allocation to the root cgroup until we
succeed in creating its own cache, but it isn't that critical.

Signed-off-by: Vladimir Davydov <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: Christoph Lameter <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
torvalds and others added 8 commits January 25, 2014 13:18
…/git/broonie/regmap

Pull regmap updates from Mark Brown:
 "Nothing terribly exciting with regmap this release, mainly a few small
  extensions to allow more devices to be supported:

   - Allow the bulk I/O APIs to be used with no-bus regmaps
   - Support interrupt controllers with zero ack base
   - Warning and spelling fixes"

* tag 'regmap-v3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
  regmap: fix a couple of typos
  regmap: Allow regmap_bulk_write() to work for "no-bus" regmaps
  regmap: Allow regmap_bulk_read() to work for "no-bus" regmaps
  regmap: irq: Allow using zero value for ack_base
  regmap: Fix 'ret' would return an uninitialized value
…ernel/git/broonie/regulator

Pull regulator updates from Mark Brown:
 "A respin of the merges in the previous pull request with one extra
  fix.

  A quiet release for the regulator API, quite a large number of small
  improvements all over but other than the addition of new drivers for
  the AS3722 and MAX14577 there is nothing of substantial non-local
  impact"

* tag 'regulator-v3.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator: (47 commits)
  regulator: pfuze100-regulator: Improve dev_info() message
  regulator: pfuze100-regulator: Fix some checkpatch complaints
  regulator: twl: Fix checkpatch issue
  regulator: core: Fix checkpatch issue
  regulator: anatop-regulator: Remove unneeded memset()
  regulator: s5m8767: Update LDO index in s5m8767-regulator.txt
  regulator: as3722: set enable time for SD0/1/6
  regulator: as3722: detect SD0 low-voltage mode
  regulator: tps62360: Fix up a pointer-integer size mismatch warning
  regulator: anatop-regulator: Remove unneeded kstrdup()
  regulator: act8865: Fix build error when !OF
  regulator: act8865: register all regulators regardless of how many are used
  regulator: wm831x-dcdc: Remove unneeded 'err' label
  regulator: anatop-regulator: Add MODULE_ALIAS()
  regulator: act8865: fix incorrect devm_kzalloc for act8865
  regulator: act8865: Remove set_suspend_[en|dis]able implementation
  regulator: act8865: Remove unneeded regulator_unregister() calls
  regulator: s2mps11: Clean up redundant code
  regulator: tps65910: Simplify setting enable_mask for regulators
  regulator: act8865: add device tree binding doc
  ...
…git/broonie/spi

Pull spi updates from Mark Brown:
 "A respun version of the merges for the pull request previously sent
  with a few additional fixes.  The last two merges were fixed up by
  hand since the branches have moved on and currently have the prior
  merge in them.

  Quite a busy release for the SPI subsystem, mostly in cleanups big and
  small scattered through the stack rather than anything else:

   - New driver for the Broadcom BC63xx HSSPI controller
   - Fix duplicate device registration for ACPI
   - Conversion of s3c64xx to DMAEngine (this pulls in platform and DMA
     changes upon which the transition depends)
   - Some small optimisations to reduce the amount of time we hold locks
     in the datapath, eliminate some redundant checks and the size of a
     spi_transfer
   - Lots of fixes, cleanups and general enhancements to drivers,
     especially the rspi and Atmel drivers"

* tag 'spi-v3.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi: (112 commits)
  spi: core: Fix transfer failure when master->transfer_one returns positive value
  spi: Correct set_cs() documentation
  spi: Clarify transfer_one() w.r.t. spi_finalize_current_transfer()
  spi: Spelling s/finised/finished/
  spi: sc18is602: Convert to use bits_per_word_mask
  spi: Remove duplicate code to set default bits_per_word setting
  spi/pxa2xx: fix compilation warning when !CONFIG_PM_SLEEP
  spi: clps711x: Add MODULE_ALIAS to support module auto-loading
  spi: rspi: Add missing clk_disable() calls in error and cleanup paths
  spi: rspi: Spelling s/transmition/transmission/
  spi: rspi: Add support for specifying CPHA/CPOL
  spi/pxa2xx: initialize DMA channels to -1 to prevent inadvertent match
  spi: rspi: Add more QSPI register documentation
  spi: rspi: Add more RSPI register documentation
  spi: rspi: Remove dependency on DMAE for SHMOBILE
  spi/s3c64xx: Correct indentation
  spi: sh: Use spi_sh_clear_bit() instead of open-coded
  spi: bitbang: Grammar s/make to make/to make/
  spi: sh-hspi: Spelling s/recive/receive/
  spi: core: Improve tx/rx_nbits check comments
  ...
This patch removes the use of the IRQF_DISABLED flag.

It has been a no-op since 2.6.35 and will be removed one day.

Signed-off-by: Michael Opdenacker <[email protected]>
Signed-off-by: Corey Minyard <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Use USEC_PER_SEC instead of 1000000, making the later bugfix
clearer.

Signed-off-by: Xie XiuQi <[email protected]>
Signed-off-by: Corey Minyard <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Loading the ipmi_si module while the BMC is disconnected, we found the
timeout is longer than 5 secs.  Actually it takes about 3 mins and
20 secs (HZ=250).

Error messages as below:
  Dec 12 19:08:59 linux kernel: IPMI BT: timeout in RD_WAIT [ ] 1 retries left
  Dec 12 19:08:59 linux kernel: BT: write 4 bytes seq=0x01 03 18 00 01
  [...]
  Dec 12 19:12:19 linux kernel: IPMI BT: timeout in RD_WAIT [ ]
  Dec 12 19:12:19 linux kernel: failed 2 retries, sending error response
  Dec 12 19:12:19 linux kernel: IPMI: BT reset (takes 5 secs)
  Dec 12 19:12:19 linux kernel: IPMI BT: flag reset [ ]

Function wait_for_msg_done() uses schedule_timeout_uninterruptible(1) to
sleep for 1 tick, so we should subtract jiffies_to_usecs(1) instead of
100 usecs from the timeout.
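
A minimal sketch of the fix in wait_for_msg_done(); the surrounding code
is an assumption:

  else if (smi_result == SI_SM_CALL_WITH_DELAY) {
          schedule_timeout_uninterruptible(1);   /* sleeps one tick */
          /* subtract what we actually slept, not a hard-coded 100 usecs */
          smi_result = smi_info->handlers->event(smi_info->si_sm,
                                                 jiffies_to_usecs(1));
  }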

Reported-by: Hu Shiyuan <[email protected]>
Signed-off-by: Xie XiuQi <[email protected]>
Signed-off-by: Corey Minyard <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Return proper errors for a lot of IPMI failure cases.  Also call
pci_disable_device when IPMI PCI devices are removed.

Signed-off-by: Corey Minyard <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Merge ipmi fixes from Corey Minyard:
 "Just some collected fixes for 3.14.  Nothing huge"

* emailed patches from Corey Minyard <[email protected]>:
  ipmi: Cleanup error return
  ipmi: fix timeout calculation when bmc is disconnected
  ipmi: use USEC_PER_SEC instead of 1000000 for more meaningful
  ipmi: remove deprecated IRQF_DISABLED
@AshishNamdev AshishNamdev merged commit 8ed87cd into AshishNamdev:master Jan 26, 2014