We only need to handle d0i3 entry and exit during suspend/resume if
system_pm is set to IWL_PLAT_PM_MODE_D0I3; otherwise, d0i3 entry
failures would cause suspend to fail.
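Roughly, the resulting gating looks like this (a sketch; the helper
name iwl_mvm_enter_d0i3_sync() is a placeholder, not necessarily the
exact function used):

    /* only attempt d0i3 entry when the platform PM mode asks for it;
     * the helper below is an illustrative placeholder
     */
    if (mvm->trans->system_pm_mode == IWL_PLAT_PM_MODE_D0I3) {
            ret = iwl_mvm_enter_d0i3_sync(mvm);
            if (ret)
                    return ret;
    }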
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=194791
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Now that we have 512 queues, add a wait for a single TX queue
to gen2.
This replaces gen1 wait_tx_queues_empty, which was limited
to 32 queues.
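As a rough sketch of the resulting transport API shape (exact member
names may differ slightly):

    struct iwl_trans_ops {
            /* ... */
            /* gen1: wait on a bitmap of up to 32 queues */
            int (*wait_tx_queues_empty)(struct iwl_trans *trans, u32 txq_bm);
            /* gen2: wait on a single queue out of up to 512 */
            int (*wait_txq_empty)(struct iwl_trans *trans, int queue);
            /* ... */
    };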
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In gen2, page dumping needs to be done in the trans
layer, as it is the one with access to the paging
pointers.
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Rename current wait_tx_queue_empty to wait_tx_queues_empty since
it waits for multiple queues (up to 32).
The next patch will add a wait for a single TX queue, which is
needed for gen2 to scale to 512 queues.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This struct member is already assigned in the previous
call to iwl_trans_alloc(), so assigning the same value
again is superfluous - remove it.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Change queue allocation to be dynamic. On transport init only
the command queue is allocated; other queues are allocated
on demand.
This is due to the huge number of queues we will soon enable (512)
and is also a preparation for the TX Virtual Queue Manager (TVQM)
feature, where the firmware will assign the actual queue number on
demand.
This also includes allocating the byte count table per queue,
rather than as one contiguous chunk of memory.
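A sketch of the per-queue allocation shape (field and size names
here are illustrative, not the exact driver symbols):

    /* allocate one queue, and its byte count table, on demand
     * instead of carving everything out of one big block at init
     */
    txq = kzalloc(sizeof(*txq), GFP_KERNEL);
    if (!txq)
            return -ENOMEM;

    txq->bc_tbl = dma_alloc_coherent(trans->dev, bc_tbl_size,
                                     &txq->bc_tbl_dma, GFP_KERNEL);
    if (!txq->bc_tbl) {
            kfree(txq);
            return -ENOMEM;
    }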
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This function is basically the same as in gen1, except for
cleanups of old device configuration that is never used in the
a000 configuration.
It will also help with refactoring rf_kill later on.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In the a000 transport we will allocate queues dynamically.
Right now queues are allocated as one big chunk of memory
and accessed as such.
The dynamic allocation of the queues will require accessing
the queues as pointers.
In order to keep the pre-a000 TX queue handling simple, keep
allocating and freeing the memory in the same style, but access
the queues in the various functions through individual pointers.
Dynamic allocation for the a000 devices will be in a separate
patch.
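Roughly, the access pattern changes like this (a sketch, not the
literal diff):

    /* before: one contiguous array, indexed directly */
    struct iwl_txq *txq = &trans_pcie->txq[txq_id];

    /* after: an array of pointers; pre-a000 still allocates the
     * memory in one chunk, but every user goes through the pointer,
     * so a000 can later allocate each queue separately
     */
    struct iwl_txq *txq = trans_pcie->txq[txq_id];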
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The new transport will be used only by op modes that support
buffer station offload, hence those ops will never be called.
Clean them up.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The code is basically the same, with cleanups of the old narrow
host command, APMG workarounds, some cosmetic changes, and usage
of TFH functions when accessing TFD queues.
This also enables the cleanup of iwl_pcie_tfd_set_tb(), since it
won't be called anywhere in the a000 data path.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This is just a copy-paste in order to make changes tracking
easier.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In a000 devices the TX handling is different in a few ways:
* Queues are allocated dynamically
* DQA is enabled by default
* Driver shouldn't access TFH registers - the ucode configures it
  all in the SCD_QUEUE_CFG command
Support all this in a new API with the op mode, where the op mode
sends the command and the transport allocates the queue dynamically,
fills in the DMA properties, sends the command to the FW and gets
the queue ID back.
Current implementation only sets the new transport API and fills
the DMA properties.
Future patches will complete the other parts.
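Schematically, the op mode side will end up looking something like
this (struct, field and function names here are a sketch in the FW
API style, not necessarily the final ones):

    /* all names below are approximate */
    struct iwl_tx_queue_cfg_cmd cmd = {
            .flags = cpu_to_le16(TX_QUEUE_CFG_ENABLE_QUEUE),
            .sta_id = sta_id,
            .tid = tid,
    };
    int queue;

    /* the transport allocates the ring, fills the DMA addresses
     * into cmd, sends it to the FW and returns the assigned queue id
     */
    queue = iwl_trans_txq_alloc(mvm->trans, &cmd, SCD_QUEUE_CFG, wdg_timeout);
    if (queue < 0)
            return queue;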
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Context information structure is going to be used in a000
devices for firmware self init.
The self init includes firmware self loading from DRAM by
ROM.
This means the TFH relevant firmware loading can be cleaned up.
The firmware loading includes the paging memory as well, so op
mode can stop initializing the paging and sending the DRAM_BLOCK_CMD.
Firmware is doing RFH, TFH and SCD configuration, while driver
only fills the required configurations and addresses in the
context information structure.
The only remaining access to RFH is the write pointer, which
is updated upon alive interrupt after FW configured the RFH.
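For illustration only, the driver ends up filling a DRAM-resident
structure along these lines (the field names below are placeholders;
the real layout is dictated by the FW interface):

    /* placeholder layout, not the actual FW definition */
    struct iwl_context_info_sketch {
            __le64 fw_sections_addr;   /* where the ROM finds the FW image */
            __le64 paging_addr;        /* paging memory, no DRAM_BLOCK_CMD */
            __le64 rx_ring_addr;       /* RFH configuration */
            __le64 cmd_queue_addr;     /* TFH/SCD configuration */
    };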
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
a000 devices are going to have a lot of flows simplified
and changed: init flow, RX, TX, and more.
This, combined with the fact that the code is already very
complicated due to backward compatibility, calls for a split
that will make it possible to introduce simplified versions
of these functions.
Shared ops are moved to a macro, while functions that will
be updated in the next patches are defined twice for now.
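The split follows a pattern along these lines (a sketch; the real
macro contains many more ops):

    #define IWL_TRANS_COMMON_OPS                    \
            .write8 = iwl_trans_pcie_write8,        \
            .write32 = iwl_trans_pcie_write32,      \
            .read32 = iwl_trans_pcie_read32

    static const struct iwl_trans_ops trans_ops_pcie = {
            IWL_TRANS_COMMON_OPS,
            .start_fw = iwl_trans_pcie_start_fw,
            /* ... current (gen1) versions ... */
    };

    static const struct iwl_trans_ops trans_ops_pcie_gen2 = {
            IWL_TRANS_COMMON_OPS,
            .start_fw = iwl_trans_pcie_start_fw,
            /* ... to be replaced by simplified versions
             * in the next patches ...
             */
    };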
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
We don't need this parameter anymore, since we always pass 0 anyway.
Remove it from the structure and from all the relevant functions.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This register is helpful for debugging D3 issues.
The driver turns all the bits on, and then reads the updated
value there on exit.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
This reverts commit 8aacf4b73fe8 ("iwlwifi: introduce trans API
to get byte count table").
The commit is not needed as a better approach will be taken.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
We don't need to print so much data in the kernel log.
Limit the data to be printed to the queue that actually
got stuck in case of a TFD queue hang, and stop dumping
all the CSR and FH registers. Over the course of time, the
CSR and FH values haven't proven themselves to be really
useful for debugging, and they are now in the firmware dump
anyway.
This comes as a preparation for the addition of more data that
the firmware team requires to be printed.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
When getting RF_KILL and disabling radio, the device gets stopped
and reset. This erases the IVAR table that matches the interrupt
to its cause, and is essential for MSIX proper functionality.
Until now, the table wasn't re-configured after the reset, and
therefore the interrupt that enabled the radio didn't fire on the
right IRQ, and the driver didn't handle it correctly.
To fix this, configure the IVAR table again after resetting the
device.
Signed-off-by: Golan Ben-Ami <golan.ben.ami@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
During the suspend/resume flow some HW blocks are reset. This causes
the IVAR table to be completely erased. This table is where interrupt
causes are bound to specific IRQs. When the table is empty the
interrupt handlers are not called correctly. Fix this by reconfiguring
the IVAR table after resume.
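In the resume path this amounts to something like (a sketch):

    /* the IVAR table was wiped by the reset during suspend/resume,
     * so program it again before interrupts are used
     */
    if (trans_pcie->msix_enabled)
            iwl_pcie_conf_msix_hw(trans_pcie);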
Fixes: 2e5d4a8f61dc ("iwlwifi: pcie: Add new configuration to enable MSIX")
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The MSIX configuration flow includes two different stages:
configuring the HW by writing to the IVAR table and configuring the SW
to reflect the HW configuration.
The HW configuration is needed on each HW reset,
whereas the SW configuration is only needed during the init flow.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The MSIX configuration functions need to be called by other
functions, for example by pcie_d3_resume; move them up in the
file to enable that.
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Our 9000 device supports 64-bit DMA addresses for RX only, and
not for TX.
Setting the DMA mask to 64 bits for the whole device is erroneous -
we can do it only for a000 devices, where the device is capable of
both RX & TX DMA with a 64-bit address space.
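The gist, as a sketch (error handling and the fallback path are
omitted):

    /* only a000 devices can do 64-bit DMA on both RX and TX;
     * everything else stays at 36-bit addressing
     */
    if (cfg->use_tfh)               /* a000 family */
            addr_size = 64;
    else
            addr_size = 36;

    ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_size));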
Fixes: 96a6497bc3ed ("iwlwifi: pcie: add 9000 series multi queue rx DMA support")
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
shift_param is defined and set in iwl_pcie_load_cpu_sections but not
used. Fix this to avoid a -Wunused-but-set-variable warning.
The code using it turned into dead code with commit dcab8ecd5617
("iwlwifi: mvm: support ucode load for family_8000 B0 only") which
added a separate function iwl_pcie_load_given_ucode_8000 (then 8000b)
for IWL_DEVICE_FAMILY_8000. Commit 76f8c0e17edc ("iwlwifi: pcie:
remove dead code") removed the dead code but left shift_param as is.
iwlwifi/pcie/trans.c: In function ‘iwl_pcie_load_cpu_sections’:
iwlwifi/pcie/trans.c:871:6: warning: variable ‘shift_param’ set but not used [-Wunused-but-set-variable]
Fixes: dcab8ecd5617 ("iwlwifi: mvm: support ucode load for family_8000 B0 only")
Fixes: 76f8c0e17edc ("iwlwifi: pcie: remove dead code")
Signed-off-by: Kirtika Ruchandani <kirtika@google.com>
Cc: Sara Sharon <sara.sharon@intel.com>
Cc: Luca Coelho <luciano.coelho@intel.com>
Cc: Liad Kaufman <liad.kaufman@intel.com>
Cc: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
[removed some unnecessary braces]
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The maximum number of firmware sections is now 32 instead of 16 for
a000 devices. Set the appropriate define. Avoid out-of-bounds
accesses in case there are more sections than the maximum set by the
driver.
Make the driver extensible to FW size changes by allocating the
section memory dynamically.
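A sketch of the shape of the change (simplified; the real code
tracks more per-section state):

    /* size the section array from the FW file itself instead of a
     * fixed-size array in the driver
     */
    img->sec = kcalloc(img->num_sec, sizeof(*img->sec), GFP_KERNEL);
    if (!img->sec)
            return -ENOMEM;

    /* ... and refuse to parse more sections than were allocated */
    if (sec_idx >= img->num_sec)
            return -EINVAL;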
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Instead of passing DRV_NAME pass a string that
represents the reason for the interrupt.
Signed-off-by: Sharon Dvir <sharon.dvir@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Due to firmware design considerations, move to wide ID for
all commands.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In MSIX mode the number of IRQs depends on the number of possible
CPUs existing on the host.
This causes a bug in case there are offline cores.
Take into account only the online CPUs instead.
Also save the value in a temporary variable.
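Roughly (a sketch in the driver's style):

    /* base the request on the CPUs that are actually online, and
     * keep the result in a local instead of recomputing it
     */
    max_irqs = min_t(u32, num_online_cpus() + 2, IWL_MAX_RX_HW_QUEUES);
    for (i = 0; i < max_irqs; i++)
            trans_pcie->msix_entries[i].entry = i;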
Fixes: 2e5d4a8f61dc ("iwlwifi: pcie: Add new configuration to enable MSIX")
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The function is deeply indented. Jump to the MSI section with a
goto when needed to avoid that and make the code more readable.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In order to utilize the host's CPUs in the most efficient way,
we bind each RX interrupt vector to a CPU on the host.
Each RX interrupt is then prioritized to execute only on its
designated CPU rather than on any CPU.
Processor affinity takes advantage of the fact that remnants of
a process that ran on a given processor may remain in that
processor's state (for example, data in the CPU cache) after
another process runs on that CPU. Scheduling the process to
execute on the same processor again can improve performance by
reducing performance-degrading situations such as cache misses.
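A minimal sketch of the binding itself (field names are approximate,
error handling trimmed):

    /* the hint mask must outlive this call, so it lives in the
     * transport rather than on the stack
     */
    cpumask_set_cpu(cpu, &trans_pcie->affinity_mask[i]);
    ret = irq_set_affinity_hint(trans_pcie->msix_entries[i].vector,
                                &trans_pcie->affinity_mask[i]);
    if (ret)
            IWL_ERR(trans, "Failed to set affinity mask for IRQ %d\n",
                    trans_pcie->msix_entries[i].vector);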
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In case the OS provides fewer interrupts than requested, the
different causes will share the same interrupt vector as follows:
1. One interrupt less: non-RX causes are shared with the FBQ.
2. Two interrupts less: non-RX causes are shared with the FBQ and RSS.
3. More than two interrupts less: simply use fewer RSS queues.
Also make the request depend on the number of online CPUs
instead of possible CPUs.
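Schematically (flag and variable names below are approximate):

    num_irqs = pci_enable_msix_range(pdev, trans_pcie->msix_entries,
                                     2 /* minimum we can live with */,
                                     max_irqs);

    if (num_irqs == max_irqs - 1)           /* 1. share non-RX with FBQ */
            shared_vec_mask = IWL_SHARED_IRQ_NON_RX;
    else if (num_irqs == max_irqs - 2)      /* 2. ...and with RSS too */
            shared_vec_mask = IWL_SHARED_IRQ_NON_RX | IWL_SHARED_IRQ_FIRST_RSS;
    else if (num_irqs < max_irqs - 2)       /* 3. just fewer RSS queues */
            num_rx_queues = num_irqs - 1;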
Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The original intent was to have the general iwl_queue struct shared
between RX and TX queues, but that is not the actual state of things.
Since it is not shared with any struct but iwl_txq, it adds
unnecessary complexity. Merge those structs.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Previous patch introduced the new formats. This patch
allocates the new structures and adjusts code accordingly.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
In future HW the byte count table address will be configured
by the ucode per queue. Add an API to expose the byte count table
to the opmode.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
New hardware supports bigger TFDs and TBs.
Introduce the new formats and adjust defines and code
relying on old format.
Changing the actual TFD allocation is trickier and
deferred to the next patch.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
If device family is 8000 then iwl_pcie_load_cpu_sections()
won't be called at all (iwl_pcie_load_cpu_sections_8000() is
called in that case) so this piece of code never gets called.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Turns out we should access TFH relative addresses.
Also, the FH_UCODE_LOAD_STATUS was replaced by
UREG_UCODE_LOAD_STATUS.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Up till now we accessed SCD configuration only for initial
configuration and for enabling command queue.
For the a000 generation the command queue is open by default
and the firmware configures the rest. No driver SCD accesses
are expected. Make sure this is the case.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Centralize the logging of SCD status. The motivation is
that for a000 devices we will have new SCD HW, but this
code was duplicated anyway, so it is a proper cleanup.
Signed-off-by: Sara Sharon <sara.sharon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Upon firmware load interrupt (FH_TX), the ISR re-enables the
firmware load interrupt only to avoid races with other
flows as described in the commit below. When the firmware
is completely loaded, the thread that is loading the
firmware will enable all the interrupts to make sure that
the driver gets the ALIVE interrupt.
The problem with that is that the thread that is loading
the firmware is actually racing against the ISR and we can
get to the following situation:
    CPU0                                        CPU1
    iwl_pcie_load_given_ucode
        ...
        iwl_pcie_load_firmware_chunk
            wait_for_interrupt
                                                <interrupt>
                                                ISR handles CSR_INT_BIT_FH_TX
                                                ISR wakes up the thread on CPU0
    /* enable all the interrupts
     * to get the ALIVE interrupt
     */
    iwl_enable_interrupts
                                                ISR re-enables CSR_INT_BIT_FH_TX only
    /* start the firmware */
    iwl_write32(trans, CSR_RESET, 0);
BUG! ALIVE interrupt will never arrive since it has been
masked by CPU1.
In order to fix that, change the ISR to first check if
STATUS_INT_ENABLED is set. If so, re-enable all the
interrupts. If STATUS_INT_ENABLED is clear, then we can
check what specific interrupt happened and re-enable only
that specific interrupt (RFKILL or FH_TX).
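In code, the fix amounts to something like this at the tail of the
ISR (a sketch):

    /* only re-enable everything if interrupts were not deliberately
     * masked (e.g. by the FW load path); otherwise re-enable just
     * the specific cause we still need
     */
    if (test_bit(STATUS_INT_ENABLED, &trans->status))
            iwl_enable_interrupts(trans);
    else if (handled & CSR_INT_BIT_FH_TX)
            iwl_enable_fw_load_int(trans);
    else if (handled & CSR_INT_BIT_RF_KILL)
            iwl_enable_rfkill_int(trans);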
All the credit for the analysis goes to Kirtika who did the
actual debugging work.
Cc: <stable@vger.kernel.org> [4.5+]
Fixes: a6bd005fe92 ("iwlwifi: pcie: fix RF-Kill vs. firmware load race")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The PCIe transport needs to store two pointers in each TX SKB, and
currently assumes mac80211's ieee80211_tx_info is present in the CB
to do that.
In order to remove that assumption, have the opmodes pass in the
offset to where the pointers can be stored in the CB and use the
offset in the PCIe code.
To make the disentanglement complete, remove mac80211.h includes
from everywhere in the generic iwlwifi code. This required adding
an include of cfg80211.h in one place.
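The storage itself boils down to something like this (the offset
field name is approximate):

    /* the op mode told us at init time where in skb->cb we may
     * keep our pointer; use that offset instead of assuming
     * mac80211's ieee80211_tx_info layout
     */
    void **cb_ptr = (void **)((u8 *)skb->cb + trans_pcie->dev_cmd_offs);

    *cb_ptr = dev_cmd;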
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
On some of the chipsets MSI & INTA interrupts are disabled by default in
the HW registers, and need to be explicitly enabled to be used.
In case MSI-X isn't used, make sure MSI mode is enabled by setting
the relevant HW register.
Signed-off-by: Ido Yariv <idox.yariv@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
The NIC's CPU gets started after the firmware has been
written to its memory. The first thing it does is to
send an interrupt to let the driver know that it is
running. In order to get that interrupt, the driver needs
to make sure it is not masked. Of course, the interrupt
needs to be enabled in the driver before the CPU starts to
run.
I mistakenly inverted those two steps, leading to races
which prevented the driver from getting the alive interrupt
from the firmware.
Fix that.
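i.e., roughly:

    /* make sure the ALIVE interrupt can reach the driver... */
    iwl_enable_interrupts(trans);

    /* ...and only then let the NIC's CPU start running the FW */
    iwl_write32(trans, CSR_RESET, 0);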
Cc: <stable@vger.kernel.org> [4.5+]
Fixes: a6bd005fe92 ("iwlwifi: pcie: fix RF-Kill vs. firmware load race")
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Support DQA queue sharing when no free queue exists for
allocation to a STA that already exists. This means that
a single queue will serve more than a single TID (although
the RA will be the same for all TIDs served).
We try to choose the lowest AC possible, to ensure the
shared queues have the lowest possible combined AC
requirements. The queue to share is chosen only from the
same RA's DATA queues as follows (in descending priority):
1. An AC_BE queue
2. Same AC queue
3. Highest AC queue that is lower than new AC
4. Any existing AC (there always is at least 1 DATA queue)
If any aggregations existed for any of the TIDs of the
shared queue - they are stopped (the FW is notified), but
no delBA is sent.
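Sketch of the selection order (all helper names below are
hypothetical, shown only to illustrate the priority):

    /* pick an existing DATA queue of the same RA to share,
     * in descending priority; helpers are hypothetical
     */
    queue = iwl_mvm_find_data_queue_by_ac(mvm, sta, IEEE80211_AC_BE);
    if (queue < 0)
            queue = iwl_mvm_find_data_queue_by_ac(mvm, sta, ac);
    if (queue < 0)
            queue = iwl_mvm_find_highest_queue_below_ac(mvm, sta, ac);
    if (queue < 0)
            queue = iwl_mvm_find_any_data_queue(mvm, sta);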
Signed-off-by: Liad Kaufman <liad.kaufman@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>