The partial block length returned by a block-only driver should
not be passed up to the caller since ahash itself deals with the
partial block data.
Set err to zero in ahash_update_finish if it was positive.
Reported-by: T Pratham <t-pratham@ti.com>
Tested-by: T Pratham <t-pratham@ti.com>
Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Restore the partial block buffer in crypto_ahash_import by copying
it. Check whether the partial block buffer exceeds the maximum
size and return -EOVERFLOW if it does.
Zero the partial block buffer in crypto_ahash_import_core.
Reported-by: T Pratham <t-pratham@ti.com>
Tested-by: T Pratham <t-pratham@ti.com>
Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
It's no longer required to use nth_page() when iterating pages within a
single SG entry, so let's drop the nth_page() usage.
Link: https://lkml.kernel.org/r/20250901150359.867252-34-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
|
|
Merge tag 'v6.17-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto update from Herbert Xu:
"API:
- Allow hash drivers without fallbacks (e.g., hardware key)
Algorithms:
- Add hmac hardware key support (phmac) on s390
- Re-enable sha384 in FIPS mode
- Disable sha1 in FIPS mode
- Convert zstd to acomp
Drivers:
- Lower priority of qat skcipher and aead
- Convert aspeed to partial block API
- Add iMX8QXP support in caam
- Add rate limiting support for GEN6 devices in qat
- Enable telemetry for GEN6 devices in qat
- Implement full backlog mode for hisilicon/sec2"
* tag 'v6.17-p1' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (116 commits)
crypto: keembay - Use min() to simplify ocs_create_linked_list_from_sg()
crypto: hisilicon/hpre - fix dma unmap sequence
crypto: qat - make adf_dev_autoreset() static
crypto: ccp - reduce stack usage in ccp_run_aes_gcm_cmd
crypto: qat - refactor ring-related debug functions
crypto: qat - fix seq_file position update in adf_ring_next()
crypto: qat - fix DMA direction for compression on GEN2 devices
crypto: jitter - replace ARRAY_SIZE definition with header include
crypto: engine - remove {prepare,unprepare}_crypt_hardware callbacks
crypto: engine - remove request batching support
crypto: qat - flush misc workqueue during device shutdown
crypto: qat - enable rate limiting feature for GEN6 devices
crypto: qat - add compression slice count for rate limiting
crypto: qat - add get_svc_slice_cnt() in device data structure
crypto: qat - add adf_rl_get_num_svc_aes() in rate limiting
crypto: qat - relocate service related functions
crypto: qat - consolidate service enums
crypto: qat - add decompression service for rate limiting
crypto: qat - validate service in rate limiting sysfs api
crypto: hisilicon/sec2 - implement full backlog mode for sec
...
|
|
Make the hash walk functions
crypto_hash_walk_done()
crypto_hash_walk_first()
crypto_hash_walk_last()
public again.
These functions had been removed from the header file
include/crypto/internal/hash.h with commit 7fa481734016
("crypto: ahash - make hash walk functions private to ahash.c")
as there was no crypto algorithm code using them.
With the upcoming crypto implementation for s390 phmac
these functions will be used again and thus need to be
public within the kernel.
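For illustration, a minimal sketch of the walk pattern these helpers
enable, modelled on how ahash.c itself consumes a request's scatterlist;
process_block() is a hypothetical driver routine:
	#include <crypto/internal/hash.h>

	/* Hypothetical driver routine that consumes one mapped chunk. */
	void process_block(const void *data, unsigned int len, int is_last);

	static int example_hash_data(struct ahash_request *req)
	{
		struct crypto_hash_walk walk;
		int nbytes;

		for (nbytes = crypto_hash_walk_first(req, &walk); nbytes > 0;
		     nbytes = crypto_hash_walk_done(&walk, 0)) {
			/* walk.data points at a mapped chunk of nbytes bytes. */
			process_block(walk.data, nbytes,
				      crypto_hash_walk_last(&walk));
		}

		return nbytes;
	}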
Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
Acked-by: Holger Dengler <dengler@linux.ibm.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Ensure that drivers that have not been converted to the ahash API
do not use the ahash_request_set_virt fallback path as they cannot
use the software fallback.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Invoke the final function directly in the default finup implementation
since crypto_ahash_final is now just a wrapper around finup.
Reported-by: Eric Biggers <ebiggers@kernel.org>
Fixes: 9d7a0ab1c753 ("crypto: ahash - Handle partial blocks in API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some drivers cannot have a fallback, e.g., because the key is held
in hardware. Allow these to be used with ahash by adding the bit
CRYPTO_ALG_NO_FALLBACK.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Harald Freudenberger <freude@linux.ibm.com>
|
|
Add ahash support to hmac so that drivers that can't do hmac in
hardware do not have to implement duplicate copies of hmac.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Make reqsize static for shash algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Provide an option to handle the partial blocks in the ahash API.
Almost every hash algorithm has a block size and is only able
to hash partial blocks on finalisation.
As a first step disable virtual address support for algorithms
with state sizes larger than HASH_MAX_STATESIZE. This is OK as
virtual addresses are currently only used on synchronous fallbacks.
This means ahash_do_req_chain only needs to handle synchronous
fallbacks, removing the complexities of saving the request state.
Also move the saved request state into the ahash_request object
as nesting is no longer possible.
Add a scatterlist to ahash_request to store the partial block.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add export_core and import_core hooks. These are intended to be
used by algorithms which are wrappers around block-only algorithms,
but are not themselves block-only, e.g., hmac.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add crypto_ahash_export_core and crypto_ahash_import_core. For
now they only differ from the normal export/import functions when
going through shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As sync ahash algorithms (currently there are none) are used without
a fallback, ensure that they obey the MAX_SYNC_HASH_REQSIZE rule
just like shash algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As chaining has been removed, all that remains of REQ_CHAIN is
just virtual address support. Rename it before the reintroduction
of batching creates confusion.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Do not copy the exit function in crypto_clone_tfm as it should
only be set after init_tfm or clone_tfm has succeeded.
Move the setting into crypto_clone_ahash and crypto_clone_shash
instead.
Also clone the fb if necessary.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a helper to clone crypto requests and eliminate code duplication.
Use kmemdup in the helper.
Also add an fb field to crypto_tfm.
This also happens to fix the existing implementations which were
buggy.
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504230118.1CxUaUoX-lkp@intel.com/
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202504230004.c7mrY0C6-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Allow any ahash to be used with a stack request, with optional
dynamic allocation when async is needed. The intended usage is:
	HASH_REQUEST_ON_STACK(req, tfm);
	...
	err = crypto_ahash_digest(req);
	/* The request cannot complete synchronously. */
	if (err == -EAGAIN) {
		/* This will not fail. */
		req = HASH_REQUEST_CLONE(req, gfp);
		/* Redo operation. */
		err = crypto_ahash_digest(req);
	}
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If the bit CRYPTO_ALG_DUP_FIRST is set, an algorithm will be
duplicated by kmemdup before registration. This is intended for
hardware-based algorithms that may be unplugged at will.
Do not use this if the algorithm data structure is embedded in a
bigger data structure. Perform the duplication in the driver
instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the common reqsize field and remove reqsize from ahash_alg.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Request chaining requires the user to do too much bookkeeping.
Remove it from ahash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Disable hash request chaining in case a driver that copies an
ahash_request object by hand accidentally triggers chaining.
Reported-by: Manorit Chawdhry <m-chawdhry@ti.com>
Fixes: f2ffe5a9183d ("crypto: hash - Add request chaining API")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Manorit Chawdhry <m-chawdhry@ti.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The synchronous ahash fallback code paths are broken because the
ahash_restore_req assumes there is always a state object. Fix this
by removing the state from ahash_restore_req and localising it to
the asynchronous completion callback.
Also add a missing synchronous finish call in ahash_def_digest_finish.
Fixes: f2ffe5a9183d ("crypto: hash - Add request chaining API")
Fixes: 439963cdc3aa ("crypto: ahash - Add virtual address support")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use nth_page instead of adding n to the page pointer.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The test on PAGE_SIZE - offset in shash_ahash_digest can underflow,
leading to execution of the fast path even if the data cannot be
mapped into a single page.
Fix this by splitting the test into four cases:
1) nbytes > sg->length: More than one SG entry, slow path.
2) !IS_ENABLED(CONFIG_HIGHMEM): fast path.
3) nbytes > (unsigned int)PAGE_SIZE - offset: Two highmem pages, slow path.
4) Highmem fast path.
Fixes: 5f7082ed4f48 ("crypto: hash - Export shash through hash")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add a reqsize field to struct ahash_alg and use it to set the
default reqsize so that algorithms with a static reqsize are
not forced to create an init_tfm function.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds virtual address support to ahash. Virtual addresses
were previously only supported through shash. The user may choose
to use virtual addresses with ahash by calling ahash_request_set_virt
instead of ahash_request_set_crypt.
The API will take care of translating this to an SG list if necessary,
unless the algorithm declares that it supports chaining. Therefore
in order for an ahash algorithm to support chaining, it must also
support virtual addresses directly.
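For illustration, a minimal sketch of a one-shot digest over a linear
buffer, assuming ahash_request_set_virt() takes (req, data, result,
nbytes) analogously to ahash_request_set_crypt() for SG lists:
	#include <crypto/hash.h>

	static int digest_virt(struct crypto_ahash *tfm, const u8 *data,
			       unsigned int len, u8 *out)
	{
		struct ahash_request *req;
		DECLARE_CRYPTO_WAIT(wait);
		int err;

		req = ahash_request_alloc(tfm, GFP_KERNEL);
		if (!req)
			return -ENOMEM;

		ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
					   crypto_req_done, &wait);
		/* Virtual address in place of an SG list. */
		ahash_request_set_virt(req, data, out, len);

		err = crypto_wait_req(crypto_ahash_digest(req), &wait);
		ahash_request_free(req);
		return err;
	}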
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This adds request chaining to the ahash interface. Request chaining
allows multiple requests to be submitted in one shot. An algorithm
can elect to receive chained requests by setting the flag
CRYPTO_ALG_REQ_CHAIN. If this bit is not set, the API will break
up chained requests and submit them one-by-one.
A new err field is added to struct crypto_async_request to record
the return value for each individual request.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As unaligned operations are supported by the underlying algorithm,
ahash_save_req and ahash_restore_req can be greatly simplified to
only preserve the callback and data.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove hard-coded strings by using the str_yes_no() helper function.
Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Due to the removal of the Niagara2 SPU driver, crypto_hash_walk_first(),
crypto_hash_walk_done(), crypto_hash_walk_last(), and struct
crypto_hash_walk are now only used in crypto/ahash.c. Therefore, make
them all private to crypto/ahash.c. I.e. un-export the two functions
that were exported, make the functions static, and move the struct
definition to the .c file. As part of this, move the functions to
earlier in the file to avoid needing to add forward declarations.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove support for the "Crypto usage statistics" feature
(CONFIG_CRYPTO_STATS). This feature does not appear to have ever been
used, and it is harmful because it significantly reduces performance and
is a large maintenance burden.
Covering each of these points in detail:
1. Feature is not being used
Since these generic crypto statistics are only readable using netlink,
it's fairly straightforward to look for programs that use them. I'm
unable to find any evidence that any such programs exist. For example,
Debian Code Search returns no hits except the kernel header and kernel
code itself and translations of the kernel header:
https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1
The patch series that added this feature in 2018
(https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/)
said "The goal is to have an ifconfig for crypto device." This doesn't
appear to have happened.
It's not clear that there is real demand for crypto statistics. Just
because the kernel provides other types of statistics such as I/O and
networking statistics and some people find those useful does not mean
that crypto statistics are useful too.
Further evidence that programs are not using CONFIG_CRYPTO_STATS is that
it was able to be disabled in RHEL and Fedora as a bug fix
(https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947).
Even further evidence comes from the fact that there are and have been
bugs in how the stats work, but they were never reported. For example,
before Linux v6.7 hash stats were double-counted in most cases.
There has also never been any documentation for this feature, so it
might be hard to use even if someone wanted to.
2. CONFIG_CRYPTO_STATS significantly reduces performance
Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of
the crypto API, even if no program ever retrieves the statistics. This
primarily affects systems with a large number of CPUs. For example,
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported
that Lustre client encryption performance improved from 21.7GB/s to
48.2GB/s by disabling CONFIG_CRYPTO_STATS.
It can be argued that this means that CONFIG_CRYPTO_STATS should be
optimized with per-cpu counters similar to many of the networking
counters. But no one has done this in 5+ years. This is consistent
with the fact that the feature appears to be unused, so there seems to
be little interest in improving it as opposed to just disabling it.
It can be argued that because CONFIG_CRYPTO_STATS is off by default,
performance doesn't matter. But Linux distros tend to err on the side
of enabling options. The option is enabled in Ubuntu and Arch Linux,
and until recently was enabled in RHEL and Fedora (see above). So, even
just having the option available is harmful to users.
3. CONFIG_CRYPTO_STATS is a large maintenance burden
There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS,
spread among 32 files. It significantly complicates much of the
implementation of the crypto API. After the initial submission, many
fixes and refactorings have consumed effort of multiple people to keep
this feature "working". We should be spending this effort elsewhere.
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This reverts commit 2beb81fbf0c01a62515a1bcef326168494ee2bd0.
While removing CONFIG_CRYPTO_STATS is a worthy goal, this also
removed unrelated infrastructure such as crypto_comp_alg_common.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove support for the "Crypto usage statistics" feature
(CONFIG_CRYPTO_STATS). This feature does not appear to have ever been
used, and it is harmful because it significantly reduces performance and
is a large maintenance burden.
Covering each of these points in detail:
1. Feature is not being used
Since these generic crypto statistics are only readable using netlink,
it's fairly straightforward to look for programs that use them. I'm
unable to find any evidence that any such programs exist. For example,
Debian Code Search returns no hits except the kernel header and kernel
code itself and translations of the kernel header:
https://codesearch.debian.net/search?q=CRYPTOCFGA_STAT&literal=1&perpkg=1
The patch series that added this feature in 2018
(https://lore.kernel.org/linux-crypto/1537351855-16618-1-git-send-email-clabbe@baylibre.com/)
said "The goal is to have an ifconfig for crypto device." This doesn't
appear to have happened.
It's not clear that there is real demand for crypto statistics. Just
because the kernel provides other types of statistics such as I/O and
networking statistics and some people find those useful does not mean
that crypto statistics are useful too.
Further evidence that programs are not using CONFIG_CRYPTO_STATS is that
it was able to be disabled in RHEL and Fedora as a bug fix
(https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/2947).
Even further evidence comes from the fact that there are and have been
bugs in how the stats work, but they were never reported. For example,
before Linux v6.7 hash stats were double-counted in most cases.
There has also never been any documentation for this feature, so it
might be hard to use even if someone wanted to.
2. CONFIG_CRYPTO_STATS significantly reduces performance
Enabling CONFIG_CRYPTO_STATS significantly reduces the performance of
the crypto API, even if no program ever retrieves the statistics. This
primarily affects systems with a large number of CPUs. For example,
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2039576 reported
that Lustre client encryption performance improved from 21.7GB/s to
48.2GB/s by disabling CONFIG_CRYPTO_STATS.
It can be argued that this means that CONFIG_CRYPTO_STATS should be
optimized with per-cpu counters similar to many of the networking
counters. But no one has done this in 5+ years. This is consistent
with the fact that the feature appears to be unused, so there seems to
be little interest in improving it as opposed to just disabling it.
It can be argued that because CONFIG_CRYPTO_STATS is off by default,
performance doesn't matter. But Linux distros tend to err on the side
of enabling options. The option is enabled in Ubuntu and Arch Linux,
and until recently was enabled in RHEL and Fedora (see above). So, even
just having the option available is harmful to users.
3. CONFIG_CRYPTO_STATS is a large maintenance burden
There are over 1000 lines of code associated with CONFIG_CRYPTO_STATS,
spread among 32 files. It significantly complicates much of the
implementation of the crypto API. After the initial submission, many
fixes and refactorings have consumed effort of multiple people to keep
this feature "working". We should be spending this effort elsewhere.
Cc: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since crypto_hash_alg_has_setkey() is only called from ahash.c itself,
make it a static function.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The cloned child of ahash that uses shash under the hood should use
shash helpers (like crypto_shash_setkey()).
The following panic may be observed on TCP-AO selftests:
> ==================================================================
> BUG: KASAN: wild-memory-access in crypto_mod_get+0x1b/0x60
> Write of size 4 at addr 5d5be0ff5c415e14 by task connect_ipv4/1397
>
> CPU: 0 PID: 1397 Comm: connect_ipv4 Tainted: G W 6.6.0+ #47
> Call Trace:
> <TASK>
> dump_stack_lvl+0x46/0x70
> kasan_report+0xc3/0xf0
> kasan_check_range+0xec/0x190
> crypto_mod_get+0x1b/0x60
> crypto_spawn_alg+0x53/0x140
> crypto_spawn_tfm2+0x13/0x60
> hmac_init_tfm+0x25/0x60
> crypto_ahash_setkey+0x8b/0x100
> tcp_ao_add_cmd+0xe7a/0x1120
> do_tcp_setsockopt+0x5ed/0x12a0
> do_sock_setsockopt+0x82/0x100
> __sys_setsockopt+0xe9/0x160
> __x64_sys_setsockopt+0x60/0x70
> do_syscall_64+0x3c/0xe0
> entry_SYSCALL_64_after_hwframe+0x46/0x4e
> ==================================================================
> general protection fault, probably for non-canonical address 0x5d5be0ff5c415e14: 0000 [#1] PREEMPT SMP KASAN
> CPU: 0 PID: 1397 Comm: connect_ipv4 Tainted: G B W 6.6.0+ #47
> Call Trace:
> <TASK>
> ? die_addr+0x3c/0xa0
> ? exc_general_protection+0x144/0x210
> ? asm_exc_general_protection+0x22/0x30
> ? add_taint+0x26/0x90
> ? crypto_mod_get+0x20/0x60
> ? crypto_mod_get+0x1b/0x60
> ? ahash_def_finup_done1+0x58/0x80
> crypto_spawn_alg+0x53/0x140
> crypto_spawn_tfm2+0x13/0x60
> hmac_init_tfm+0x25/0x60
> crypto_ahash_setkey+0x8b/0x100
> tcp_ao_add_cmd+0xe7a/0x1120
> do_tcp_setsockopt+0x5ed/0x12a0
> do_sock_setsockopt+0x82/0x100
> __sys_setsockopt+0xe9/0x160
> __x64_sys_setsockopt+0x60/0x70
> do_syscall_64+0x3c/0xe0
> entry_SYSCALL_64_after_hwframe+0x46/0x4e
> </TASK>
> RIP: 0010:crypto_mod_get+0x20/0x60
Make sure that the child/clone has using_shash set when the parent
is an shash user.
Fixes: 2f1f34c1bf7b ("crypto: ahash - optimize performance when wrapping shash")
Cc: David Ahern <dsahern@kernel.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dmitry Safonov <0x7f454c46@gmail.com>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Francesco Ruggeri <fruggeri05@gmail.com>
To: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Salam Noureddine <noureddine@arista.com>
Cc: netdev@vger.kernel.org
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Dmitry Safonov <dima@arista.com>
Reviewed-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The "ahash" API provides access to both CPU-based and hardware offload-
based implementations of hash algorithms. Typically the former are
implemented as "shash" algorithms under the hood, while the latter are
implemented as "ahash" algorithms. The "ahash" API provides access to
both. Various kernel subsystems use the ahash API because they want to
support hashing hardware offload without using a separate API for it.
Yet, the common case is that a crypto accelerator is not actually being
used, and ahash is just wrapping a CPU-based shash algorithm.
This patch optimizes the ahash API for that common case by eliminating
the extra indirect call for each ahash operation on top of shash.
It also fixes the double-counting of crypto stats in this scenario
(though CONFIG_CRYPTO_STATS should *not* be enabled by anyone interested
in performance anyway...), and it eliminates redundant checking of
CRYPTO_TFM_NEED_KEY. As a bonus, it also shrinks struct crypto_ahash.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since the previous patch made crypto_shash_type visible to ahash.c,
change checks for '->cra_type != &crypto_ahash_type' to '->cra_type ==
&crypto_shash_type'. This makes more sense and avoids having to
forward-declare crypto_ahash_type. The result is still the same, since
the type is either shash or ahash here.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The functions that are involved in implementing the ahash API on top of
an shash algorithm belong better in ahash.c, not in shash.c where they
currently are. Move them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Improve the file comment for crypto/ahash.c.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
struct ahash_request_priv is unused, so remove it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently, the ahash API checks the alignment of all key and result
buffers against the algorithm's declared alignmask, and for any
unaligned buffers it falls back to manually aligned temporary buffers.
This is virtually useless, however. First, since it does not apply to
the message, its effect is much more limited than e.g. is the case for
the alignmask for "skcipher". Second, the key and result buffers are
given as virtual addresses and cannot (in general) be DMA'ed into, so
drivers end up having to copy to/from them in software anyway. As a
result it's easy to use memcpy() or the unaligned access helpers.
The crypto_hash_walk_*() helper functions do use the alignmask to align
the message. But with one exception those are only used for shash
algorithms being exposed via the ahash API, not for native ahashes, and
aligning the message is not required in this case, especially now that
alignmask support has been removed from shash. The exception is the
n2_core driver, which doesn't set an alignmask.
In any case, no ahash algorithms actually set a nonzero alignmask
anymore. Therefore, remove support for it from ahash. The benefit is
that all the code to handle "misaligned" buffers in the ahash API goes
away, reducing the overhead of the ahash API.
This follows the same change that was made to shash.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the macro CRYPTO_ALG_TYPE_AHASH_MASK out of linux/crypto.h
and into crypto/ahash.c so that it's not visible to users of the
Crypto API.
Also remove the unused CRYPTO_ALG_TYPE_HASH_MASK macro.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move the crypto_ahash_alg helper into include/crypto/internal so
that drivers can use it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As ahash drivers may need to use fallbacks, their state size
is variable. Deal with this by making it an attribute
of crypto_ahash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Checking the config via ifdef incorrectly compiles out the report
functions when CRYPTO_USER is set to =m. Fix it by using IS_ENABLED()
instead.
Fixes: c0f9e01dd266 ("crypto: api - Check CRYPTO_USER instead of NET for report")
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds the helpers crypto_clone_ahash and crypto_clone_shash.
They are the hash-specific counterparts of crypto_clone_tfm.
This allows code paths that cannot otherwise allocate a hash tfm
object to do so. Once a new tfm has been obtained its key could
then be changed without impacting other users.
Note that only algorithms that implement clone_tfm can be cloned.
However, all keyless hashes can be cloned by simply reusing the
tfm object.
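For illustration, a minimal sketch of the intended pattern, assuming the
underlying algorithm implements clone_tfm (or is keyless):
	#include <crypto/hash.h>

	static struct crypto_ahash *rekeyed_copy(struct crypto_ahash *tfm,
						 const u8 *key,
						 unsigned int keylen)
	{
		struct crypto_ahash *clone;
		int err;

		/* Duplicate the tfm without a full algorithm lookup. */
		clone = crypto_clone_ahash(tfm);
		if (IS_ERR(clone))
			return clone;

		/* Re-key the clone without impacting users of tfm. */
		err = crypto_ahash_setkey(clone, key, keylen);
		if (err) {
			crypto_free_ahash(clone);
			return ERR_PTR(err);
		}

		return clone;
	}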
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Simon Horman <simon.horman@corigine.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The report function is currently conditionalised on CONFIG_NET.
As it's only used by CONFIG_CRYPTO_USER, conditionalising on that
instead of CONFIG_NET makes more sense.
This gets rid of a rarely used code-path.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Move all stat code specific to hash into the hash code.
While we're at it, change the stats so that bytes and counts
are always incremented even in case of error. This allows the
reference counting to be removed as we can now increment the
counters prior to the operation.
After the operation we simply increase the error count if necessary.
This is safe as errors can only occur synchronously (or rather,
the existing code already ignored asynchronous errors which are
only visible to the callback function).
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch does the final flag day conversion of all completion
functions which are now all contained in the Crypto API.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Use the crypto_request_complete helper instead of calling the
completion function directly.
This patch also removes the voodoo programming previously used
for unaligned ahash operations and replaces it with a sub-request.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
kmap_atomic() is used to create short-lived mappings of pages that may
not be accessible via the kernel direct map. This is only needed on
32-bit architectures that implement CONFIG_HIGHMEM, but it can be used
on other 64-bit architectures too, where the returned mapping is simply
the kernel direct address of the page.
However, kmap_atomic() does not support migration on CONFIG_HIGHMEM
configurations, due to the use of per-CPU kmap slots, and so it disables
preemption on all architectures, not just the 32-bit ones. This implies
that all scatterwalk based crypto routines essentially execute with
preemption disabled all the time, which is less than ideal.
So let's switch scatterwalk_map/_unmap and the shash/ahash routines to
kmap_local() instead, which serves a similar purpose, but without the
resulting impact on preemption on architectures that have no need for
CONFIG_HIGHMEM.
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "Elliott, Robert (Servers)" <elliott@hpe.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds the type-safe init_tfm/exit_tfm functions to the
ahash interface. This is meant to replace the unsafe cra_init and
cra_exit interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Revert "crypto: hash - Add real ahash walk interface"
This reverts commit 75ecb231ff45b54afa9f4ec9137965c3c00868f4.
The callers of the functions in this commit were removed in commit
ab8085c130ed. Remove these unused calls.
Fixes: ab8085c130ed ("crypto: x86 - remove SHA multibuffer routines and mcryptd")
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
As said by Linus:
A symmetric naming is only helpful if it implies symmetries in use.
Otherwise it's actively misleading.
In "kzalloc()", the z is meaningful and an important part of what the
caller wants.
In "kzfree()", the z is actively detrimental, because maybe in the
future we really _might_ want to use that "memfill(0xdeadbeef)" or
something. The "zero" part of the interface isn't even _relevant_.
The main reason that kzfree() exists is to clear sensitive information
that should not be leaked to other future users of the same memory
objects.
Rename kzfree() to kfree_sensitive() to follow the example of the recently
added kvfree_sensitive() and make the intention of the API more explicit.
In addition, memzero_explicit() is used to clear the memory to make sure
that it won't get optimized away by the compiler.
The renaming is done by using the command sequence:
git grep -w --name-only kzfree |\
xargs sed -i 's/kzfree/kfree_sensitive/'
followed by some editing of the kfree_sensitive() kerneldoc and adding
a kzfree backward compatibility macro in slab.h.
[akpm@linux-foundation.org: fs/crypto/inline_crypt.c needs linux/slab.h]
[akpm@linux-foundation.org: fix fs/crypto/inline_crypt.c some more]
Suggested-by: Joe Perches <joe@perches.com>
Signed-off-by: Waiman Long <longman@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Cc: James Morris <jmorris@namei.org>
Cc: "Serge E. Hallyn" <serge@hallyn.com>
Cc: Joe Perches <joe@perches.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: "Jason A . Donenfeld" <Jason@zx2c4.com>
Link: http://lkml.kernel.org/r/20200616154311.12314-3-longman@redhat.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
All instances need to have a ->free() method, but people could forget to
set it and then not notice if the instance is never unregistered. To
help detect this bug earlier, don't allow an instance without a ->free()
method to be registered, and complain loudly if someone tries to do it.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that all templates provide a ->create() method which creates an
instance, installs a strongly-typed ->free() method directly to it, and
registers it, the older ->alloc() and ->free() methods in
'struct crypto_template' are no longer used. Remove them.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add support to shash and ahash for the new way of freeing instances
(already used for skcipher, aead, and akcipher) where a ->free() method
is installed to the instance struct itself. These methods are more
strongly-typed than crypto_template::free(), which they replace.
This will allow removing support for the old way of freeing instances.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that all the templates that need ahash spawns have been converted to
use crypto_grab_ahash() rather than look up the algorithm directly,
crypto_ahash_type is no longer used outside of ahash.c. Make it static.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Remove lots of helper functions that were previously used for
instantiating crypto templates, but are now unused:
- crypto_get_attr_alg() and similar functions looked up an inner
algorithm directly from a template parameter. These were replaced
with getting the algorithm's name, then calling crypto_grab_*().
- crypto_init_spawn2() and similar functions initialized a spawn, given
an algorithm. Similarly, these were replaced with crypto_grab_*().
- crypto_alloc_instance() and similar functions allocated an instance
with a single spawn, given the inner algorithm. These aren't useful
anymore since crypto_grab_*() need the instance allocated first.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently, ahash spawns are initialized by using ahash_attr_alg() or
crypto_find_alg() to look up the ahash algorithm, then calling
crypto_init_ahash_spawn().
This is different from how skcipher, aead, and akcipher spawns are
initialized (they use crypto_grab_*()), and for no good reason. This
difference introduces unnecessary complexity.
The crypto_grab_*() functions used to have some problems, like not
holding a reference to the algorithm and requiring the caller to
initialize spawn->base.inst. But those problems are fixed now.
So, let's introduce crypto_grab_ahash() so that we can convert all
templates to the same way of initializing their spawns.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some of the algorithm unregistration functions return -ENOENT when asked
to unregister a non-registered algorithm, while others always return 0
or always return void. But no users check the return value, except for
two of the bulk unregistration functions which print a message on error
but still always return 0 to their caller, and crypto_del_alg() which
calls crypto_unregister_instance() which always returns 0.
Since unregistering a non-registered algorithm is always a kernel bug
but there isn't anything callers should do to handle this situation at
runtime, let's simplify things by making all the unregistration
functions return void, and moving the error message into
crypto_unregister_alg() and upgrading it to a WARN().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Based on 1 normalized pattern(s):
this program is free software you can redistribute it and or modify
it under the terms of the gnu general public license as published by
the free software foundation either version 2 of the license or at
your option any later version
extracted by the scancode license scanner the SPDX license identifier
GPL-2.0-or-later
has been chosen to replace the boilerplate/reference in 3029 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
"michael_mic", fail the improved hash tests because they sometimes
produce the wrong digest. The bug is that in the case where a
scatterlist element crosses pages, not all the data is actually hashed
because the scatterlist walk terminates too early. This happens because
the 'nbytes' variable in crypto_hash_walk_done() is assigned the number
of bytes remaining in the page, then later interpreted as the number of
bytes remaining in the scatterlist element. Fix it.
Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some algorithms have a ->setkey() method that is not atomic, in the
sense that setting a key can fail after changes were already made to the
tfm context. In this case, if a key was already set the tfm can end up
in a state that corresponds to neither the old key nor the new key.
It's not feasible to make all ->setkey() methods atomic, especially ones
that have to key multiple sub-tfms. Therefore, make the crypto API set
CRYPTO_TFM_NEED_KEY if ->setkey() fails and the algorithm requires a
key, to prevent the tfm from being used until a new key is set.
Note: we can't set CRYPTO_TFM_NEED_KEY for OPTIONAL_KEY algorithms, so
->setkey() for those must nevertheless be atomic. That's fine for now
since only the crc32 and crc32c algorithms set OPTIONAL_KEY, and it's
not intended that OPTIONAL_KEY be used much.
[Cc stable mainly because when introducing the NEED_KEY flag I changed
AF_ALG to rely on it; and unlike in-kernel crypto API users, AF_ALG
previously didn't have this problem. So these "incompletely keyed"
states became theoretically accessible via AF_ALG -- though, the
opportunities for causing real mischief seem pretty limited.]
Fixes: 9fa68f620041 ("crypto: hash - prevent using keyed hashes without setting key")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
All crypto_stats functions use the struct xxx_request for feeding stats,
but in some cases this structure could already be freed.
To fix this, the needed parameters (len and alg) are now stored
before the request is executed.
Fixes: cac5818c25d0 ("crypto: user - Implement a generic crypto statistics")
Reported-by: syzbot <syzbot+6939a606a5305e9e9799@syzkaller.appspotmail.com>
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
There have been a pretty ridiculous number of issues with initializing
the report structures that are copied to userspace by NETLINK_CRYPTO.
Commit 4473710df1f8 ("crypto: user - Prepare for CRYPTO_MAX_ALG_NAME
expansion") replaced some strncpy()s with strlcpy()s, thereby
introducing information leaks. Later two other people tried to replace
other strncpy()s with strlcpy() too, which would have introduced even
more information leaks:
- https://lore.kernel.org/patchwork/patch/954991/
- https://patchwork.kernel.org/patch/10434351/
Commit cac5818c25d0 ("crypto: user - Implement a generic crypto
statistics") also uses the buggy strlcpy() approach and therefore leaks
uninitialized memory to userspace. A fix was proposed, but it was
originally incomplete.
Seeing as how apparently no one can get this right with the current
approach, change all the reporting functions to:
- Start by memsetting the report structure to 0. This guarantees it's
always initialized, regardless of what happens later.
- Initialize all strings using strscpy(). This is safe after the
memset, ensures null termination of long strings, avoids unnecessary
work, and avoids the -Wstringop-truncation warnings from gcc.
- Use sizeof(var) instead of sizeof(type). This is more robust against
copy+paste errors.
For simplicity, also reuse the -EMSGSIZE return value from nla_put().
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch implements a generic way to get statistics about all crypto
usages.
Signed-off-by: Corentin Labbe <clabbe@baylibre.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
In the quest to remove all stack VLA usage from the kernel[1], this
removes the VLAs in SHASH_DESC_ON_STACK (via crypto_shash_descsize())
by using the maximum allowable size (which is now more clearly captured
in a macro), along with a few other cases. Similar limits are turned into
macros as well.
A review of existing sizes shows that SHA512_DIGEST_SIZE (64) is the
largest digest size and that sizeof(struct sha3_state) (360) is the
largest descriptor size. The corresponding maximums are reduced.
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com
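For illustration, a minimal sketch of the macro in use; the descriptor is
now a fixed maximum size rather than a VLA:
	#include <crypto/hash.h>

	static int stack_digest(struct crypto_shash *tfm, const u8 *data,
				unsigned int len, u8 *out)
	{
		SHASH_DESC_ON_STACK(desc, tfm);
		int err;

		desc->tfm = tfm;
		err = crypto_shash_digest(desc, data, len, out);
		/* Wipe any sensitive state left on the stack. */
		shash_desc_zero(desc);
		return err;
	}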
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When we have an unaligned SG list entry where there is no leftover
aligned data, the hash walk code will incorrectly return zero as if
the entire SG list has been processed.
This patch fixes it by moving onto the next page instead.
Reported-by: Eli Cooper <elicooper@gmx.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Export and import are mandatory for async hash drivers. As the drivers
were rewritten, drop the empty wrappers and correct the init of the
ahash transformation.
Signed-off-by: Kamil Konieczny <k.konieczny@partner.samsung.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Currently, almost none of the keyed hash algorithms check whether a key
has been set before proceeding. Some algorithms are okay with this and
will effectively just use a key of all 0's or some other bogus default.
However, others will severely break, as demonstrated using
"hmac(sha3-512-generic)", the unkeyed use of which causes a kernel crash
via a (potentially exploitable) stack buffer overflow.
A while ago, this problem was solved for AF_ALG by pairing each hash
transform with a 'has_key' bool. However, there are still other places
in the kernel where userspace can specify an arbitrary hash algorithm by
name, and the kernel uses it as unkeyed hash without checking whether it
is really unkeyed. Examples of this include:
- KEYCTL_DH_COMPUTE, via the KDF extension
- dm-verity
- dm-crypt, via the ESSIV support
- dm-integrity, via the "internal hash" mode with no key given
- drbd (Distributed Replicated Block Device)
This bug is especially bad for KEYCTL_DH_COMPUTE as that requires no
privileges to call.
Fix the bug for all users by adding a flag CRYPTO_TFM_NEED_KEY to the
->crt_flags of each hash transform that indicates whether the transform
still needs to be keyed or not. Then, make the hash init, import, and
digest functions return -ENOKEY if the key is still needed.
The new flag also replaces the 'has_key' bool which algif_hash was
previously using, thereby simplifying the algif_hash implementation.
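For illustration, a minimal sketch of what the flag means for callers: an
unkeyed use of a keyed hash such as "hmac(sha256)" now fails cleanly
instead of running with a bogus default key:
	#include <crypto/hash.h>

	static int digest_checked(struct ahash_request *req)
	{
		int err;

		err = crypto_ahash_digest(req);
		/* -ENOKEY: crypto_ahash_setkey() has not been called yet. */
		if (err == -ENOKEY)
			pr_warn("hash tfm still needs a key\n");

		return err;
	}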
Reported-by: syzbot <syzkaller@googlegroups.com>
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Templates that use an shash spawn can use crypto_shash_alg_has_setkey()
to determine whether the underlying algorithm requires a key or not.
But there was no corresponding function for ahash spawns. Add it.
Note that the new function actually has to support both shash and ahash
algorithms, since the ahash API can be used with either.
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that -EBUSY return code only indicates backlog queueing
we can safely remove the now redundant check for the
CRYPTO_TFM_REQ_MAY_BACKLOG flag when -EBUSY is returned.
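For illustration, a sketch of how a caller can treat the return codes once
backlog queueing is requested; crypto_wait_req() folds both -EBUSY and
-EINPROGRESS into a single synchronous wait:
	#include <crypto/hash.h>

	static int digest_with_backlog(struct ahash_request *req)
	{
		DECLARE_CRYPTO_WAIT(wait);

		ahash_request_set_callback(req,
					   CRYPTO_TFM_REQ_MAY_BACKLOG |
					   CRYPTO_TFM_REQ_MAY_SLEEP,
					   crypto_req_done, &wait);

		/* -EBUSY now simply means the request was backlogged. */
		return crypto_wait_req(crypto_ahash_digest(req), &wait);
	}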
Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
There are already helpers to (un)register multiple normal
and AEAD algos. Add one for ahashes too.
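For illustration, a minimal sketch of a driver module using the new
helper; mydrv_algs is a hypothetical driver-defined array:
	#include <crypto/internal/hash.h>
	#include <linux/module.h>

	/* Hypothetical array of fully initialized ahash algorithms. */
	extern struct ahash_alg mydrv_algs[2];

	static int __init mydrv_init(void)
	{
		/* Registers all entries, rolling back on failure. */
		return crypto_register_ahashes(mydrv_algs,
					       ARRAY_SIZE(mydrv_algs));
	}

	static void __exit mydrv_exit(void)
	{
		crypto_unregister_ahashes(mydrv_algs, ARRAY_SIZE(mydrv_algs));
	}

	module_init(mydrv_init);
	module_exit(mydrv_exit);
	MODULE_LICENSE("GPL");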
Signed-off-by: Lars Persson <larper@axis.com>
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The ahash API modifies the request's callback function in order
to clean up after itself in some corner cases (unaligned final
and missing finup).
When the request is complete ahash will restore the original
callback and everything is fine. However, when the request gets
an EBUSY on a full queue, an EINPROGRESS callback is made while
the request is still ongoing.
In this case the ahash API will incorrectly call its own callback.
This patch fixes the problem by creating a temporary request
object on the stack which is used to relay EINPROGRESS back to
the original completion function.
This patch also adds code to preserve the original flags value.
Fixes: ab6bf4e5e5e4 ("crypto: hash - Fix the pointer voodoo in...")
Cc: <stable@vger.kernel.org>
Reported-by: Sabrina Dubroca <sd@queasysnail.net>
Tested-by: Sabrina Dubroca <sd@queasysnail.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Continuing from this commit: 52f5684c8e1e
("kernel: use macros from compiler.h instead of __attribute__((...))")
I submitted 4 total patches. They are part of task I've taken up to
increase compiler portability in the kernel. I've cleaned up the
subsystems under /kernel /mm /block and /security, this patch targets
/crypto.
There is <linux/compiler.h> which provides macros for various gcc specific
constructs. Eg: __weak for __attribute__((weak)). I've cleaned all
instances of gcc specific attributes with the right macros for the crypto
subsystem.
I had to make one additional change into compiler-gcc.h for the case when
one wants to use this: __attribute__((aligned) and not specify an alignment
factor. From the gcc docs, this will result in the largest alignment for
that data type on the target machine so I've named the macro
__aligned_largest. Please advise if another name is more appropriate.
Signed-off-by: Gideon Israel Dsouza <gidisrael@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The function crypto_ahash_extsize did not include padding when
computing the tfm context size. This patch fixes this by using
the generic crypto_alg_extsize helper.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The crypto hash walk code is broken when supplied with an offset
greater than or equal to PAGE_SIZE. This patch fixes it by adjusting
walk->pg and walk->offset when this happens.
Cc: <stable@vger.kernel.org>
Reported-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch removes all traces of the crypto_hash interface, now
that everyone has switched over to shash or ahash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds the helper crypto_has_ahash which should replace
crypto_has_hash.
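For illustration, a one-line probe, assuming the usual (name, type, mask)
lookup arguments:
	#include <crypto/hash.h>

	/* True if any provider of "sha256" can be instantiated as ahash. */
	static bool have_sha256(void)
	{
		return crypto_has_ahash("sha256", 0, 0);
	}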
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds a way for ahash users to determine whether a key
is required by a crypto_ahash transform.
Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Unlike shash algorithms, ahash drivers must implement export
and import as their descriptors may contain hardware state and
cannot be exported as is. Unfortunately some ahash drivers did
not provide them and ended up causing crashes with algif_hash.
This patch adds a check to prevent these drivers from registering
ahash algorithms until they are fixed.
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Modify crypto drivers to use the generic SG helper since
both of them are equivalent and the one from crypto is redundant.
See also:
468577abe37ff7b453a9ac613e0ea155349203ae reverted in
b2ab4a57b018aafbba35bff088218f5cc3d2142e
Signed-off-by: Cristian Stoica <cristian.stoica@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Fixed style error identified by checkpatch.
WARNING: Missing a blank line after declarations
+ unsigned int unaligned = alignmask + 1 - (offset & alignmask);
+ if (nbytes > unaligned)
Signed-off-by: Joshua I. James <joshua@cybercrimetech.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
For the special case when we have a null input string, we want
to initialize the entry len to 0 for the hash/ahash walk, so
crypto_hash_walk_last will return the correct result indicating
that we have completed the scatter list walk. Otherwise we may
keep walking the sg list and access bogus memory address.
Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Although the existing hash walk interface has already been used
by a number of ahash crypto drivers, it turns out that none of
them were really asynchronous. They were all essentially polling
for completion.
That's why nobody has noticed until now that the walk interface
couldn't work with a real asynchronous driver since the memory
is mapped using kmap_atomic.
As we now have a use-case for a real ahash implementation on x86,
this patch creates a minimal ahash walk interface. Basically it
just calls kmap instead of kmap_atomic and does away with the
crypto_yield call. Real ahash crypto drivers don't need to yield
since by definition they won't be hogging the CPU.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The ahash_def_finup() can make use of the request save/restore functions,
thus make it so. This simplifies the code a little and unifies the code
paths.
Note that the same remark about free()ing the req->priv applies here, the
req->priv can only be free()'d after the original request was restored.
Finally, squash a bug in the invocation of completion in the ASYNC path.
In both ahash_def_finup_done{1,2}, the function areq->base.complete(X, err);
was called with X=areq->base.data. This is incorrect, as X=&areq->base
is the correct value. By analysis of the data structures, we see that areq is
of type 'struct ahash_request', areq->base is of type 'struct crypto_async_request'
and areq->base.complete is of type crypto_completion_t, which is defined in
include/linux/crypto.h as:
typedef void (*crypto_completion_t)(struct crypto_async_request *req, int err);
This is one lead that X should be &areq->base. Next up, we can inspect
other code which calls the completion callback to give us kind-of statistical
idea of how this callback is used. We can try:
$ git grep base\.complete\( drivers/crypto/
Finally, by inspecting the ahash_request_set_callback() implementation defined
in include/crypto/hash.h, we observe that the .data entry of 'struct
crypto_async_request' is intended for arbitrary data, not for completion
argument.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The functions to save the original request within a newly adjusted request
and its counterpart to restore the original request can be re-used by
more code in the crypto/ahash.c file. Pull these functions out from the
code so they're available.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Add documentation for the pointer voodoo that is happening in crypto/ahash.c
in ahash_op_unaligned(). This code is quite confusing, so add a beefy chunk
of documentation.
Moreover, make sure the mangled request is completely restored after finishing
this unaligned operation. This means restoring all of .result, .base.data
and .base.complete.
Also, remove the crypto_completion_t complete = ... line present in the
ahash_op_unaligned_done() function. This type actually declares a function
pointer, which is very confusing.
Finally, yet very important nonetheless, make sure the req->priv is free()'d
only after the original request is restored in ahash_op_unaligned_done().
The req->priv data must not be free()'d before that in ahash_op_unaligned_finish(),
since we would be accessing previously free()'d data in ahash_op_unaligned_done()
and cause corruption.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Shawn Guo <shawn.guo@linaro.org>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When finishing the ahash request, the ahash_op_unaligned_done() will
call complete() on the request. Yet, this will not call the correct
complete callback. The correct complete callback was previously stored
in the requests' private data, as seen in ahash_op_unaligned(). This
patch restores the correct complete callback and .data field of the
request before calling complete() on it.
Signed-off-by: Marek Vasut <marex@denx.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Fabio Estevam <fabio.estevam@freescale.com>
Cc: Shawn Guo <shawn.guo@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Three errors resulting in kernel memory disclosure:
1/ The structures used for the netlink based crypto algorithm report API
are located on the stack. As snprintf() does not fill the remainder of
the buffer with null bytes, those stack bytes will be disclosed to users
of the API. Switch to strncpy() to fix this.
2/ crypto_report_one() does not initialize all field of struct
crypto_user_alg. Fix this to fix the heap info leak.
3/ For the module name we should copy only as many bytes as
module_name() returns -- not as much as the destination buffer could
hold. But the current code does not and therefore copies random data
from behind the end of the module name, as the module name is always
shorter than CRYPTO_MAX_ALG_NAME.
Also switch to use strncpy() to copy the algorithm's name and
driver_name. They are strings, after all.
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
These macros contain a hidden goto, and are thus extremely error
prone and make code hard to audit.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Signed-off-by: Cong Wang <amwang@redhat.com>
|
|
The report functions use NLA_PUT so we need to ensure that NET
is enabled.
Reported-by: Luis Henriques <henrix@camandro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
If a scatterwalk chain contains an entry with an unaligned offset then
hash_walk_next() will cut off the next step at the next alignment point.
However, if the entry ends before the next alignment point then we end up
in a loop, which leads to a kernel oops.
Fix this by checking whether the next alignment point is before the end of
the current entry.
Signed-off-by: Szilveszter Ördög <slipszi@gmail.com>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The correct way to calculate the start of the aligned part of an
unaligned buffer is:
offset = ALIGN(offset, alignmask + 1);
However, crypto_hash_walk_done() has:
offset += alignmask - 1;
offset = ALIGN(offset, alignmask + 1);
which actually skips a whole block unless offset % (alignmask + 1) == 1.
This patch fixes the problem.
Signed-off-by: Szilveszter Ördög <slipszi@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
ahash_op_unaligned() and ahash_def_finup() allocate memory atomically,
regardless whether the request can sleep or not. This patch changes
this to use GFP_KERNEL if the request can sleep.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
When the alignment check was made unconditional for ahash we
may end up crashing on shash algorithms because we're always
calling alg->setkey instead of tfm->setkey.
This patch fixes it.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch exports the finup operation where available and adds
a default finup operation for ahash. The operations final, finup
and digest will now also deal with unaligned result pointers by
copying them. Finally, export/import operations will now be
exported too.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
We currently use GFP_ATOMIC in the unaligned setkey function
to allocate the temporary aligned buffer. Since setkey must
be called in a sleepable context, we can use GFP_KERNEL instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Some unaligned buffers on the stack weren't zapped properly which
may cause secret data to be leaked. This patch fixes them by doing
a zero memset.
It is also possible for us to place random kernel stack contents
in the digest buffer if a digest operation fails. This is fixed
by only copying if the operation succeeded.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Now that all ahash implementations have been converted to the new
ahash type, we can remove old_ahash_alg and its associated support.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds support for creating ahash instances and using
ahash as spawns.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch converts crypto_ahash to the new style. The old ahash
algorithm type is retained until the existing ahash implementations
are also converted. All ahash users will automatically get the
new crypto_ahash type.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
A quirk that we've always supported is having an sg entry that's
bigger than a page, or more generally an sg entry that crosses
page boundaries. Even though it would be better to explicitly have
to sg entries for this, we need to support it for the existing users,
in particular, IPsec.
The new ahash sg walking code did try to handle this, but there was
a bug where we didn't increment the page so kept on walking on the
first page over an dover again.
This patch fixes it.
Tested-by: Martin Willi <martin@strongswan.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
crypto_ahash_show changed to use cra_ahash for digestsize reference.
Signed-off-by: Lee Nipper <lee.nipper@freescale.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
Since most cryptographic hash algorithms have no keys, this patch
makes the setkey function optional for ahash and shash.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch allows shash algorithms to be used through the old hash
interface. This is a transitional measure so we can convert the
underlying algorithms to shash before converting the users across.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
It is often useful to save the partial state of a hash function
so that it can be used as a base for two or more computations.
The most prominent example is HMAC where all hashes start from
a base determined by the key. Having an import/export interface
means that we only have to compute that base once rather than
for each message.
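For illustration, a minimal sketch of the precompute pattern in today's
API, assuming two requests on the same tfm:
	#include <crypto/hash.h>
	#include <linux/slab.h>

	static int fork_hash_state(struct ahash_request *base,
				   struct ahash_request *copy)
	{
		struct crypto_ahash *tfm = crypto_ahash_reqtfm(base);
		void *state;
		int err;

		state = kmalloc(crypto_ahash_statesize(tfm), GFP_KERNEL);
		if (!state)
			return -ENOMEM;

		/* Save the partial state of "base"... */
		err = crypto_ahash_export(base, state);
		/* ...and resume a second computation from it. */
		if (!err)
			err = crypto_ahash_import(copy, state);

		kfree_sensitive(state);
		return err;
	}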
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds the walking helpers for hash algorithms akin to
those of block ciphers. This is a necessary step before we can
reimplement existing hash algorithms using the new ahash interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The base field in ahash_tfm appears to have been cut-n-pasted from
ablkcipher. It isn't needed here at all. Similarly, the info field
in ahash_request also appears to have originated from its cipher
counterpart and is vestigial.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
The digest size check on hash algorithms is incorrect. It's
perfectly valid for hash algorithms to have a digest length
longer than their block size. For example crc32c has a block
size of 1 and a digest size of 4. Rather than having it lie
about its block size, this patch fixes the checks to do what
they really should, which is to bound the digest size so that
code placing the digest on the stack continues to work.
HMAC however still needs to check this as it's only defined
for such algorithms.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|
|
This patch adds asynchronous hash and digest support.
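For illustration, a minimal end-to-end sketch of the interface this
introduced, in its present-day form (one-shot digest via an SG list over
a linearly mapped buffer):
	#include <crypto/hash.h>
	#include <linux/scatterlist.h>

	static int sha256_digest_buf(void *data, unsigned int len, u8 *out)
	{
		struct crypto_ahash *tfm;
		struct ahash_request *req;
		struct scatterlist sg;
		DECLARE_CRYPTO_WAIT(wait);
		int err;

		tfm = crypto_alloc_ahash("sha256", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		req = ahash_request_alloc(tfm, GFP_KERNEL);
		if (!req) {
			err = -ENOMEM;
			goto out_tfm;
		}

		/* data must be addressable via the linear map (e.g. kmalloc'ed). */
		sg_init_one(&sg, data, len);
		ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP,
					   crypto_req_done, &wait);
		ahash_request_set_crypt(req, &sg, out, len);

		/* Waits for completion if the driver runs asynchronously. */
		err = crypto_wait_req(crypto_ahash_digest(req), &wait);

		ahash_request_free(req);
	out_tfm:
		crypto_free_ahash(tfm);
		return err;
	}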
Signed-off-by: Loc Ho <lho@amcc.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
|