[OpenWrt-Devel] [PATCH] mvebu: Add SafeXcel crypto engine (Armada 37xx)

John Crispin john at phrozen.org
Mon Aug 13 10:56:44 EDT 2018



On 13/08/18 02:53, Marek Behún wrote:
> These are backported patches from mainline kernel needed to support
> Inside Secure's SafeXcel EIP97 crypto engine on Armada 37xx
> (EspressoBin).
>
> We also define a kernel package for crypto_safexcel in
> target/mvebu/modules.mk.
>
> Signed-off-by: Marek Behun <marek.behun at nic.cz>

Hi Marek,
We just had a lengthy discussion on IRC with 4-5 folks. We think that
adding 40 patches to the tree to make hardware crypto work on a single
board that we support is pretty much overkill, even if they are backports.
We recently had to disable hardware crypto on a different mvebu device
because it broke IPsec, and another backport broke RTC support on mvebu.
We believe this can all lead to bug reports that we cannot fix, as no one
involved actually has this hardware. We'd prefer for this to arrive in the
tree with a kernel update rather than as a backport. Sorry ...
     John

> ---
>   target/linux/mvebu/modules.mk                      |  25 +
>   ...ide-secure-remove-null-check-before-kfree.patch |  33 +
>   ...de-secure-do-not-use-areq-result-for-part.patch |  63 ++
>   ...pto-inside-secure-remove-extra-empty-line.patch |  28 +
>   ...rypto-inside-secure-fix-typo-in-a-comment.patch |  29 +
>   ...rypto-inside-secure-remove-useless-memset.patch |  30 +
>   ...de-secure-refrain-from-unneeded-invalidat.patch |  91 +++
>   ...de-secure-EBUSY-is-not-an-error-on-async-.patch |  35 +
>   ...de-secure-move-cipher-crypto-mode-to-requ.patch |  76 ++
>   ...de-secure-remove-unused-parameter-in-inva.patch |  74 ++
>   ...de-secure-move-request-dequeueing-into-a-.patch | 204 +++++
>   ...de-secure-use-threaded-IRQs-for-result-ha.patch | 136 ++++
>   ...nside-secure-dequeue-all-requests-at-once.patch | 179 +++++
>   ...ypto-inside-secure-increase-the-ring-size.patch |  37 +
>   ...de-secure-acknowledge-the-result-requests.patch |  62 ++
>   ...de-secure-handle-more-result-requests-whe.patch |  70 ++
>   ...de-secure-retry-to-proceed-the-request-la.patch | 103 +++
>   .../616-crypto-inside-secure-EIP97-support.patch   | 841 +++++++++++++++++++++
>   ...de-secure-make-function-safexcel_try_push.patch |  38 +
>   ...de-secure-do-not-overwrite-the-threshold-.patch |  40 +
>   ...de-secure-keep-the-requests-push-pop-sync.patch | 136 ++++
>   ...de-secure-unmap-the-result-in-the-hash-se.patch |  42 +
>   ...de-secure-move-hash-result-dma-mapping-to.patch | 115 +++
>   ...de-secure-move-cache-result-dma-mapping-t.patch | 152 ++++
>   ...de-secure-fix-missing-unlock-on-error-in-.patch |  36 +
>   ...nside-secure-improve-clock-initialization.patch |  48 ++
>   ...de-secure-fix-clock-resource-by-adding-a-.patch | 146 ++++
>   ...de-secure-move-the-digest-to-the-request-.patch | 161 ++++
>   ...de-secure-fix-typo-s-allways-always-in-a-.patch |  45 ++
>   ...side-secure-fix-a-typo-in-a-register-name.patch |  45 ++
>   ...inside-secure-improve-the-send-error-path.patch |  50 ++
>   ...de-secure-do-not-access-buffers-mapped-to.patch |  46 ++
>   ...-inside-secure-improve-the-skcipher-token.patch |  36 +
>   ...de-secure-the-context-ipad-opad-should-us.patch |  42 +
>   ...-crypto-inside-secure-hmac-sha256-support.patch | 174 +++++
>   ...-crypto-inside-secure-hmac-sha224-support.patch | 110 +++
>   ...dts-marvell-armada-37xx-add-a-crypto-node.patch |  42 +
>   37 files changed, 3620 insertions(+)
>   create mode 100644 target/linux/mvebu/modules.mk
>   create mode 100644 target/linux/mvebu/patches-4.14/600-crypto-inside-secure-remove-null-check-before-kfree.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/601-crypto-inside-secure-do-not-use-areq-result-for-part.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/602-crypto-inside-secure-remove-extra-empty-line.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/603-crypto-inside-secure-fix-typo-in-a-comment.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/604-crypto-inside-secure-remove-useless-memset.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/605-crypto-inside-secure-refrain-from-unneeded-invalidat.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/606-crypto-inside-secure-EBUSY-is-not-an-error-on-async-.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/607-crypto-inside-secure-move-cipher-crypto-mode-to-requ.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/608-crypto-inside-secure-remove-unused-parameter-in-inva.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/609-crypto-inside-secure-move-request-dequeueing-into-a-.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/610-crypto-inside-secure-use-threaded-IRQs-for-result-ha.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/611-crypto-inside-secure-dequeue-all-requests-at-once.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/612-crypto-inside-secure-increase-the-ring-size.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/613-crypto-inside-secure-acknowledge-the-result-requests.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/614-crypto-inside-secure-handle-more-result-requests-whe.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/615-crypto-inside-secure-retry-to-proceed-the-request-la.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/616-crypto-inside-secure-EIP97-support.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/617-crypto-inside-secure-make-function-safexcel_try_push.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/618-crypto-inside-secure-do-not-overwrite-the-threshold-.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/619-crypto-inside-secure-keep-the-requests-push-pop-sync.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/620-crypto-inside-secure-unmap-the-result-in-the-hash-se.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/621-crypto-inside-secure-move-hash-result-dma-mapping-to.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/622-crypto-inside-secure-move-cache-result-dma-mapping-t.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/623-crypto-inside-secure-fix-missing-unlock-on-error-in-.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/624-crypto-inside-secure-improve-clock-initialization.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/625-crypto-inside-secure-fix-clock-resource-by-adding-a-.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/626-crypto-inside-secure-move-the-digest-to-the-request-.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/627-crypto-inside-secure-fix-typo-s-allways-always-in-a-.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/628-crypto-inside-secure-fix-a-typo-in-a-register-name.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/629-crypto-inside-secure-improve-the-send-error-path.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/630-crypto-inside-secure-do-not-access-buffers-mapped-to.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/631-crypto-inside-secure-improve-the-skcipher-token.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/632-crypto-inside-secure-the-context-ipad-opad-should-us.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/633-crypto-inside-secure-hmac-sha256-support.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/634-crypto-inside-secure-hmac-sha224-support.patch
>   create mode 100644 target/linux/mvebu/patches-4.14/635-arm64-dts-marvell-armada-37xx-add-a-crypto-node.patch
>
> diff --git a/target/linux/mvebu/modules.mk b/target/linux/mvebu/modules.mk
> new file mode 100644
> index 0000000000..f0be14fcdb
> --- /dev/null
> +++ b/target/linux/mvebu/modules.mk
> @@ -0,0 +1,25 @@
> +define KernelPackage/crypto-hw-safexcel
> +  TITLE:= MVEBU SafeXcel Crypto Engine module
> +  DEPENDS:=@TARGET_mvebu
> +  KCONFIG:= \
> +	CONFIG_CRYPTO_HW=y \
> +	CONFIG_CRYPTO_AES=y \
> +	CONFIG_CRYPTO_BLKCIPHER=y \
> +	CONFIG_CRYPTO_HASH=y \
> +	CONFIG_CRYPTO_HMAC=y \
> +	CONFIG_CRYPTO_SHA1=y \
> +	CONFIG_CRYPTO_SHA256=y \
> +	CONFIG_CRYPTO_SHA512=y \
> +	CONFIG_CRYPTO_DEV_SAFEXCEL
> +  FILES:=$(LINUX_DIR)/drivers/crypto/inside-secure/crypto_safexcel.ko
> +  AUTOLOAD:=$(call AutoLoad,90,crypto_safexcel)
> +  $(call AddDepends/crypto)
> +endef
> +
> +define KernelPackage/crypto-hw-safexcel/description
> +	MVEBU's EIP97 Cryptographic Engine driver designed by Inside Secure.
> +	This is found on Marvell Armada 37xx/7k/8k SoCs, for example on
> +	EspressoBin.
> +endef
> +
> +$(eval $(call KernelPackage,crypto-hw-safexcel))
> diff --git a/target/linux/mvebu/patches-4.14/600-crypto-inside-secure-remove-null-check-before-kfree.patch b/target/linux/mvebu/patches-4.14/600-crypto-inside-secure-remove-null-check-before-kfree.patch
> new file mode 100644
> index 0000000000..4abad3f62e
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/600-crypto-inside-secure-remove-null-check-before-kfree.patch
> @@ -0,0 +1,33 @@
> +From f546f46310528adb05b9fbbd51b7a17d2e59784f Mon Sep 17 00:00:00 2001
> +From: Himanshu Jha <himanshujha199640 at gmail.com>
> +Date: Sun, 27 Aug 2017 02:45:30 +0530
> +Subject: [PATCH 01/36] crypto: inside-secure - remove null check before kfree
> +
> +Kfree on NULL pointer is a no-op and therefore checking is redundant.
> +
> +Signed-off-by: Himanshu Jha <himanshujha199640 at gmail.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 6 ++----
> + 1 file changed, 2 insertions(+), 4 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 69f29776591a..46c2e15c0931 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -326,10 +326,8 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 		ctx->base.cache_sz = 0;
> + 	}
> + free_cache:
> +-	if (ctx->base.cache) {
> +-		kfree(ctx->base.cache);
> +-		ctx->base.cache = NULL;
> +-	}
> ++	kfree(ctx->base.cache);
> ++	ctx->base.cache = NULL;
> +
> + unlock:
> + 	spin_unlock_bh(&priv->ring[ring].egress_lock);
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/601-crypto-inside-secure-do-not-use-areq-result-for-part.patch b/target/linux/mvebu/patches-4.14/601-crypto-inside-secure-do-not-use-areq-result-for-part.patch
> new file mode 100644
> index 0000000000..8333740931
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/601-crypto-inside-secure-do-not-use-areq-result-for-part.patch
> @@ -0,0 +1,63 @@
> +From 9623afc293461e83ecfab48df6334f23ba0eb90e Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Mon, 11 Dec 2017 12:10:58 +0100
> +Subject: [PATCH 02/36] crypto: inside-secure - do not use areq->result for
> + partial results
> +
> +This patches update the SafeXcel driver to stop using the crypto
> +ahash_request result field for partial results (i.e. on updates).
> +Instead the driver local safexcel_ahash_req state field is used, and
> +only on final operations the ahash_request result buffer is updated.
> +
> +Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 10 +++++-----
> + 1 file changed, 5 insertions(+), 5 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 46c2e15c0931..c20c4db12190 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -37,7 +37,7 @@ struct safexcel_ahash_req {
> + 	int nents;
> +
> + 	u8 state_sz;    /* expected sate size, only set once */
> +-	u32 state[SHA256_DIGEST_SIZE / sizeof(u32)];
> ++	u32 state[SHA256_DIGEST_SIZE / sizeof(u32)] __aligned(sizeof(u32));
> +
> + 	u64 len;
> + 	u64 processed;
> +@@ -130,7 +130,7 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
> + 	struct ahash_request *areq = ahash_request_cast(async);
> + 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq);
> + 	struct safexcel_ahash_req *sreq = ahash_request_ctx(areq);
> +-	int cache_len, result_sz = sreq->state_sz;
> ++	int cache_len;
> +
> + 	*ret = 0;
> +
> +@@ -151,8 +151,8 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
> + 	spin_unlock_bh(&priv->ring[ring].egress_lock);
> +
> + 	if (sreq->finish)
> +-		result_sz = crypto_ahash_digestsize(ahash);
> +-	memcpy(sreq->state, areq->result, result_sz);
> ++		memcpy(areq->result, sreq->state,
> ++		       crypto_ahash_digestsize(ahash));
> +
> + 	if (sreq->nents) {
> + 		dma_unmap_sg(priv->dev, areq->src, sreq->nents, DMA_TO_DEVICE);
> +@@ -292,7 +292,7 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 	/* Add the token */
> + 	safexcel_hash_token(first_cdesc, len, req->state_sz);
> +
> +-	ctx->base.result_dma = dma_map_single(priv->dev, areq->result,
> ++	ctx->base.result_dma = dma_map_single(priv->dev, req->state,
> + 					      req->state_sz, DMA_FROM_DEVICE);
> + 	if (dma_mapping_error(priv->dev, ctx->base.result_dma)) {
> + 		ret = -EINVAL;
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/602-crypto-inside-secure-remove-extra-empty-line.patch b/target/linux/mvebu/patches-4.14/602-crypto-inside-secure-remove-extra-empty-line.patch
> new file mode 100644
> index 0000000000..2a20bbc98e
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/602-crypto-inside-secure-remove-extra-empty-line.patch
> @@ -0,0 +1,28 @@
> +From 505e74ab4b15db7c9ff13843b1ac3030f905e967 Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:43 +0100
> +Subject: [PATCH 03/36] crypto: inside-secure - remove extra empty line
> +
> +Cosmetic patch removing an extra empty line between header inclusions.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 1 -
> + 1 file changed, 1 deletion(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index c20c4db12190..37e7fcd2f54b 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -14,7 +14,6 @@
> + #include <linux/dma-mapping.h>
> + #include <linux/dmapool.h>
> +
> +-
> + #include "safexcel.h"
> +
> + struct safexcel_ahash_ctx {
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/603-crypto-inside-secure-fix-typo-in-a-comment.patch b/target/linux/mvebu/patches-4.14/603-crypto-inside-secure-fix-typo-in-a-comment.patch
> new file mode 100644
> index 0000000000..cbb2a40eee
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/603-crypto-inside-secure-fix-typo-in-a-comment.patch
> @@ -0,0 +1,29 @@
> +From d6f5a9a4252bc5a2fae8cadf1b772ae0b1957f33 Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:44 +0100
> +Subject: [PATCH 04/36] crypto: inside-secure - fix typo in a comment
> +
> +Cosmetic patch fixing one typo in one of the driver's comments.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 2 +-
> + 1 file changed, 1 insertion(+), 1 deletion(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 37e7fcd2f54b..50c28da35b0d 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -522,7 +522,7 @@ static int safexcel_ahash_cache(struct ahash_request *areq)
> + 		return areq->nbytes;
> + 	}
> +
> +-	/* We could'nt cache all the data */
> ++	/* We couldn't cache all the data */
> + 	return -E2BIG;
> + }
> +
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/604-crypto-inside-secure-remove-useless-memset.patch b/target/linux/mvebu/patches-4.14/604-crypto-inside-secure-remove-useless-memset.patch
> new file mode 100644
> index 0000000000..025936aa8f
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/604-crypto-inside-secure-remove-useless-memset.patch
> @@ -0,0 +1,30 @@
> +From 0bfd4e3e21e24ad77b32186d894bd57d08386e20 Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:45 +0100
> +Subject: [PATCH 05/36] crypto: inside-secure - remove useless memset
> +
> +This patch removes an useless memset in the ahash_export function, as
> +the zeroed buffer will be entirely overridden the next line.
> +
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 1 -
> + 1 file changed, 1 deletion(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 50c28da35b0d..8ed46ff4cbf9 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -642,7 +642,6 @@ static int safexcel_ahash_export(struct ahash_request *areq, void *out)
> + 	export->processed = req->processed;
> +
> + 	memcpy(export->state, req->state, req->state_sz);
> +-	memset(export->cache, 0, crypto_ahash_blocksize(ahash));
> + 	memcpy(export->cache, req->cache, crypto_ahash_blocksize(ahash));
> +
> + 	return 0;
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/605-crypto-inside-secure-refrain-from-unneeded-invalidat.patch b/target/linux/mvebu/patches-4.14/605-crypto-inside-secure-refrain-from-unneeded-invalidat.patch
> new file mode 100644
> index 0000000000..9a43a24965
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/605-crypto-inside-secure-refrain-from-unneeded-invalidat.patch
> @@ -0,0 +1,91 @@
> +From a05f52e7856dd8f3ca40960ee45807ef6c4b87cf Mon Sep 17 00:00:00 2001
> +From: Ofer Heifetz <oferh at marvell.com>
> +Date: Thu, 14 Dec 2017 15:26:47 +0100
> +Subject: [PATCH 06/36] crypto: inside-secure - refrain from unneeded
> + invalidations
> +
> +The check to know if an invalidation is needed (i.e. when the context
> +changes) is done even if the context does not exist yet. This happens
> +when first setting a key for ciphers and/or hmac operations.
> +
> +This commits adds a check in the _setkey functions to only check if an
> +invalidation is needed when a context exists, as there is no need to
> +perform this check otherwise.
> +
> +Signed-off-by: Ofer Heifetz <oferh at marvell.com>
> +[Antoine: commit message and added a comment and reworked one of the
> +checks]
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_cipher.c | 10 ++++++----
> + drivers/crypto/inside-secure/safexcel_hash.c   | 24 ++++++++++++++++--------
> + 2 files changed, 22 insertions(+), 12 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
> +index 29cf7e00b574..6d8bc6a3fe5b 100644
> +--- a/drivers/crypto/inside-secure/safexcel_cipher.c
> ++++ b/drivers/crypto/inside-secure/safexcel_cipher.c
> +@@ -78,10 +78,12 @@ static int safexcel_aes_setkey(struct crypto_skcipher *ctfm, const u8 *key,
> + 		return ret;
> + 	}
> +
> +-	for (i = 0; i < len / sizeof(u32); i++) {
> +-		if (ctx->key[i] != cpu_to_le32(aes.key_enc[i])) {
> +-			ctx->base.needs_inv = true;
> +-			break;
> ++	if (ctx->base.ctxr_dma) {
> ++		for (i = 0; i < len / sizeof(u32); i++) {
> ++			if (ctx->key[i] != cpu_to_le32(aes.key_enc[i])) {
> ++				ctx->base.needs_inv = true;
> ++				break;
> ++			}
> + 		}
> + 	}
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 8ed46ff4cbf9..955c242da244 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -535,10 +535,16 @@ static int safexcel_ahash_enqueue(struct ahash_request *areq)
> +
> + 	req->needs_inv = false;
> +
> +-	if (req->processed && ctx->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED)
> +-		ctx->base.needs_inv = safexcel_ahash_needs_inv_get(areq);
> +-
> + 	if (ctx->base.ctxr) {
> ++		if (!ctx->base.needs_inv && req->processed &&
> ++		    ctx->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED)
> ++			/* We're still setting needs_inv here, even though it is
> ++			 * cleared right away, because the needs_inv flag can be
> ++			 * set in other functions and we want to keep the same
> ++			 * logic.
> ++			 */
> ++			ctx->base.needs_inv = safexcel_ahash_needs_inv_get(areq);
> ++
> + 		if (ctx->base.needs_inv) {
> + 			ctx->base.needs_inv = false;
> + 			req->needs_inv = true;
> +@@ -936,11 +942,13 @@ static int safexcel_hmac_sha1_setkey(struct crypto_ahash *tfm, const u8 *key,
> + 	if (ret)
> + 		return ret;
> +
> +-	for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(u32); i++) {
> +-		if (ctx->ipad[i] != le32_to_cpu(istate.state[i]) ||
> +-		    ctx->opad[i] != le32_to_cpu(ostate.state[i])) {
> +-			ctx->base.needs_inv = true;
> +-			break;
> ++	if (ctx->base.ctxr) {
> ++		for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(u32); i++) {
> ++			if (ctx->ipad[i] != le32_to_cpu(istate.state[i]) ||
> ++			    ctx->opad[i] != le32_to_cpu(ostate.state[i])) {
> ++				ctx->base.needs_inv = true;
> ++				break;
> ++			}
> + 		}
> + 	}
> +
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/606-crypto-inside-secure-EBUSY-is-not-an-error-on-async-.patch b/target/linux/mvebu/patches-4.14/606-crypto-inside-secure-EBUSY-is-not-an-error-on-async-.patch
> new file mode 100644
> index 0000000000..a901c4b7e1
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/606-crypto-inside-secure-EBUSY-is-not-an-error-on-async-.patch
> @@ -0,0 +1,35 @@
> +From ab8541f44b5856339f6ccd3c49b443b25091ff90 Mon Sep 17 00:00:00 2001
> +From: Ofer Heifetz <oferh at marvell.com>
> +Date: Thu, 14 Dec 2017 15:26:48 +0100
> +Subject: [PATCH 07/36] crypto: inside-secure - EBUSY is not an error on async
> + request
> +
> +When initializing the IVs crypto_ahash_update() is called, which at some
> +point will call crypto_enqueue_request(). This function can return
> +-EBUSY when no resource is available and the request is queued. Since
> +this is a valid case, -EBUSY shouldn't be treated as an error.
> +
> +Signed-off-by: Ofer Heifetz <oferh at marvell.com>
> +[Antoine: commit message]
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 2 +-
> + 1 file changed, 1 insertion(+), 1 deletion(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 955c242da244..f32985e56668 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -870,7 +870,7 @@ static int safexcel_hmac_init_iv(struct ahash_request *areq,
> + 	req->last_req = true;
> +
> + 	ret = crypto_ahash_update(areq);
> +-	if (ret && ret != -EINPROGRESS)
> ++	if (ret && ret != -EINPROGRESS && ret != -EBUSY)
> + 		return ret;
> +
> + 	wait_for_completion_interruptible(&result.completion);
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/607-crypto-inside-secure-move-cipher-crypto-mode-to-requ.patch b/target/linux/mvebu/patches-4.14/607-crypto-inside-secure-move-cipher-crypto-mode-to-requ.patch
> new file mode 100644
> index 0000000000..73b6599422
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/607-crypto-inside-secure-move-cipher-crypto-mode-to-requ.patch
> @@ -0,0 +1,76 @@
> +From 27e6b38e3b123978a1eee928e5b7e5aea9349e31 Mon Sep 17 00:00:00 2001
> +From: Ofer Heifetz <oferh at marvell.com>
> +Date: Thu, 14 Dec 2017 15:26:49 +0100
> +Subject: [PATCH 08/36] crypto: inside-secure - move cipher crypto mode to
> + request context
> +
> +The cipher direction can be different for requests within the same
> +transformation context. This patch moves the direction flag from the
> +context to the request scope.
> +
> +Signed-off-by: Ofer Heifetz <oferh at marvell.com>
> +[Antoine: commit message]
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_cipher.c | 11 +++++++----
> + 1 file changed, 7 insertions(+), 4 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
> +index 6d8bc6a3fe5b..5af0c890646d 100644
> +--- a/drivers/crypto/inside-secure/safexcel_cipher.c
> ++++ b/drivers/crypto/inside-secure/safexcel_cipher.c
> +@@ -27,7 +27,6 @@ struct safexcel_cipher_ctx {
> + 	struct safexcel_context base;
> + 	struct safexcel_crypto_priv *priv;
> +
> +-	enum safexcel_cipher_direction direction;
> + 	u32 mode;
> +
> + 	__le32 key[8];
> +@@ -35,6 +34,7 @@ struct safexcel_cipher_ctx {
> + };
> +
> + struct safexcel_cipher_req {
> ++	enum safexcel_cipher_direction direction;
> + 	bool needs_inv;
> + };
> +
> +@@ -97,12 +97,15 @@ static int safexcel_aes_setkey(struct crypto_skcipher *ctfm, const u8 *key,
> + }
> +
> + static int safexcel_context_control(struct safexcel_cipher_ctx *ctx,
> ++				    struct crypto_async_request *async,
> + 				    struct safexcel_command_desc *cdesc)
> + {
> + 	struct safexcel_crypto_priv *priv = ctx->priv;
> ++	struct skcipher_request *req = skcipher_request_cast(async);
> ++	struct safexcel_cipher_req *sreq = skcipher_request_ctx(req);
> + 	int ctrl_size;
> +
> +-	if (ctx->direction == SAFEXCEL_ENCRYPT)
> ++	if (sreq->direction == SAFEXCEL_ENCRYPT)
> + 		cdesc->control_data.control0 |= CONTEXT_CONTROL_TYPE_CRYPTO_OUT;
> + 	else
> + 		cdesc->control_data.control0 |= CONTEXT_CONTROL_TYPE_CRYPTO_IN;
> +@@ -245,7 +248,7 @@ static int safexcel_aes_send(struct crypto_async_request *async,
> + 		n_cdesc++;
> +
> + 		if (n_cdesc == 1) {
> +-			safexcel_context_control(ctx, cdesc);
> ++			safexcel_context_control(ctx, async, cdesc);
> + 			safexcel_cipher_token(ctx, async, cdesc, req->cryptlen);
> + 		}
> +
> +@@ -469,7 +472,7 @@ static int safexcel_aes(struct skcipher_request *req,
> + 	int ret, ring;
> +
> + 	sreq->needs_inv = false;
> +-	ctx->direction = dir;
> ++	sreq->direction = dir;
> + 	ctx->mode = mode;
> +
> + 	if (ctx->base.ctxr) {
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/608-crypto-inside-secure-remove-unused-parameter-in-inva.patch b/target/linux/mvebu/patches-4.14/608-crypto-inside-secure-remove-unused-parameter-in-inva.patch
> new file mode 100644
> index 0000000000..1f18e5d446
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/608-crypto-inside-secure-remove-unused-parameter-in-inva.patch
> @@ -0,0 +1,74 @@
> +From 66dbe290b05b716faefb8b13064501e4b503bea3 Mon Sep 17 00:00:00 2001
> +From: Ofer Heifetz <oferh at marvell.com>
> +Date: Thu, 14 Dec 2017 15:26:50 +0100
> +Subject: [PATCH 09/36] crypto: inside-secure - remove unused parameter in
> + invalidate_cache
> +
> +The SafeXcel context isn't used in the cache invalidation function. This
> +cosmetic patch removes it (as well as from the function prototype in the
> +header file and when the function is called).
> +
> +Signed-off-by: Ofer Heifetz <oferh at marvell.com>
> +[Antoine: commit message]
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c        | 1 -
> + drivers/crypto/inside-secure/safexcel.h        | 1 -
> + drivers/crypto/inside-secure/safexcel_cipher.c | 2 +-
> + drivers/crypto/inside-secure/safexcel_hash.c   | 2 +-
> + 4 files changed, 2 insertions(+), 4 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 3ee68ecde9ec..daeefef76f11 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -549,7 +549,6 @@ void safexcel_inv_complete(struct crypto_async_request *req, int error)
> + }
> +
> + int safexcel_invalidate_cache(struct crypto_async_request *async,
> +-			      struct safexcel_context *ctx,
> + 			      struct safexcel_crypto_priv *priv,
> + 			      dma_addr_t ctxr_dma, int ring,
> + 			      struct safexcel_request *request)
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 304c5838c11a..d12c2b479a5e 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -539,7 +539,6 @@ void safexcel_free_context(struct safexcel_crypto_priv *priv,
> + 				  struct crypto_async_request *req,
> + 				  int result_sz);
> + int safexcel_invalidate_cache(struct crypto_async_request *async,
> +-			      struct safexcel_context *ctx,
> + 			      struct safexcel_crypto_priv *priv,
> + 			      dma_addr_t ctxr_dma, int ring,
> + 			      struct safexcel_request *request);
> +diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
> +index 5af0c890646d..f5ffae2808a8 100644
> +--- a/drivers/crypto/inside-secure/safexcel_cipher.c
> ++++ b/drivers/crypto/inside-secure/safexcel_cipher.c
> +@@ -395,7 +395,7 @@ static int safexcel_cipher_send_inv(struct crypto_async_request *async,
> + 	struct safexcel_crypto_priv *priv = ctx->priv;
> + 	int ret;
> +
> +-	ret = safexcel_invalidate_cache(async, &ctx->base, priv,
> ++	ret = safexcel_invalidate_cache(async, priv,
> + 					ctx->base.ctxr_dma, ring, request);
> + 	if (unlikely(ret))
> + 		return ret;
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index f32985e56668..328ce02ac050 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -435,7 +435,7 @@ static int safexcel_ahash_send_inv(struct crypto_async_request *async,
> + 	struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq));
> + 	int ret;
> +
> +-	ret = safexcel_invalidate_cache(async, &ctx->base, ctx->priv,
> ++	ret = safexcel_invalidate_cache(async, ctx->priv,
> + 					ctx->base.ctxr_dma, ring, request);
> + 	if (unlikely(ret))
> + 		return ret;
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/609-crypto-inside-secure-move-request-dequeueing-into-a-.patch b/target/linux/mvebu/patches-4.14/609-crypto-inside-secure-move-request-dequeueing-into-a-.patch
> new file mode 100644
> index 0000000000..a6f015abf2
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/609-crypto-inside-secure-move-request-dequeueing-into-a-.patch
> @@ -0,0 +1,204 @@
> +From fb445c38843fa3651d01966c62a443a9435a1449 Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:51 +0100
> +Subject: [PATCH 10/36] crypto: inside-secure - move request dequeueing into a
> + workqueue
> +
> +This patch moves the request dequeueing into a workqueue to improve
> +interrupt coalescing when sending requests to the engine, as the engine
> +is capable of raising a single interrupt for n requests sent. Using a
> +workqueue allows more requests to be sent at once.
> +
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c        | 29 ++++++++++++++------------
> + drivers/crypto/inside-secure/safexcel.h        |  2 +-
> + drivers/crypto/inside-secure/safexcel_cipher.c | 12 +++++------
> + drivers/crypto/inside-secure/safexcel_hash.c   | 12 +++++------
> + 4 files changed, 29 insertions(+), 26 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index daeefef76f11..9043ab8c98cb 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -429,8 +429,6 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + 	struct safexcel_request *request;
> + 	int ret, nreq = 0, cdesc = 0, rdesc = 0, commands, results;
> +
> +-	priv->ring[ring].need_dequeue = false;
> +-
> + 	do {
> + 		spin_lock_bh(&priv->ring[ring].queue_lock);
> + 		backlog = crypto_get_backlog(&priv->ring[ring].queue);
> +@@ -445,8 +443,6 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + 			spin_lock_bh(&priv->ring[ring].queue_lock);
> + 			crypto_enqueue_request(&priv->ring[ring].queue, req);
> + 			spin_unlock_bh(&priv->ring[ring].queue_lock);
> +-
> +-			priv->ring[ring].need_dequeue = true;
> + 			goto finalize;
> + 		}
> +
> +@@ -455,7 +451,6 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + 		if (ret) {
> + 			kfree(request);
> + 			req->complete(req, ret);
> +-			priv->ring[ring].need_dequeue = true;
> + 			goto finalize;
> + 		}
> +
> +@@ -480,9 +475,7 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + 	} while (nreq++ < EIP197_MAX_BATCH_SZ);
> +
> + finalize:
> +-	if (nreq == EIP197_MAX_BATCH_SZ)
> +-		priv->ring[ring].need_dequeue = true;
> +-	else if (!nreq)
> ++	if (!nreq)
> + 		return;
> +
> + 	spin_lock_bh(&priv->ring[ring].lock);
> +@@ -637,13 +630,18 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + static void safexcel_handle_result_work(struct work_struct *work)
> + {
> + 	struct safexcel_work_data *data =
> +-			container_of(work, struct safexcel_work_data, work);
> ++			container_of(work, struct safexcel_work_data, result_work);
> + 	struct safexcel_crypto_priv *priv = data->priv;
> +
> + 	safexcel_handle_result_descriptor(priv, data->ring);
> ++}
> ++
> ++static void safexcel_dequeue_work(struct work_struct *work)
> ++{
> ++	struct safexcel_work_data *data =
> ++			container_of(work, struct safexcel_work_data, work);
> +
> +-	if (priv->ring[data->ring].need_dequeue)
> +-		safexcel_dequeue(data->priv, data->ring);
> ++	safexcel_dequeue(data->priv, data->ring);
> + }
> +
> + struct safexcel_ring_irq_data {
> +@@ -674,7 +672,10 @@ static irqreturn_t safexcel_irq_ring(int irq, void *data)
> + 			 */
> + 			dev_err(priv->dev, "RDR: fatal error.");
> + 		} else if (likely(stat & EIP197_xDR_THRESH)) {
> +-			queue_work(priv->ring[ring].workqueue, &priv->ring[ring].work_data.work);
> ++			queue_work(priv->ring[ring].workqueue,
> ++				   &priv->ring[ring].work_data.result_work);
> ++			queue_work(priv->ring[ring].workqueue,
> ++				   &priv->ring[ring].work_data.work);
> + 		}
> +
> + 		/* ACK the interrupts */
> +@@ -855,7 +856,9 @@ static int safexcel_probe(struct platform_device *pdev)
> +
> + 		priv->ring[i].work_data.priv = priv;
> + 		priv->ring[i].work_data.ring = i;
> +-		INIT_WORK(&priv->ring[i].work_data.work, safexcel_handle_result_work);
> ++		INIT_WORK(&priv->ring[i].work_data.result_work,
> ++			  safexcel_handle_result_work);
> ++		INIT_WORK(&priv->ring[i].work_data.work, safexcel_dequeue_work);
> +
> + 		snprintf(wq_name, 9, "wq_ring%d", i);
> + 		priv->ring[i].workqueue = create_singlethread_workqueue(wq_name);
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index d12c2b479a5e..8e9c65183439 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -459,6 +459,7 @@ struct safexcel_config {
> +
> + struct safexcel_work_data {
> + 	struct work_struct work;
> ++	struct work_struct result_work;
> + 	struct safexcel_crypto_priv *priv;
> + 	int ring;
> + };
> +@@ -489,7 +490,6 @@ struct safexcel_crypto_priv {
> + 		/* queue */
> + 		struct crypto_queue queue;
> + 		spinlock_t queue_lock;
> +-		bool need_dequeue;
> + 	} ring[EIP197_MAX_RINGS];
> + };
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
> +index f5ffae2808a8..7c9a2d87135b 100644
> +--- a/drivers/crypto/inside-secure/safexcel_cipher.c
> ++++ b/drivers/crypto/inside-secure/safexcel_cipher.c
> +@@ -358,8 +358,8 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv,
> + 	if (enq_ret != -EINPROGRESS)
> + 		*ret = enq_ret;
> +
> +-	if (!priv->ring[ring].need_dequeue)
> +-		safexcel_dequeue(priv, ring);
> ++	queue_work(priv->ring[ring].workqueue,
> ++		   &priv->ring[ring].work_data.work);
> +
> + 	*should_complete = false;
> +
> +@@ -448,8 +448,8 @@ static int safexcel_cipher_exit_inv(struct crypto_tfm *tfm)
> + 	crypto_enqueue_request(&priv->ring[ring].queue, &req->base);
> + 	spin_unlock_bh(&priv->ring[ring].queue_lock);
> +
> +-	if (!priv->ring[ring].need_dequeue)
> +-		safexcel_dequeue(priv, ring);
> ++	queue_work(priv->ring[ring].workqueue,
> ++		   &priv->ring[ring].work_data.work);
> +
> + 	wait_for_completion(&result.completion);
> +
> +@@ -495,8 +495,8 @@ static int safexcel_aes(struct skcipher_request *req,
> + 	ret = crypto_enqueue_request(&priv->ring[ring].queue, &req->base);
> + 	spin_unlock_bh(&priv->ring[ring].queue_lock);
> +
> +-	if (!priv->ring[ring].need_dequeue)
> +-		safexcel_dequeue(priv, ring);
> ++	queue_work(priv->ring[ring].workqueue,
> ++		   &priv->ring[ring].work_data.work);
> +
> + 	return ret;
> + }
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 328ce02ac050..6912c032200b 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -399,8 +399,8 @@ static int safexcel_handle_inv_result(struct safexcel_crypto_priv *priv,
> + 	if (enq_ret != -EINPROGRESS)
> + 		*ret = enq_ret;
> +
> +-	if (!priv->ring[ring].need_dequeue)
> +-		safexcel_dequeue(priv, ring);
> ++	queue_work(priv->ring[ring].workqueue,
> ++		   &priv->ring[ring].work_data.work);
> +
> + 	*should_complete = false;
> +
> +@@ -488,8 +488,8 @@ static int safexcel_ahash_exit_inv(struct crypto_tfm *tfm)
> + 	crypto_enqueue_request(&priv->ring[ring].queue, &req->base);
> + 	spin_unlock_bh(&priv->ring[ring].queue_lock);
> +
> +-	if (!priv->ring[ring].need_dequeue)
> +-		safexcel_dequeue(priv, ring);
> ++	queue_work(priv->ring[ring].workqueue,
> ++		   &priv->ring[ring].work_data.work);
> +
> + 	wait_for_completion(&result.completion);
> +
> +@@ -564,8 +564,8 @@ static int safexcel_ahash_enqueue(struct ahash_request *areq)
> + 	ret = crypto_enqueue_request(&priv->ring[ring].queue, &areq->base);
> + 	spin_unlock_bh(&priv->ring[ring].queue_lock);
> +
> +-	if (!priv->ring[ring].need_dequeue)
> +-		safexcel_dequeue(priv, ring);
> ++	queue_work(priv->ring[ring].workqueue,
> ++		   &priv->ring[ring].work_data.work);
> +
> + 	return ret;
> + }
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/610-crypto-inside-secure-use-threaded-IRQs-for-result-ha.patch b/target/linux/mvebu/patches-4.14/610-crypto-inside-secure-use-threaded-IRQs-for-result-ha.patch
> new file mode 100644
> index 0000000000..968c9107c8
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/610-crypto-inside-secure-use-threaded-IRQs-for-result-ha.patch
> @@ -0,0 +1,136 @@
> +From d9ed66ebd6731cde146ae4ff47965c91e05e9267 Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:52 +0100
> +Subject: [PATCH 11/36] crypto: inside-secure - use threaded IRQs for result
> + handling
> +
> +This patch moves the result handling from a hard IRQ handler to a
> +threaded IRQ handler, to increase the number of completed requests
> +handled at once.
> +
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 41 ++++++++++++++++++---------------
> + drivers/crypto/inside-secure/safexcel.h |  1 -
> + 2 files changed, 22 insertions(+), 20 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 9043ab8c98cb..4931d21f63f7 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -627,15 +627,6 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + 	}
> + }
> +
> +-static void safexcel_handle_result_work(struct work_struct *work)
> +-{
> +-	struct safexcel_work_data *data =
> +-			container_of(work, struct safexcel_work_data, result_work);
> +-	struct safexcel_crypto_priv *priv = data->priv;
> +-
> +-	safexcel_handle_result_descriptor(priv, data->ring);
> +-}
> +-
> + static void safexcel_dequeue_work(struct work_struct *work)
> + {
> + 	struct safexcel_work_data *data =
> +@@ -653,12 +644,12 @@ static irqreturn_t safexcel_irq_ring(int irq, void *data)
> + {
> + 	struct safexcel_ring_irq_data *irq_data = data;
> + 	struct safexcel_crypto_priv *priv = irq_data->priv;
> +-	int ring = irq_data->ring;
> ++	int ring = irq_data->ring, rc = IRQ_NONE;
> + 	u32 status, stat;
> +
> + 	status = readl(priv->base + EIP197_HIA_AIC_R_ENABLED_STAT(ring));
> + 	if (!status)
> +-		return IRQ_NONE;
> ++		return rc;
> +
> + 	/* RDR interrupts */
> + 	if (status & EIP197_RDR_IRQ(ring)) {
> +@@ -672,10 +663,7 @@ static irqreturn_t safexcel_irq_ring(int irq, void *data)
> + 			 */
> + 			dev_err(priv->dev, "RDR: fatal error.");
> + 		} else if (likely(stat & EIP197_xDR_THRESH)) {
> +-			queue_work(priv->ring[ring].workqueue,
> +-				   &priv->ring[ring].work_data.result_work);
> +-			queue_work(priv->ring[ring].workqueue,
> +-				   &priv->ring[ring].work_data.work);
> ++			rc = IRQ_WAKE_THREAD;
> + 		}
> +
> + 		/* ACK the interrupts */
> +@@ -686,11 +674,26 @@ static irqreturn_t safexcel_irq_ring(int irq, void *data)
> + 	/* ACK the interrupts */
> + 	writel(status, priv->base + EIP197_HIA_AIC_R_ACK(ring));
> +
> ++	return rc;
> ++}
> ++
> ++static irqreturn_t safexcel_irq_ring_thread(int irq, void *data)
> ++{
> ++	struct safexcel_ring_irq_data *irq_data = data;
> ++	struct safexcel_crypto_priv *priv = irq_data->priv;
> ++	int ring = irq_data->ring;
> ++
> ++	safexcel_handle_result_descriptor(priv, ring);
> ++
> ++	queue_work(priv->ring[ring].workqueue,
> ++		   &priv->ring[ring].work_data.work);
> ++
> + 	return IRQ_HANDLED;
> + }
> +
> + static int safexcel_request_ring_irq(struct platform_device *pdev, const char *name,
> + 				     irq_handler_t handler,
> ++				     irq_handler_t threaded_handler,
> + 				     struct safexcel_ring_irq_data *ring_irq_priv)
> + {
> + 	int ret, irq = platform_get_irq_byname(pdev, name);
> +@@ -700,8 +703,9 @@ static int safexcel_request_ring_irq(struct platform_device *pdev, const char *n
> + 		return irq;
> + 	}
> +
> +-	ret = devm_request_irq(&pdev->dev, irq, handler, 0,
> +-			       dev_name(&pdev->dev), ring_irq_priv);
> ++	ret = devm_request_threaded_irq(&pdev->dev, irq, handler,
> ++					threaded_handler, IRQF_ONESHOT,
> ++					dev_name(&pdev->dev), ring_irq_priv);
> + 	if (ret) {
> + 		dev_err(&pdev->dev, "unable to request IRQ %d\n", irq);
> + 		return ret;
> +@@ -848,6 +852,7 @@ static int safexcel_probe(struct platform_device *pdev)
> +
> + 		snprintf(irq_name, 6, "ring%d", i);
> + 		irq = safexcel_request_ring_irq(pdev, irq_name, safexcel_irq_ring,
> ++						safexcel_irq_ring_thread,
> + 						ring_irq);
> + 		if (irq < 0) {
> + 			ret = irq;
> +@@ -856,8 +861,6 @@ static int safexcel_probe(struct platform_device *pdev)
> +
> + 		priv->ring[i].work_data.priv = priv;
> + 		priv->ring[i].work_data.ring = i;
> +-		INIT_WORK(&priv->ring[i].work_data.result_work,
> +-			  safexcel_handle_result_work);
> + 		INIT_WORK(&priv->ring[i].work_data.work, safexcel_dequeue_work);
> +
> + 		snprintf(wq_name, 9, "wq_ring%d", i);
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 8e9c65183439..fffddefb0d9b 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -459,7 +459,6 @@ struct safexcel_config {
> +
> + struct safexcel_work_data {
> + 	struct work_struct work;
> +-	struct work_struct result_work;
> + 	struct safexcel_crypto_priv *priv;
> + 	int ring;
> + };
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/611-crypto-inside-secure-dequeue-all-requests-at-once.patch b/target/linux/mvebu/patches-4.14/611-crypto-inside-secure-dequeue-all-requests-at-once.patch
> new file mode 100644
> index 0000000000..4926c22323
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/611-crypto-inside-secure-dequeue-all-requests-at-once.patch
> @@ -0,0 +1,179 @@
> +From 475983c763e5d2090a16abf326dc895dd184d3f0 Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:53 +0100
> +Subject: [PATCH 12/36] crypto: inside-secure - dequeue all requests at once
> +
> +This patch updates the dequeueing logic to dequeue all requests at once.
> +Since we can have many requests in the queue, the interrupt coalescing
> +is kept so that the ring interrupt fires at most once every
> +EIP197_MAX_BATCH_SZ requests.
> +
> +To allow dequeueing all requests at once while still using reasonable
> +settings for the interrupt coalescing, the result handling function was
> +updated to set up the threshold interrupt when needed (i.e. when more
> +requests than EIP197_MAX_BATCH_SZ are in the queue). When using this
> +capability the ring is marked as busy so that the dequeue function can
> +enqueue new requests without setting the threshold interrupt.
> +
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 60 ++++++++++++++++++++++++++-------
> + drivers/crypto/inside-secure/safexcel.h |  8 +++++
> + 2 files changed, 56 insertions(+), 12 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 4931d21f63f7..2b32ca5eafbd 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -422,6 +422,23 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
> + 	return 0;
> + }
> +
> ++/* Called with ring's lock taken */
> ++int safexcel_try_push_requests(struct safexcel_crypto_priv *priv, int ring,
> ++			       int reqs)
> ++{
> ++	int coal = min_t(int, reqs, EIP197_MAX_BATCH_SZ);
> ++
> ++	if (!coal)
> ++		return 0;
> ++
> ++	/* Configure when we want an interrupt */
> ++	writel(EIP197_HIA_RDR_THRESH_PKT_MODE |
> ++	       EIP197_HIA_RDR_THRESH_PROC_PKT(coal),
> ++	       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_THRESH);
> ++
> ++	return coal;
> ++}
> ++
> + void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + {
> + 	struct crypto_async_request *req, *backlog;
> +@@ -429,7 +446,7 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + 	struct safexcel_request *request;
> + 	int ret, nreq = 0, cdesc = 0, rdesc = 0, commands, results;
> +
> +-	do {
> ++	while (true) {
> + 		spin_lock_bh(&priv->ring[ring].queue_lock);
> + 		backlog = crypto_get_backlog(&priv->ring[ring].queue);
> + 		req = crypto_dequeue_request(&priv->ring[ring].queue);
> +@@ -472,18 +489,24 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> +
> + 		cdesc += commands;
> + 		rdesc += results;
> +-	} while (nreq++ < EIP197_MAX_BATCH_SZ);
> ++		nreq++;
> ++	}
> +
> + finalize:
> + 	if (!nreq)
> + 		return;
> +
> +-	spin_lock_bh(&priv->ring[ring].lock);
> ++	spin_lock_bh(&priv->ring[ring].egress_lock);
> +
> +-	/* Configure when we want an interrupt */
> +-	writel(EIP197_HIA_RDR_THRESH_PKT_MODE |
> +-	       EIP197_HIA_RDR_THRESH_PROC_PKT(nreq),
> +-	       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_THRESH);
> ++	if (!priv->ring[ring].busy) {
> ++		nreq -= safexcel_try_push_requests(priv, ring, nreq);
> ++		if (nreq)
> ++			priv->ring[ring].busy = true;
> ++	}
> ++
> ++	priv->ring[ring].requests_left += nreq;
> ++
> ++	spin_unlock_bh(&priv->ring[ring].egress_lock);
> +
> + 	/* let the RDR know we have pending descriptors */
> + 	writel((rdesc * priv->config.rd_offset) << 2,
> +@@ -492,8 +515,6 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + 	/* let the CDR know we have pending descriptors */
> + 	writel((cdesc * priv->config.cd_offset) << 2,
> + 	       priv->base + EIP197_HIA_CDR(ring) + EIP197_HIA_xDR_PREP_COUNT);
> +-
> +-	spin_unlock_bh(&priv->ring[ring].lock);
> + }
> +
> + void safexcel_free_context(struct safexcel_crypto_priv *priv,
> +@@ -588,14 +609,14 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + {
> + 	struct safexcel_request *sreq;
> + 	struct safexcel_context *ctx;
> +-	int ret, i, nreq, ndesc = 0;
> ++	int ret, i, nreq, ndesc = 0, done;
> + 	bool should_complete;
> +
> + 	nreq = readl(priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PROC_COUNT);
> + 	nreq >>= 24;
> + 	nreq &= GENMASK(6, 0);
> + 	if (!nreq)
> +-		return;
> ++		goto requests_left;
> +
> + 	for (i = 0; i < nreq; i++) {
> + 		spin_lock_bh(&priv->ring[ring].egress_lock);
> +@@ -610,7 +631,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + 		if (ndesc < 0) {
> + 			kfree(sreq);
> + 			dev_err(priv->dev, "failed to handle result (%d)", ndesc);
> +-			return;
> ++			goto requests_left;
> + 		}
> +
> + 		writel(EIP197_xDR_PROC_xD_PKT(1) |
> +@@ -625,6 +646,18 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> +
> + 		kfree(sreq);
> + 	}
> ++
> ++requests_left:
> ++	spin_lock_bh(&priv->ring[ring].egress_lock);
> ++
> ++	done = safexcel_try_push_requests(priv, ring,
> ++					  priv->ring[ring].requests_left);
> ++
> ++	priv->ring[ring].requests_left -= done;
> ++	if (!done && !priv->ring[ring].requests_left)
> ++		priv->ring[ring].busy = false;
> ++
> ++	spin_unlock_bh(&priv->ring[ring].egress_lock);
> + }
> +
> + static void safexcel_dequeue_work(struct work_struct *work)
> +@@ -870,6 +903,9 @@ static int safexcel_probe(struct platform_device *pdev)
> + 			goto err_clk;
> + 		}
> +
> ++		priv->ring[i].requests_left = 0;
> ++		priv->ring[i].busy = false;
> ++
> + 		crypto_init_queue(&priv->ring[i].queue,
> + 				  EIP197_DEFAULT_RING_SIZE);
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index fffddefb0d9b..531e3e9d8384 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -489,6 +489,14 @@ struct safexcel_crypto_priv {
> + 		/* queue */
> + 		struct crypto_queue queue;
> + 		spinlock_t queue_lock;
> ++
> ++		/* Number of requests in the engine that needs the threshold
> ++		 * interrupt to be set up.
> ++		 */
> ++		int requests_left;
> ++
> ++		/* The ring is currently handling at least one request */
> ++		bool busy;
> + 	} ring[EIP197_MAX_RINGS];
> + };
> +
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/612-crypto-inside-secure-increase-the-ring-size.patch b/target/linux/mvebu/patches-4.14/612-crypto-inside-secure-increase-the-ring-size.patch
> new file mode 100644
> index 0000000000..7b5704fa6b
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/612-crypto-inside-secure-increase-the-ring-size.patch
> @@ -0,0 +1,37 @@
> +From 0960fa4f79857635412051c61c9fb451a91e79b8 Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:54 +0100
> +Subject: [PATCH 13/36] crypto: inside-secure - increase the ring size
> +
> +Increase the ring size to handle more requests in parallel, while
> +keeping the batch size (for interrupt coalescing) to its previous value.
> +The ring size and batch size are now unlinked.
> +
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.h | 4 ++--
> + 1 file changed, 2 insertions(+), 2 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 531e3e9d8384..2a0ab6ce716a 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -19,11 +19,11 @@
> + #define EIP197_HIA_VERSION_BE			0x35ca
> +
> + /* Static configuration */
> +-#define EIP197_DEFAULT_RING_SIZE		64
> ++#define EIP197_DEFAULT_RING_SIZE		400
> + #define EIP197_MAX_TOKENS			5
> + #define EIP197_MAX_RINGS			4
> + #define EIP197_FETCH_COUNT			1
> +-#define EIP197_MAX_BATCH_SZ			EIP197_DEFAULT_RING_SIZE
> ++#define EIP197_MAX_BATCH_SZ			64
> +
> + #define EIP197_GFP_FLAGS(base)	((base).flags & CRYPTO_TFM_REQ_MAY_SLEEP ? \
> + 				 GFP_KERNEL : GFP_ATOMIC)
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/613-crypto-inside-secure-acknowledge-the-result-requests.patch b/target/linux/mvebu/patches-4.14/613-crypto-inside-secure-acknowledge-the-result-requests.patch
> new file mode 100644
> index 0000000000..ace53613ee
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/613-crypto-inside-secure-acknowledge-the-result-requests.patch
> @@ -0,0 +1,62 @@
> +From 28251290d2f7f45e9fc8c9a45d6e60bd5954c78f Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:55 +0100
> +Subject: [PATCH 14/36] crypto: inside-secure - acknowledge the result requests
> + all at once
> +
> +This patch moves the result request acknowledgment from a per-request
> +process to acknowledging all the handled result requests at once.
> +
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 16 ++++++++++------
> + 1 file changed, 10 insertions(+), 6 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 2b32ca5eafbd..af79bc751ad1 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -609,7 +609,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + {
> + 	struct safexcel_request *sreq;
> + 	struct safexcel_context *ctx;
> +-	int ret, i, nreq, ndesc = 0, done;
> ++	int ret, i, nreq, ndesc = 0, tot_descs = 0, done;
> + 	bool should_complete;
> +
> + 	nreq = readl(priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PROC_COUNT);
> +@@ -631,13 +631,9 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + 		if (ndesc < 0) {
> + 			kfree(sreq);
> + 			dev_err(priv->dev, "failed to handle result (%d)", ndesc);
> +-			goto requests_left;
> ++			goto acknowledge;
> + 		}
> +
> +-		writel(EIP197_xDR_PROC_xD_PKT(1) |
> +-		       EIP197_xDR_PROC_xD_COUNT(ndesc * priv->config.rd_offset),
> +-		       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PROC_COUNT);
> +-
> + 		if (should_complete) {
> + 			local_bh_disable();
> + 			sreq->req->complete(sreq->req, ret);
> +@@ -645,6 +641,14 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + 		}
> +
> + 		kfree(sreq);
> ++		tot_descs += ndesc;
> ++	}
> ++
> ++acknowledge:
> ++	if (i) {
> ++		writel(EIP197_xDR_PROC_xD_PKT(i) |
> ++		       EIP197_xDR_PROC_xD_COUNT(tot_descs * priv->config.rd_offset),
> ++		       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PROC_COUNT);
> + 	}
> +
> + requests_left:
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/614-crypto-inside-secure-handle-more-result-requests-whe.patch b/target/linux/mvebu/patches-4.14/614-crypto-inside-secure-handle-more-result-requests-whe.patch
> new file mode 100644
> index 0000000000..c90e1401f5
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/614-crypto-inside-secure-handle-more-result-requests-whe.patch
> @@ -0,0 +1,70 @@
> +From c63ff238eca83cae2e1f69c2ca350d2c1879bcdb Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:56 +0100
> +Subject: [PATCH 15/36] crypto: inside-secure - handle more result requests
> + when counter is full
> +
> +This patch modifies the result handling logic to continue handling
> +results when the completed-requests counter has saturated and no longer
> +shows the actual number of requests to handle.
> +
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 15 ++++++++++++---
> + drivers/crypto/inside-secure/safexcel.h |  2 ++
> + 2 files changed, 14 insertions(+), 3 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index af79bc751ad1..dec1925cf0ad 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -609,12 +609,15 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + {
> + 	struct safexcel_request *sreq;
> + 	struct safexcel_context *ctx;
> +-	int ret, i, nreq, ndesc = 0, tot_descs = 0, done;
> ++	int ret, i, nreq, ndesc, tot_descs, done;
> + 	bool should_complete;
> +
> ++handle_results:
> ++	tot_descs = 0;
> ++
> + 	nreq = readl(priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PROC_COUNT);
> +-	nreq >>= 24;
> +-	nreq &= GENMASK(6, 0);
> ++	nreq >>= EIP197_xDR_PROC_xD_PKT_OFFSET;
> ++	nreq &= EIP197_xDR_PROC_xD_PKT_MASK;
> + 	if (!nreq)
> + 		goto requests_left;
> +
> +@@ -651,6 +654,12 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + 		       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PROC_COUNT);
> + 	}
> +
> ++	/* If the number of requests overflowed the counter, try to proceed more
> ++	 * requests.
> ++	 */
> ++	if (nreq == EIP197_xDR_PROC_xD_PKT_MASK)
> ++		goto handle_results;
> ++
> + requests_left:
> + 	spin_lock_bh(&priv->ring[ring].egress_lock);
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 2a0ab6ce716a..0c47e792192d 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -117,6 +117,8 @@
> + #define EIP197_xDR_PREP_CLR_COUNT		BIT(31)
> +
> + /* EIP197_HIA_xDR_PROC_COUNT */
> ++#define EIP197_xDR_PROC_xD_PKT_OFFSET		24
> ++#define EIP197_xDR_PROC_xD_PKT_MASK		GENMASK(6, 0)
> + #define EIP197_xDR_PROC_xD_COUNT(n)		((n) << 2)
> + #define EIP197_xDR_PROC_xD_PKT(n)		((n) << 24)
> + #define EIP197_xDR_PROC_CLR_COUNT		BIT(31)
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/615-crypto-inside-secure-retry-to-proceed-the-request-la.patch b/target/linux/mvebu/patches-4.14/615-crypto-inside-secure-retry-to-proceed-the-request-la.patch
> new file mode 100644
> index 0000000000..cfe901e83e
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/615-crypto-inside-secure-retry-to-proceed-the-request-la.patch
> @@ -0,0 +1,103 @@
> +From 929a581f8afd82685be4433cf39bcaa70606adce Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:57 +0100
> +Subject: [PATCH 16/36] crypto: inside-secure - retry to proceed the request
> + later on fail
> +
> +The dequeueing function was putting a request back in the crypto queue
> +on failure (when not enough resources are available), which is not
> +ideal as the request will then be handled much later. This patch
> +updates the logic to keep a reference to the failed request and retry
> +processing it later, once enough resources are available.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 32 +++++++++++++++++++++++---------
> + drivers/crypto/inside-secure/safexcel.h |  6 ++++++
> + 2 files changed, 29 insertions(+), 9 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index dec1925cf0ad..0c0199a65337 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -446,29 +446,36 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + 	struct safexcel_request *request;
> + 	int ret, nreq = 0, cdesc = 0, rdesc = 0, commands, results;
> +
> ++	/* If a request wasn't properly dequeued because of a lack of resources,
> ++	 * proceeded it first,
> ++	 */
> ++	req = priv->ring[ring].req;
> ++	backlog = priv->ring[ring].backlog;
> ++	if (req)
> ++		goto handle_req;
> ++
> + 	while (true) {
> + 		spin_lock_bh(&priv->ring[ring].queue_lock);
> + 		backlog = crypto_get_backlog(&priv->ring[ring].queue);
> + 		req = crypto_dequeue_request(&priv->ring[ring].queue);
> + 		spin_unlock_bh(&priv->ring[ring].queue_lock);
> +
> +-		if (!req)
> ++		if (!req) {
> ++			priv->ring[ring].req = NULL;
> ++			priv->ring[ring].backlog = NULL;
> + 			goto finalize;
> ++		}
> +
> ++handle_req:
> + 		request = kzalloc(sizeof(*request), EIP197_GFP_FLAGS(*req));
> +-		if (!request) {
> +-			spin_lock_bh(&priv->ring[ring].queue_lock);
> +-			crypto_enqueue_request(&priv->ring[ring].queue, req);
> +-			spin_unlock_bh(&priv->ring[ring].queue_lock);
> +-			goto finalize;
> +-		}
> ++		if (!request)
> ++			goto request_failed;
> +
> + 		ctx = crypto_tfm_ctx(req->tfm);
> + 		ret = ctx->send(req, ring, request, &commands, &results);
> + 		if (ret) {
> + 			kfree(request);
> +-			req->complete(req, ret);
> +-			goto finalize;
> ++			goto request_failed;
> + 		}
> +
> + 		if (backlog)
> +@@ -492,6 +499,13 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + 		nreq++;
> + 	}
> +
> ++request_failed:
> ++	/* Not enough resources to handle all the requests. Bail out and save
> ++	 * the request and the backlog for the next dequeue call (per-ring).
> ++	 */
> ++	priv->ring[ring].req = req;
> ++	priv->ring[ring].backlog = backlog;
> ++
> + finalize:
> + 	if (!nreq)
> + 		return;
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 0c47e792192d..d4955abf873b 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -499,6 +499,12 @@ struct safexcel_crypto_priv {
> +
> + 		/* The ring is currently handling at least one request */
> + 		bool busy;
> ++
> ++		/* Store for current requests when bailing out of the dequeueing
> ++		 * function when no enough resources are available.
> ++		 */
> ++		struct crypto_async_request *req;
> ++		struct crypto_async_request *backlog;
> + 	} ring[EIP197_MAX_RINGS];
> + };
> +
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/616-crypto-inside-secure-EIP97-support.patch b/target/linux/mvebu/patches-4.14/616-crypto-inside-secure-EIP97-support.patch
> new file mode 100644
> index 0000000000..e9e36099e9
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/616-crypto-inside-secure-EIP97-support.patch
> @@ -0,0 +1,841 @@
> +From 90f63fbb6616deae34fc588b4227a03578c5852a Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Antoine=20T=C3=A9nart?= <antoine.tenart at free-electrons.com>
> +Date: Thu, 14 Dec 2017 15:26:58 +0100
> +Subject: [PATCH 17/36] crypto: inside-secure - EIP97 support
> +
> +The Inside Secure SafeXcel driver was first designed to support the
> +EIP197 cryptographic engine, which is an evolution (with more features
> +and better performance) of the EIP97 cryptographic engine. This
> +patch converts the Inside Secure SafeXcel driver to support both engines
> +(EIP97 + EIP197).
> +
> +The main differences are the register offsets and the context
> +invalidation process which is EIP197 specific. This patch adds an
> +indirection on the register offsets and adds checks not to send any
> +invalidation request when driving the EIP97. A new compatible is added
> +as well to bind the driver from device trees.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c        | 212 +++++++++++++++----------
> + drivers/crypto/inside-secure/safexcel.h        | 151 ++++++++++++------
> + drivers/crypto/inside-secure/safexcel_cipher.c |  20 ++-
> + drivers/crypto/inside-secure/safexcel_hash.c   |  19 ++-
> + 4 files changed, 264 insertions(+), 138 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 0c0199a65337..b0787f5f62ad 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -108,10 +108,10 @@ static void eip197_write_firmware(struct safexcel_crypto_priv *priv,
> + 	writel(EIP197_PE_ICE_x_CTRL_SW_RESET |
> + 	       EIP197_PE_ICE_x_CTRL_CLR_ECC_CORR |
> + 	       EIP197_PE_ICE_x_CTRL_CLR_ECC_NON_CORR,
> +-	       priv->base + ctrl);
> ++	       EIP197_PE(priv) + ctrl);
> +
> + 	/* Enable access to the program memory */
> +-	writel(prog_en, priv->base + EIP197_PE_ICE_RAM_CTRL);
> ++	writel(prog_en, EIP197_PE(priv) + EIP197_PE_ICE_RAM_CTRL);
> +
> + 	/* Write the firmware */
> + 	for (i = 0; i < fw->size / sizeof(u32); i++)
> +@@ -119,12 +119,12 @@ static void eip197_write_firmware(struct safexcel_crypto_priv *priv,
> + 		       priv->base + EIP197_CLASSIFICATION_RAMS + i * sizeof(u32));
> +
> + 	/* Disable access to the program memory */
> +-	writel(0, priv->base + EIP197_PE_ICE_RAM_CTRL);
> ++	writel(0, EIP197_PE(priv) + EIP197_PE_ICE_RAM_CTRL);
> +
> + 	/* Release engine from reset */
> +-	val = readl(priv->base + ctrl);
> ++	val = readl(EIP197_PE(priv) + ctrl);
> + 	val &= ~EIP197_PE_ICE_x_CTRL_SW_RESET;
> +-	writel(val, priv->base + ctrl);
> ++	writel(val, EIP197_PE(priv) + ctrl);
> + }
> +
> + static int eip197_load_firmwares(struct safexcel_crypto_priv *priv)
> +@@ -145,14 +145,14 @@ static int eip197_load_firmwares(struct safexcel_crypto_priv *priv)
> + 	 }
> +
> + 	/* Clear the scratchpad memory */
> +-	val = readl(priv->base + EIP197_PE_ICE_SCRATCH_CTRL);
> ++	val = readl(EIP197_PE(priv) + EIP197_PE_ICE_SCRATCH_CTRL);
> + 	val |= EIP197_PE_ICE_SCRATCH_CTRL_CHANGE_TIMER |
> + 	       EIP197_PE_ICE_SCRATCH_CTRL_TIMER_EN |
> + 	       EIP197_PE_ICE_SCRATCH_CTRL_SCRATCH_ACCESS |
> + 	       EIP197_PE_ICE_SCRATCH_CTRL_CHANGE_ACCESS;
> +-	writel(val, priv->base + EIP197_PE_ICE_SCRATCH_CTRL);
> ++	writel(val, EIP197_PE(priv) + EIP197_PE_ICE_SCRATCH_CTRL);
> +
> +-	memset(priv->base + EIP197_PE_ICE_SCRATCH_RAM, 0,
> ++	memset(EIP197_PE(priv) + EIP197_PE_ICE_SCRATCH_RAM, 0,
> + 	       EIP197_NUM_OF_SCRATCH_BLOCKS * sizeof(u32));
> +
> + 	eip197_write_firmware(priv, fw[FW_IFPP], EIP197_PE_ICE_FPP_CTRL,
> +@@ -173,7 +173,7 @@ static int safexcel_hw_setup_cdesc_rings(struct safexcel_crypto_priv *priv)
> + 	u32 hdw, cd_size_rnd, val;
> + 	int i;
> +
> +-	hdw = readl(priv->base + EIP197_HIA_OPTIONS);
> ++	hdw = readl(EIP197_HIA_AIC_G(priv) + EIP197_HIA_OPTIONS);
> + 	hdw &= GENMASK(27, 25);
> + 	hdw >>= 25;
> +
> +@@ -182,26 +182,25 @@ static int safexcel_hw_setup_cdesc_rings(struct safexcel_crypto_priv *priv)
> + 	for (i = 0; i < priv->config.rings; i++) {
> + 		/* ring base address */
> + 		writel(lower_32_bits(priv->ring[i].cdr.base_dma),
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_RING_BASE_ADDR_LO);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_RING_BASE_ADDR_LO);
> + 		writel(upper_32_bits(priv->ring[i].cdr.base_dma),
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_RING_BASE_ADDR_HI);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_RING_BASE_ADDR_HI);
> +
> + 		writel(EIP197_xDR_DESC_MODE_64BIT | (priv->config.cd_offset << 16) |
> + 		       priv->config.cd_size,
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_DESC_SIZE);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_DESC_SIZE);
> + 		writel(((EIP197_FETCH_COUNT * (cd_size_rnd << hdw)) << 16) |
> + 		       (EIP197_FETCH_COUNT * priv->config.cd_offset),
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_CFG);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_CFG);
> +
> + 		/* Configure DMA tx control */
> + 		val = EIP197_HIA_xDR_CFG_WR_CACHE(WR_CACHE_3BITS);
> + 		val |= EIP197_HIA_xDR_CFG_RD_CACHE(RD_CACHE_3BITS);
> +-		writel(val,
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_DMA_CFG);
> ++		writel(val, EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_DMA_CFG);
> +
> + 		/* clear any pending interrupt */
> + 		writel(GENMASK(5, 0),
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_STAT);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_STAT);
> + 	}
> +
> + 	return 0;
> +@@ -212,7 +211,7 @@ static int safexcel_hw_setup_rdesc_rings(struct safexcel_crypto_priv *priv)
> + 	u32 hdw, rd_size_rnd, val;
> + 	int i;
> +
> +-	hdw = readl(priv->base + EIP197_HIA_OPTIONS);
> ++	hdw = readl(EIP197_HIA_AIC_G(priv) + EIP197_HIA_OPTIONS);
> + 	hdw &= GENMASK(27, 25);
> + 	hdw >>= 25;
> +
> +@@ -221,33 +220,33 @@ static int safexcel_hw_setup_rdesc_rings(struct safexcel_crypto_priv *priv)
> + 	for (i = 0; i < priv->config.rings; i++) {
> + 		/* ring base address */
> + 		writel(lower_32_bits(priv->ring[i].rdr.base_dma),
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_RING_BASE_ADDR_LO);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_RING_BASE_ADDR_LO);
> + 		writel(upper_32_bits(priv->ring[i].rdr.base_dma),
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_RING_BASE_ADDR_HI);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_RING_BASE_ADDR_HI);
> +
> + 		writel(EIP197_xDR_DESC_MODE_64BIT | (priv->config.rd_offset << 16) |
> + 		       priv->config.rd_size,
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_DESC_SIZE);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_DESC_SIZE);
> +
> + 		writel(((EIP197_FETCH_COUNT * (rd_size_rnd << hdw)) << 16) |
> + 		       (EIP197_FETCH_COUNT * priv->config.rd_offset),
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_CFG);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_CFG);
> +
> + 		/* Configure DMA tx control */
> + 		val = EIP197_HIA_xDR_CFG_WR_CACHE(WR_CACHE_3BITS);
> + 		val |= EIP197_HIA_xDR_CFG_RD_CACHE(RD_CACHE_3BITS);
> + 		val |= EIP197_HIA_xDR_WR_RES_BUF | EIP197_HIA_xDR_WR_CTRL_BUG;
> + 		writel(val,
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_DMA_CFG);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_DMA_CFG);
> +
> + 		/* clear any pending interrupt */
> + 		writel(GENMASK(7, 0),
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_STAT);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_STAT);
> +
> + 		/* enable ring interrupt */
> +-		val = readl(priv->base + EIP197_HIA_AIC_R_ENABLE_CTRL(i));
> ++		val = readl(EIP197_HIA_AIC_R(priv) + EIP197_HIA_AIC_R_ENABLE_CTRL(i));
> + 		val |= EIP197_RDR_IRQ(i);
> +-		writel(val, priv->base + EIP197_HIA_AIC_R_ENABLE_CTRL(i));
> ++		writel(val, EIP197_HIA_AIC_R(priv) + EIP197_HIA_AIC_R_ENABLE_CTRL(i));
> + 	}
> +
> + 	return 0;
> +@@ -259,39 +258,40 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
> + 	int i, ret;
> +
> + 	/* Determine endianess and configure byte swap */
> +-	version = readl(priv->base + EIP197_HIA_VERSION);
> +-	val = readl(priv->base + EIP197_HIA_MST_CTRL);
> ++	version = readl(EIP197_HIA_AIC(priv) + EIP197_HIA_VERSION);
> ++	val = readl(EIP197_HIA_AIC(priv) + EIP197_HIA_MST_CTRL);
> +
> + 	if ((version & 0xffff) == EIP197_HIA_VERSION_BE)
> + 		val |= EIP197_MST_CTRL_BYTE_SWAP;
> + 	else if (((version >> 16) & 0xffff) == EIP197_HIA_VERSION_LE)
> + 		val |= (EIP197_MST_CTRL_NO_BYTE_SWAP >> 24);
> +
> +-	writel(val, priv->base + EIP197_HIA_MST_CTRL);
> +-
> ++	writel(val, EIP197_HIA_AIC(priv) + EIP197_HIA_MST_CTRL);
> +
> + 	/* Configure wr/rd cache values */
> + 	writel(EIP197_MST_CTRL_RD_CACHE(RD_CACHE_4BITS) |
> + 	       EIP197_MST_CTRL_WD_CACHE(WR_CACHE_4BITS),
> +-	       priv->base + EIP197_MST_CTRL);
> ++	       EIP197_HIA_GEN_CFG(priv) + EIP197_MST_CTRL);
> +
> + 	/* Interrupts reset */
> +
> + 	/* Disable all global interrupts */
> +-	writel(0, priv->base + EIP197_HIA_AIC_G_ENABLE_CTRL);
> ++	writel(0, EIP197_HIA_AIC_G(priv) + EIP197_HIA_AIC_G_ENABLE_CTRL);
> +
> + 	/* Clear any pending interrupt */
> +-	writel(GENMASK(31, 0), priv->base + EIP197_HIA_AIC_G_ACK);
> ++	writel(GENMASK(31, 0), EIP197_HIA_AIC_G(priv) + EIP197_HIA_AIC_G_ACK);
> +
> + 	/* Data Fetch Engine configuration */
> +
> + 	/* Reset all DFE threads */
> + 	writel(EIP197_DxE_THR_CTRL_RESET_PE,
> +-	       priv->base + EIP197_HIA_DFE_THR_CTRL);
> ++	       EIP197_HIA_DFE_THR(priv) + EIP197_HIA_DFE_THR_CTRL);
> +
> +-	/* Reset HIA input interface arbiter */
> +-	writel(EIP197_HIA_RA_PE_CTRL_RESET,
> +-	       priv->base + EIP197_HIA_RA_PE_CTRL);
> ++	if (priv->version == EIP197) {
> ++		/* Reset HIA input interface arbiter */
> ++		writel(EIP197_HIA_RA_PE_CTRL_RESET,
> ++		       EIP197_HIA_AIC(priv) + EIP197_HIA_RA_PE_CTRL);
> ++	}
> +
> + 	/* DMA transfer size to use */
> + 	val = EIP197_HIA_DFE_CFG_DIS_DEBUG;
> +@@ -299,29 +299,32 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
> + 	val |= EIP197_HIA_DxE_CFG_MIN_CTRL_SIZE(5) | EIP197_HIA_DxE_CFG_MAX_CTRL_SIZE(7);
> + 	val |= EIP197_HIA_DxE_CFG_DATA_CACHE_CTRL(RD_CACHE_3BITS);
> + 	val |= EIP197_HIA_DxE_CFG_CTRL_CACHE_CTRL(RD_CACHE_3BITS);
> +-	writel(val, priv->base + EIP197_HIA_DFE_CFG);
> ++	writel(val, EIP197_HIA_DFE(priv) + EIP197_HIA_DFE_CFG);
> +
> + 	/* Leave the DFE threads reset state */
> +-	writel(0, priv->base + EIP197_HIA_DFE_THR_CTRL);
> ++	writel(0, EIP197_HIA_DFE_THR(priv) + EIP197_HIA_DFE_THR_CTRL);
> +
> + 	/* Configure the procesing engine thresholds */
> + 	writel(EIP197_PE_IN_xBUF_THRES_MIN(5) | EIP197_PE_IN_xBUF_THRES_MAX(9),
> +-	      priv->base + EIP197_PE_IN_DBUF_THRES);
> ++	       EIP197_PE(priv) + EIP197_PE_IN_DBUF_THRES);
> + 	writel(EIP197_PE_IN_xBUF_THRES_MIN(5) | EIP197_PE_IN_xBUF_THRES_MAX(7),
> +-	      priv->base + EIP197_PE_IN_TBUF_THRES);
> ++	       EIP197_PE(priv) + EIP197_PE_IN_TBUF_THRES);
> +
> +-	/* enable HIA input interface arbiter and rings */
> +-	writel(EIP197_HIA_RA_PE_CTRL_EN | GENMASK(priv->config.rings - 1, 0),
> +-	       priv->base + EIP197_HIA_RA_PE_CTRL);
> ++	if (priv->version == EIP197) {
> ++		/* enable HIA input interface arbiter and rings */
> ++		writel(EIP197_HIA_RA_PE_CTRL_EN |
> ++		       GENMASK(priv->config.rings - 1, 0),
> ++		       EIP197_HIA_AIC(priv) + EIP197_HIA_RA_PE_CTRL);
> ++	}
> +
> + 	/* Data Store Engine configuration */
> +
> + 	/* Reset all DSE threads */
> + 	writel(EIP197_DxE_THR_CTRL_RESET_PE,
> +-	       priv->base + EIP197_HIA_DSE_THR_CTRL);
> ++	       EIP197_HIA_DSE_THR(priv) + EIP197_HIA_DSE_THR_CTRL);
> +
> + 	/* Wait for all DSE threads to complete */
> +-	while ((readl(priv->base + EIP197_HIA_DSE_THR_STAT) &
> ++	while ((readl(EIP197_HIA_DSE_THR(priv) + EIP197_HIA_DSE_THR_STAT) &
> + 		GENMASK(15, 12)) != GENMASK(15, 12))
> + 		;
> +
> +@@ -330,15 +333,19 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
> + 	val |= EIP197_HIA_DxE_CFG_MIN_DATA_SIZE(7) | EIP197_HIA_DxE_CFG_MAX_DATA_SIZE(8);
> + 	val |= EIP197_HIA_DxE_CFG_DATA_CACHE_CTRL(WR_CACHE_3BITS);
> + 	val |= EIP197_HIA_DSE_CFG_ALLWAYS_BUFFERABLE;
> +-	val |= EIP197_HIA_DSE_CFG_EN_SINGLE_WR;
> +-	writel(val, priv->base + EIP197_HIA_DSE_CFG);
> ++	/* FIXME: instability issues can occur for EIP97 but disabling it impacts
> ++	 * performance.
> ++	 */
> ++	if (priv->version == EIP197)
> ++		val |= EIP197_HIA_DSE_CFG_EN_SINGLE_WR;
> ++	writel(val, EIP197_HIA_DSE(priv) + EIP197_HIA_DSE_CFG);
> +
> + 	/* Leave the DSE threads reset state */
> +-	writel(0, priv->base + EIP197_HIA_DSE_THR_CTRL);
> ++	writel(0, EIP197_HIA_DSE_THR(priv) + EIP197_HIA_DSE_THR_CTRL);
> +
> + 	/* Configure the procesing engine thresholds */
> + 	writel(EIP197_PE_OUT_DBUF_THRES_MIN(7) | EIP197_PE_OUT_DBUF_THRES_MAX(8),
> +-	       priv->base + EIP197_PE_OUT_DBUF_THRES);
> ++	       EIP197_PE(priv) + EIP197_PE_OUT_DBUF_THRES);
> +
> + 	/* Processing Engine configuration */
> +
> +@@ -348,73 +355,75 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
> + 	val |= EIP197_ALG_AES_ECB | EIP197_ALG_AES_CBC;
> + 	val |= EIP197_ALG_SHA1 | EIP197_ALG_HMAC_SHA1;
> + 	val |= EIP197_ALG_SHA2;
> +-	writel(val, priv->base + EIP197_PE_EIP96_FUNCTION_EN);
> ++	writel(val, EIP197_PE(priv) + EIP197_PE_EIP96_FUNCTION_EN);
> +
> + 	/* Command Descriptor Rings prepare */
> + 	for (i = 0; i < priv->config.rings; i++) {
> + 		/* Clear interrupts for this ring */
> + 		writel(GENMASK(31, 0),
> +-		       priv->base + EIP197_HIA_AIC_R_ENABLE_CLR(i));
> ++		       EIP197_HIA_AIC_R(priv) + EIP197_HIA_AIC_R_ENABLE_CLR(i));
> +
> + 		/* Disable external triggering */
> +-		writel(0, priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_CFG);
> ++		writel(0, EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_CFG);
> +
> + 		/* Clear the pending prepared counter */
> + 		writel(EIP197_xDR_PREP_CLR_COUNT,
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_PREP_COUNT);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_PREP_COUNT);
> +
> + 		/* Clear the pending processed counter */
> + 		writel(EIP197_xDR_PROC_CLR_COUNT,
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_PROC_COUNT);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_PROC_COUNT);
> +
> + 		writel(0,
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_PREP_PNTR);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_PREP_PNTR);
> + 		writel(0,
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_PROC_PNTR);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_PROC_PNTR);
> +
> + 		writel((EIP197_DEFAULT_RING_SIZE * priv->config.cd_offset) << 2,
> +-		       priv->base + EIP197_HIA_CDR(i) + EIP197_HIA_xDR_RING_SIZE);
> ++		       EIP197_HIA_CDR(priv, i) + EIP197_HIA_xDR_RING_SIZE);
> + 	}
> +
> + 	/* Result Descriptor Ring prepare */
> + 	for (i = 0; i < priv->config.rings; i++) {
> + 		/* Disable external triggering*/
> +-		writel(0, priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_CFG);
> ++		writel(0, EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_CFG);
> +
> + 		/* Clear the pending prepared counter */
> + 		writel(EIP197_xDR_PREP_CLR_COUNT,
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_PREP_COUNT);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_PREP_COUNT);
> +
> + 		/* Clear the pending processed counter */
> + 		writel(EIP197_xDR_PROC_CLR_COUNT,
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_PROC_COUNT);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_PROC_COUNT);
> +
> + 		writel(0,
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_PREP_PNTR);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_PREP_PNTR);
> + 		writel(0,
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_PROC_PNTR);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_PROC_PNTR);
> +
> + 		/* Ring size */
> + 		writel((EIP197_DEFAULT_RING_SIZE * priv->config.rd_offset) << 2,
> +-		       priv->base + EIP197_HIA_RDR(i) + EIP197_HIA_xDR_RING_SIZE);
> ++		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_RING_SIZE);
> + 	}
> +
> + 	/* Enable command descriptor rings */
> + 	writel(EIP197_DxE_THR_CTRL_EN | GENMASK(priv->config.rings - 1, 0),
> +-	       priv->base + EIP197_HIA_DFE_THR_CTRL);
> ++	       EIP197_HIA_DFE_THR(priv) + EIP197_HIA_DFE_THR_CTRL);
> +
> + 	/* Enable result descriptor rings */
> + 	writel(EIP197_DxE_THR_CTRL_EN | GENMASK(priv->config.rings - 1, 0),
> +-	       priv->base + EIP197_HIA_DSE_THR_CTRL);
> ++	       EIP197_HIA_DSE_THR(priv) + EIP197_HIA_DSE_THR_CTRL);
> +
> + 	/* Clear any HIA interrupt */
> +-	writel(GENMASK(30, 20), priv->base + EIP197_HIA_AIC_G_ACK);
> ++	writel(GENMASK(30, 20), EIP197_HIA_AIC_G(priv) + EIP197_HIA_AIC_G_ACK);
> +
> +-	eip197_trc_cache_init(priv);
> ++	if (priv->version == EIP197) {
> ++		eip197_trc_cache_init(priv);
> +
> +-	ret = eip197_load_firmwares(priv);
> +-	if (ret)
> +-		return ret;
> ++		ret = eip197_load_firmwares(priv);
> ++		if (ret)
> ++			return ret;
> ++	}
> +
> + 	safexcel_hw_setup_cdesc_rings(priv);
> + 	safexcel_hw_setup_rdesc_rings(priv);
> +@@ -434,7 +443,7 @@ int safexcel_try_push_requests(struct safexcel_crypto_priv *priv, int ring,
> + 	/* Configure when we want an interrupt */
> + 	writel(EIP197_HIA_RDR_THRESH_PKT_MODE |
> + 	       EIP197_HIA_RDR_THRESH_PROC_PKT(coal),
> +-	       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_THRESH);
> ++	       EIP197_HIA_RDR(priv, ring) + EIP197_HIA_xDR_THRESH);
> +
> + 	return coal;
> + }
> +@@ -524,11 +533,11 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> +
> + 	/* let the RDR know we have pending descriptors */
> + 	writel((rdesc * priv->config.rd_offset) << 2,
> +-	       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PREP_COUNT);
> ++	       EIP197_HIA_RDR(priv, ring) + EIP197_HIA_xDR_PREP_COUNT);
> +
> + 	/* let the CDR know we have pending descriptors */
> + 	writel((cdesc * priv->config.cd_offset) << 2,
> +-	       priv->base + EIP197_HIA_CDR(ring) + EIP197_HIA_xDR_PREP_COUNT);
> ++	       EIP197_HIA_CDR(priv, ring) + EIP197_HIA_xDR_PREP_COUNT);
> + }
> +
> + void safexcel_free_context(struct safexcel_crypto_priv *priv,
> +@@ -629,7 +638,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + handle_results:
> + 	tot_descs = 0;
> +
> +-	nreq = readl(priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PROC_COUNT);
> ++	nreq = readl(EIP197_HIA_RDR(priv, ring) + EIP197_HIA_xDR_PROC_COUNT);
> + 	nreq >>= EIP197_xDR_PROC_xD_PKT_OFFSET;
> + 	nreq &= EIP197_xDR_PROC_xD_PKT_MASK;
> + 	if (!nreq)
> +@@ -665,7 +674,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + 	if (i) {
> + 		writel(EIP197_xDR_PROC_xD_PKT(i) |
> + 		       EIP197_xDR_PROC_xD_COUNT(tot_descs * priv->config.rd_offset),
> +-		       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_PROC_COUNT);
> ++		       EIP197_HIA_RDR(priv, ring) + EIP197_HIA_xDR_PROC_COUNT);
> + 	}
> +
> + 	/* If the number of requests overflowed the counter, try to proceed more
> +@@ -707,13 +716,13 @@ static irqreturn_t safexcel_irq_ring(int irq, void *data)
> + 	int ring = irq_data->ring, rc = IRQ_NONE;
> + 	u32 status, stat;
> +
> +-	status = readl(priv->base + EIP197_HIA_AIC_R_ENABLED_STAT(ring));
> ++	status = readl(EIP197_HIA_AIC_R(priv) + EIP197_HIA_AIC_R_ENABLED_STAT(ring));
> + 	if (!status)
> + 		return rc;
> +
> + 	/* RDR interrupts */
> + 	if (status & EIP197_RDR_IRQ(ring)) {
> +-		stat = readl(priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_STAT);
> ++		stat = readl(EIP197_HIA_RDR(priv, ring) + EIP197_HIA_xDR_STAT);
> +
> + 		if (unlikely(stat & EIP197_xDR_ERR)) {
> + 			/*
> +@@ -728,11 +737,11 @@ static irqreturn_t safexcel_irq_ring(int irq, void *data)
> +
> + 		/* ACK the interrupts */
> + 		writel(stat & 0xff,
> +-		       priv->base + EIP197_HIA_RDR(ring) + EIP197_HIA_xDR_STAT);
> ++		       EIP197_HIA_RDR(priv, ring) + EIP197_HIA_xDR_STAT);
> + 	}
> +
> + 	/* ACK the interrupts */
> +-	writel(status, priv->base + EIP197_HIA_AIC_R_ACK(ring));
> ++	writel(status, EIP197_HIA_AIC_R(priv) + EIP197_HIA_AIC_R_ACK(ring));
> +
> + 	return rc;
> + }
> +@@ -828,11 +837,11 @@ static void safexcel_configure(struct safexcel_crypto_priv *priv)
> + {
> + 	u32 val, mask;
> +
> +-	val = readl(priv->base + EIP197_HIA_OPTIONS);
> ++	val = readl(EIP197_HIA_AIC_G(priv) + EIP197_HIA_OPTIONS);
> + 	val = (val & GENMASK(27, 25)) >> 25;
> + 	mask = BIT(val) - 1;
> +
> +-	val = readl(priv->base + EIP197_HIA_OPTIONS);
> ++	val = readl(EIP197_HIA_AIC_G(priv) + EIP197_HIA_OPTIONS);
> + 	priv->config.rings = min_t(u32, val & GENMASK(3, 0), max_rings);
> +
> + 	priv->config.cd_size = (sizeof(struct safexcel_command_desc) / sizeof(u32));
> +@@ -842,6 +851,35 @@ static void safexcel_configure(struct safexcel_crypto_priv *priv)
> + 	priv->config.rd_offset = (priv->config.rd_size + mask) & ~mask;
> + }
> +
> ++static void safexcel_init_register_offsets(struct safexcel_crypto_priv *priv)
> ++{
> ++	struct safexcel_register_offsets *offsets = &priv->offsets;
> ++
> ++	if (priv->version == EIP197) {
> ++		offsets->hia_aic	= EIP197_HIA_AIC_BASE;
> ++		offsets->hia_aic_g	= EIP197_HIA_AIC_G_BASE;
> ++		offsets->hia_aic_r	= EIP197_HIA_AIC_R_BASE;
> ++		offsets->hia_aic_xdr	= EIP197_HIA_AIC_xDR_BASE;
> ++		offsets->hia_dfe	= EIP197_HIA_DFE_BASE;
> ++		offsets->hia_dfe_thr	= EIP197_HIA_DFE_THR_BASE;
> ++		offsets->hia_dse	= EIP197_HIA_DSE_BASE;
> ++		offsets->hia_dse_thr	= EIP197_HIA_DSE_THR_BASE;
> ++		offsets->hia_gen_cfg	= EIP197_HIA_GEN_CFG_BASE;
> ++		offsets->pe		= EIP197_PE_BASE;
> ++	} else {
> ++		offsets->hia_aic	= EIP97_HIA_AIC_BASE;
> ++		offsets->hia_aic_g	= EIP97_HIA_AIC_G_BASE;
> ++		offsets->hia_aic_r	= EIP97_HIA_AIC_R_BASE;
> ++		offsets->hia_aic_xdr	= EIP97_HIA_AIC_xDR_BASE;
> ++		offsets->hia_dfe	= EIP97_HIA_DFE_BASE;
> ++		offsets->hia_dfe_thr	= EIP97_HIA_DFE_THR_BASE;
> ++		offsets->hia_dse	= EIP97_HIA_DSE_BASE;
> ++		offsets->hia_dse_thr	= EIP97_HIA_DSE_THR_BASE;
> ++		offsets->hia_gen_cfg	= EIP97_HIA_GEN_CFG_BASE;
> ++		offsets->pe		= EIP97_PE_BASE;
> ++	}
> ++}
> ++
> + static int safexcel_probe(struct platform_device *pdev)
> + {
> + 	struct device *dev = &pdev->dev;
> +@@ -854,6 +892,9 @@ static int safexcel_probe(struct platform_device *pdev)
> + 		return -ENOMEM;
> +
> + 	priv->dev = dev;
> ++	priv->version = (enum safexcel_eip_version)of_device_get_match_data(dev);
> ++
> ++	safexcel_init_register_offsets(priv);
> +
> + 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
> + 	priv->base = devm_ioremap_resource(dev, res);
> +@@ -980,7 +1021,14 @@ static int safexcel_remove(struct platform_device *pdev)
> + }
> +
> + static const struct of_device_id safexcel_of_match_table[] = {
> +-	{ .compatible = "inside-secure,safexcel-eip197" },
> ++	{
> ++		.compatible = "inside-secure,safexcel-eip97",
> ++		.data = (void *)EIP97,
> ++	},
> ++	{
> ++		.compatible = "inside-secure,safexcel-eip197",
> ++		.data = (void *)EIP197,
> ++	},
> + 	{},
> + };
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index d4955abf873b..4e219c21608b 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -28,55 +28,94 @@
> + #define EIP197_GFP_FLAGS(base)	((base).flags & CRYPTO_TFM_REQ_MAY_SLEEP ? \
> + 				 GFP_KERNEL : GFP_ATOMIC)
> +
> ++/* Register base offsets */
> ++#define EIP197_HIA_AIC(priv)		((priv)->base + (priv)->offsets.hia_aic)
> ++#define EIP197_HIA_AIC_G(priv)		((priv)->base + (priv)->offsets.hia_aic_g)
> ++#define EIP197_HIA_AIC_R(priv)		((priv)->base + (priv)->offsets.hia_aic_r)
> ++#define EIP197_HIA_AIC_xDR(priv)	((priv)->base + (priv)->offsets.hia_aic_xdr)
> ++#define EIP197_HIA_DFE(priv)		((priv)->base + (priv)->offsets.hia_dfe)
> ++#define EIP197_HIA_DFE_THR(priv)	((priv)->base + (priv)->offsets.hia_dfe_thr)
> ++#define EIP197_HIA_DSE(priv)		((priv)->base + (priv)->offsets.hia_dse)
> ++#define EIP197_HIA_DSE_THR(priv)	((priv)->base + (priv)->offsets.hia_dse_thr)
> ++#define EIP197_HIA_GEN_CFG(priv)	((priv)->base + (priv)->offsets.hia_gen_cfg)
> ++#define EIP197_PE(priv)			((priv)->base + (priv)->offsets.pe)
> ++
> ++/* EIP197 base offsets */
> ++#define EIP197_HIA_AIC_BASE		0x90000
> ++#define EIP197_HIA_AIC_G_BASE		0x90000
> ++#define EIP197_HIA_AIC_R_BASE		0x90800
> ++#define EIP197_HIA_AIC_xDR_BASE		0x80000
> ++#define EIP197_HIA_DFE_BASE		0x8c000
> ++#define EIP197_HIA_DFE_THR_BASE		0x8c040
> ++#define EIP197_HIA_DSE_BASE		0x8d000
> ++#define EIP197_HIA_DSE_THR_BASE		0x8d040
> ++#define EIP197_HIA_GEN_CFG_BASE		0xf0000
> ++#define EIP197_PE_BASE			0xa0000
> ++
> ++/* EIP97 base offsets */
> ++#define EIP97_HIA_AIC_BASE		0x0
> ++#define EIP97_HIA_AIC_G_BASE		0x0
> ++#define EIP97_HIA_AIC_R_BASE		0x0
> ++#define EIP97_HIA_AIC_xDR_BASE		0x0
> ++#define EIP97_HIA_DFE_BASE		0xf000
> ++#define EIP97_HIA_DFE_THR_BASE		0xf200
> ++#define EIP97_HIA_DSE_BASE		0xf400
> ++#define EIP97_HIA_DSE_THR_BASE		0xf600
> ++#define EIP97_HIA_GEN_CFG_BASE		0x10000
> ++#define EIP97_PE_BASE			0x10000
> ++
> + /* CDR/RDR register offsets */
> +-#define EIP197_HIA_xDR_OFF(r)			(0x80000 + (r) * 0x1000)
> +-#define EIP197_HIA_CDR(r)			(EIP197_HIA_xDR_OFF(r))
> +-#define EIP197_HIA_RDR(r)			(EIP197_HIA_xDR_OFF(r) + 0x800)
> +-#define EIP197_HIA_xDR_RING_BASE_ADDR_LO	0x0
> +-#define EIP197_HIA_xDR_RING_BASE_ADDR_HI	0x4
> +-#define EIP197_HIA_xDR_RING_SIZE		0x18
> +-#define EIP197_HIA_xDR_DESC_SIZE		0x1c
> +-#define EIP197_HIA_xDR_CFG			0x20
> +-#define EIP197_HIA_xDR_DMA_CFG			0x24
> +-#define EIP197_HIA_xDR_THRESH			0x28
> +-#define EIP197_HIA_xDR_PREP_COUNT		0x2c
> +-#define EIP197_HIA_xDR_PROC_COUNT		0x30
> +-#define EIP197_HIA_xDR_PREP_PNTR		0x34
> +-#define EIP197_HIA_xDR_PROC_PNTR		0x38
> +-#define EIP197_HIA_xDR_STAT			0x3c
> ++#define EIP197_HIA_xDR_OFF(priv, r)		(EIP197_HIA_AIC_xDR(priv) + (r) * 0x1000)
> ++#define EIP197_HIA_CDR(priv, r)			(EIP197_HIA_xDR_OFF(priv, r))
> ++#define EIP197_HIA_RDR(priv, r)			(EIP197_HIA_xDR_OFF(priv, r) + 0x800)
> ++#define EIP197_HIA_xDR_RING_BASE_ADDR_LO	0x0000
> ++#define EIP197_HIA_xDR_RING_BASE_ADDR_HI	0x0004
> ++#define EIP197_HIA_xDR_RING_SIZE		0x0018
> ++#define EIP197_HIA_xDR_DESC_SIZE		0x001c
> ++#define EIP197_HIA_xDR_CFG			0x0020
> ++#define EIP197_HIA_xDR_DMA_CFG			0x0024
> ++#define EIP197_HIA_xDR_THRESH			0x0028
> ++#define EIP197_HIA_xDR_PREP_COUNT		0x002c
> ++#define EIP197_HIA_xDR_PROC_COUNT		0x0030
> ++#define EIP197_HIA_xDR_PREP_PNTR		0x0034
> ++#define EIP197_HIA_xDR_PROC_PNTR		0x0038
> ++#define EIP197_HIA_xDR_STAT			0x003c
> +
> + /* register offsets */
> +-#define EIP197_HIA_DFE_CFG			0x8c000
> +-#define EIP197_HIA_DFE_THR_CTRL			0x8c040
> +-#define EIP197_HIA_DFE_THR_STAT			0x8c044
> +-#define EIP197_HIA_DSE_CFG			0x8d000
> +-#define EIP197_HIA_DSE_THR_CTRL			0x8d040
> +-#define EIP197_HIA_DSE_THR_STAT			0x8d044
> +-#define EIP197_HIA_RA_PE_CTRL			0x90010
> +-#define EIP197_HIA_RA_PE_STAT			0x90014
> ++#define EIP197_HIA_DFE_CFG			0x0000
> ++#define EIP197_HIA_DFE_THR_CTRL			0x0000
> ++#define EIP197_HIA_DFE_THR_STAT			0x0004
> ++#define EIP197_HIA_DSE_CFG			0x0000
> ++#define EIP197_HIA_DSE_THR_CTRL			0x0000
> ++#define EIP197_HIA_DSE_THR_STAT			0x0004
> ++#define EIP197_HIA_RA_PE_CTRL			0x0010
> ++#define EIP197_HIA_RA_PE_STAT			0x0014
> + #define EIP197_HIA_AIC_R_OFF(r)			((r) * 0x1000)
> +-#define EIP197_HIA_AIC_R_ENABLE_CTRL(r)		(0x9e808 - EIP197_HIA_AIC_R_OFF(r))
> +-#define EIP197_HIA_AIC_R_ENABLED_STAT(r)	(0x9e810 - EIP197_HIA_AIC_R_OFF(r))
> +-#define EIP197_HIA_AIC_R_ACK(r)			(0x9e810 - EIP197_HIA_AIC_R_OFF(r))
> +-#define EIP197_HIA_AIC_R_ENABLE_CLR(r)		(0x9e814 - EIP197_HIA_AIC_R_OFF(r))
> +-#define EIP197_HIA_AIC_G_ENABLE_CTRL		0x9f808
> +-#define EIP197_HIA_AIC_G_ENABLED_STAT		0x9f810
> +-#define EIP197_HIA_AIC_G_ACK			0x9f810
> +-#define EIP197_HIA_MST_CTRL			0x9fff4
> +-#define EIP197_HIA_OPTIONS			0x9fff8
> +-#define EIP197_HIA_VERSION			0x9fffc
> +-#define EIP197_PE_IN_DBUF_THRES			0xa0000
> +-#define EIP197_PE_IN_TBUF_THRES			0xa0100
> +-#define EIP197_PE_ICE_SCRATCH_RAM		0xa0800
> +-#define EIP197_PE_ICE_PUE_CTRL			0xa0c80
> +-#define EIP197_PE_ICE_SCRATCH_CTRL		0xa0d04
> +-#define EIP197_PE_ICE_FPP_CTRL			0xa0d80
> +-#define EIP197_PE_ICE_RAM_CTRL			0xa0ff0
> +-#define EIP197_PE_EIP96_FUNCTION_EN		0xa1004
> +-#define EIP197_PE_EIP96_CONTEXT_CTRL		0xa1008
> +-#define EIP197_PE_EIP96_CONTEXT_STAT		0xa100c
> +-#define EIP197_PE_OUT_DBUF_THRES		0xa1c00
> +-#define EIP197_PE_OUT_TBUF_THRES		0xa1d00
> ++#define EIP197_HIA_AIC_R_ENABLE_CTRL(r)		(0xe008 - EIP197_HIA_AIC_R_OFF(r))
> ++#define EIP197_HIA_AIC_R_ENABLED_STAT(r)	(0xe010 - EIP197_HIA_AIC_R_OFF(r))
> ++#define EIP197_HIA_AIC_R_ACK(r)			(0xe010 - EIP197_HIA_AIC_R_OFF(r))
> ++#define EIP197_HIA_AIC_R_ENABLE_CLR(r)		(0xe014 - EIP197_HIA_AIC_R_OFF(r))
> ++#define EIP197_HIA_AIC_G_ENABLE_CTRL		0xf808
> ++#define EIP197_HIA_AIC_G_ENABLED_STAT		0xf810
> ++#define EIP197_HIA_AIC_G_ACK			0xf810
> ++#define EIP197_HIA_MST_CTRL			0xfff4
> ++#define EIP197_HIA_OPTIONS			0xfff8
> ++#define EIP197_HIA_VERSION			0xfffc
> ++#define EIP197_PE_IN_DBUF_THRES			0x0000
> ++#define EIP197_PE_IN_TBUF_THRES			0x0100
> ++#define EIP197_PE_ICE_SCRATCH_RAM		0x0800
> ++#define EIP197_PE_ICE_PUE_CTRL			0x0c80
> ++#define EIP197_PE_ICE_SCRATCH_CTRL		0x0d04
> ++#define EIP197_PE_ICE_FPP_CTRL			0x0d80
> ++#define EIP197_PE_ICE_RAM_CTRL			0x0ff0
> ++#define EIP197_PE_EIP96_FUNCTION_EN		0x1004
> ++#define EIP197_PE_EIP96_CONTEXT_CTRL		0x1008
> ++#define EIP197_PE_EIP96_CONTEXT_STAT		0x100c
> ++#define EIP197_PE_OUT_DBUF_THRES		0x1c00
> ++#define EIP197_PE_OUT_TBUF_THRES		0x1d00
> ++#define EIP197_MST_CTRL				0xfff4
> ++
> ++/* EIP197-specific registers, no indirection */
> + #define EIP197_CLASSIFICATION_RAMS		0xe0000
> + #define EIP197_TRC_CTRL				0xf0800
> + #define EIP197_TRC_LASTRES			0xf0804
> +@@ -90,7 +129,6 @@
> + #define EIP197_TRC_ECCDATASTAT			0xf083c
> + #define EIP197_TRC_ECCDATA			0xf0840
> + #define EIP197_CS_RAM_CTRL			0xf7ff0
> +-#define EIP197_MST_CTRL				0xffff4
> +
> + /* EIP197_HIA_xDR_DESC_SIZE */
> + #define EIP197_xDR_DESC_MODE_64BIT		BIT(31)
> +@@ -465,12 +503,33 @@ struct safexcel_work_data {
> + 	int ring;
> + };
> +
> ++enum safexcel_eip_version {
> ++	EIP97,
> ++	EIP197,
> ++};
> ++
> ++struct safexcel_register_offsets {
> ++	u32 hia_aic;
> ++	u32 hia_aic_g;
> ++	u32 hia_aic_r;
> ++	u32 hia_aic_xdr;
> ++	u32 hia_dfe;
> ++	u32 hia_dfe_thr;
> ++	u32 hia_dse;
> ++	u32 hia_dse_thr;
> ++	u32 hia_gen_cfg;
> ++	u32 pe;
> ++};
> ++
> + struct safexcel_crypto_priv {
> + 	void __iomem *base;
> + 	struct device *dev;
> + 	struct clk *clk;
> + 	struct safexcel_config config;
> +
> ++	enum safexcel_eip_version version;
> ++	struct safexcel_register_offsets offsets;
> ++
> + 	/* context DMA pool */
> + 	struct dma_pool *context_pool;
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
> +index 7c9a2d87135b..17a7725a6f6d 100644
> +--- a/drivers/crypto/inside-secure/safexcel_cipher.c
> ++++ b/drivers/crypto/inside-secure/safexcel_cipher.c
> +@@ -69,6 +69,7 @@ static int safexcel_aes_setkey(struct crypto_skcipher *ctfm, const u8 *key,
> + {
> + 	struct crypto_tfm *tfm = crypto_skcipher_tfm(ctfm);
> + 	struct safexcel_cipher_ctx *ctx = crypto_tfm_ctx(tfm);
> ++	struct safexcel_crypto_priv *priv = ctx->priv;
> + 	struct crypto_aes_ctx aes;
> + 	int ret, i;
> +
> +@@ -78,7 +79,7 @@ static int safexcel_aes_setkey(struct crypto_skcipher *ctfm, const u8 *key,
> + 		return ret;
> + 	}
> +
> +-	if (ctx->base.ctxr_dma) {
> ++	if (priv->version == EIP197 && ctx->base.ctxr_dma) {
> + 		for (i = 0; i < len / sizeof(u32); i++) {
> + 			if (ctx->key[i] != cpu_to_le32(aes.key_enc[i])) {
> + 				ctx->base.needs_inv = true;
> +@@ -411,9 +412,13 @@ static int safexcel_send(struct crypto_async_request *async,
> + 			 int *commands, int *results)
> + {
> + 	struct skcipher_request *req = skcipher_request_cast(async);
> ++	struct safexcel_cipher_ctx *ctx = crypto_tfm_ctx(req->base.tfm);
> + 	struct safexcel_cipher_req *sreq = skcipher_request_ctx(req);
> ++	struct safexcel_crypto_priv *priv = ctx->priv;
> + 	int ret;
> +
> ++	BUG_ON(priv->version == EIP97 && sreq->needs_inv);
> ++
> + 	if (sreq->needs_inv)
> + 		ret = safexcel_cipher_send_inv(async, ring, request,
> + 					       commands, results);
> +@@ -476,7 +481,7 @@ static int safexcel_aes(struct skcipher_request *req,
> + 	ctx->mode = mode;
> +
> + 	if (ctx->base.ctxr) {
> +-		if (ctx->base.needs_inv) {
> ++		if (priv->version == EIP197 && ctx->base.needs_inv) {
> + 			sreq->needs_inv = true;
> + 			ctx->base.needs_inv = false;
> + 		}
> +@@ -544,9 +549,14 @@ static void safexcel_skcipher_cra_exit(struct crypto_tfm *tfm)
> +
> + 	memzero_explicit(ctx->base.ctxr->data, 8 * sizeof(u32));
> +
> +-	ret = safexcel_cipher_exit_inv(tfm);
> +-	if (ret)
> +-		dev_warn(priv->dev, "cipher: invalidation error %d\n", ret);
> ++	if (priv->version == EIP197) {
> ++		ret = safexcel_cipher_exit_inv(tfm);
> ++		if (ret)
> ++			dev_warn(priv->dev, "cipher: invalidation error %d\n", ret);
> ++	} else {
> ++		dma_pool_free(priv->context_pool, ctx->base.ctxr,
> ++			      ctx->base.ctxr_dma);
> ++	}
> + }
> +
> + struct safexcel_alg_template safexcel_alg_ecb_aes = {
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 6912c032200b..b9a2bfd91c20 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -415,6 +415,8 @@ static int safexcel_handle_result(struct safexcel_crypto_priv *priv, int ring,
> + 	struct safexcel_ahash_req *req = ahash_request_ctx(areq);
> + 	int err;
> +
> ++	BUG_ON(priv->version == EIP97 && req->needs_inv);
> ++
> + 	if (req->needs_inv) {
> + 		req->needs_inv = false;
> + 		err = safexcel_handle_inv_result(priv, ring, async,
> +@@ -536,7 +538,8 @@ static int safexcel_ahash_enqueue(struct ahash_request *areq)
> + 	req->needs_inv = false;
> +
> + 	if (ctx->base.ctxr) {
> +-		if (!ctx->base.needs_inv && req->processed &&
> ++		if (priv->version == EIP197 &&
> ++		    !ctx->base.needs_inv && req->processed &&
> + 		    ctx->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED)
> + 			/* We're still setting needs_inv here, even though it is
> + 			 * cleared right away, because the needs_inv flag can be
> +@@ -729,9 +732,14 @@ static void safexcel_ahash_cra_exit(struct crypto_tfm *tfm)
> + 	if (!ctx->base.ctxr)
> + 		return;
> +
> +-	ret = safexcel_ahash_exit_inv(tfm);
> +-	if (ret)
> +-		dev_warn(priv->dev, "hash: invalidation error %d\n", ret);
> ++	if (priv->version == EIP197) {
> ++		ret = safexcel_ahash_exit_inv(tfm);
> ++		if (ret)
> ++			dev_warn(priv->dev, "hash: invalidation error %d\n", ret);
> ++	} else {
> ++		dma_pool_free(priv->context_pool, ctx->base.ctxr,
> ++			      ctx->base.ctxr_dma);
> ++	}
> + }
> +
> + struct safexcel_alg_template safexcel_alg_sha1 = {
> +@@ -935,6 +943,7 @@ static int safexcel_hmac_sha1_setkey(struct crypto_ahash *tfm, const u8 *key,
> + 				     unsigned int keylen)
> + {
> + 	struct safexcel_ahash_ctx *ctx = crypto_tfm_ctx(crypto_ahash_tfm(tfm));
> ++	struct safexcel_crypto_priv *priv = ctx->priv;
> + 	struct safexcel_ahash_export_state istate, ostate;
> + 	int ret, i;
> +
> +@@ -942,7 +951,7 @@ static int safexcel_hmac_sha1_setkey(struct crypto_ahash *tfm, const u8 *key,
> + 	if (ret)
> + 		return ret;
> +
> +-	if (ctx->base.ctxr) {
> ++	if (priv->version == EIP197 && ctx->base.ctxr) {
> + 		for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(u32); i++) {
> + 			if (ctx->ipad[i] != le32_to_cpu(istate.state[i]) ||
> + 			    ctx->opad[i] != le32_to_cpu(ostate.state[i])) {
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/617-crypto-inside-secure-make-function-safexcel_try_push.patch b/target/linux/mvebu/patches-4.14/617-crypto-inside-secure-make-function-safexcel_try_push.patch
> new file mode 100644
> index 0000000000..e05e0484c8
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/617-crypto-inside-secure-make-function-safexcel_try_push.patch
> @@ -0,0 +1,38 @@
> +From a7586ad9713b6eafecb6e6724c75a4c4071c9a23 Mon Sep 17 00:00:00 2001
> +From: Colin Ian King <colin.king at canonical.com>
> +Date: Tue, 16 Jan 2018 08:41:58 +0100
> +Subject: [PATCH 18/36] crypto: inside-secure - make function
> + safexcel_try_push_requests static
> +
> +The function safexcel_try_push_requests is local to the source and does
> +not need to be in global scope, so make it static.
> +
> +Cleans up sparse warning:
> +symbol 'safexcel_try_push_requests' was not declared. Should it be static?
> +
> +Signed-off-by: Colin Ian King <colin.king at canonical.com>
> +[Antoine: fixed alignment]
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 4 ++--
> + 1 file changed, 2 insertions(+), 2 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index b0787f5f62ad..46b691aae475 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -432,8 +432,8 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
> + }
> +
> + /* Called with ring's lock taken */
> +-int safexcel_try_push_requests(struct safexcel_crypto_priv *priv, int ring,
> +-			       int reqs)
> ++static int safexcel_try_push_requests(struct safexcel_crypto_priv *priv,
> ++				      int ring, int reqs)
> + {
> + 	int coal = min_t(int, reqs, EIP197_MAX_BATCH_SZ);
> +
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/618-crypto-inside-secure-do-not-overwrite-the-threshold-.patch b/target/linux/mvebu/patches-4.14/618-crypto-inside-secure-do-not-overwrite-the-threshold-.patch
> new file mode 100644
> index 0000000000..f2c62b02eb
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/618-crypto-inside-secure-do-not-overwrite-the-threshold-.patch
> @@ -0,0 +1,40 @@
> +From e0080b4984f85b8cb0b824528b562d3b126e7607 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Tue, 13 Feb 2018 09:26:51 +0100
> +Subject: [PATCH 19/36] crypto: inside-secure - do not overwrite the threshold
> + value
> +
> +This patch fixes the Inside Secure SafeXcel driver not to overwrite the
> +interrupt threshold value. In certain cases the value of this register,
> +which controls when to fire an interrupt, was overwritten. This led to
> +packets not being processed or acked, as the driver was never aware of
> +their completion.
> +
> +This patch fixes this behaviour by not setting the threshold when
> +requests are being processed by the engine.
> +
> +Fixes: dc7e28a3286e ("crypto: inside-secure - dequeue all requests at once")
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 3 +--
> + 1 file changed, 1 insertion(+), 2 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 46b691aae475..f4a76971b4ac 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -523,8 +523,7 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> +
> + 	if (!priv->ring[ring].busy) {
> + 		nreq -= safexcel_try_push_requests(priv, ring, nreq);
> +-		if (nreq)
> +-			priv->ring[ring].busy = true;
> ++		priv->ring[ring].busy = true;
> + 	}
> +
> + 	priv->ring[ring].requests_left += nreq;
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/619-crypto-inside-secure-keep-the-requests-push-pop-sync.patch b/target/linux/mvebu/patches-4.14/619-crypto-inside-secure-keep-the-requests-push-pop-sync.patch
> new file mode 100644
> index 0000000000..414b005810
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/619-crypto-inside-secure-keep-the-requests-push-pop-sync.patch
> @@ -0,0 +1,136 @@
> +From c5caee0a5c026f0c3a9605a113e16cd691d3428f Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Tue, 13 Feb 2018 09:26:56 +0100
> +Subject: [PATCH 20/36] crypto: inside-secure - keep the requests push/pop
> + synced
> +
> +This patch updates the Inside Secure SafeXcel driver to avoid being
> +out-of-sync between the number of requests sent and the one being
> +completed.
> +
> +The number of requests acknowledged by the driver can differ from the
> +threshold that was configured if new requests were pushed to the h/w in
> +the meantime. The driver wasn't taking those into account, and the
> +number of remaining requests to handle (used to reconfigure the
> +interrupt threshold) could get out of sync.
> +
> +This patch fixes it by tracking the total number of requests sent to
> +the hardware rather than only the number of requests left, so that new
> +requests are taken into account.
> +
> +Fixes: dc7e28a3286e ("crypto: inside-secure - dequeue all requests at once")
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 28 +++++++++++++---------------
> + drivers/crypto/inside-secure/safexcel.h |  6 ++----
> + 2 files changed, 15 insertions(+), 19 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index f4a76971b4ac..fe1f55c3e501 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -432,20 +432,18 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
> + }
> +
> + /* Called with ring's lock taken */
> +-static int safexcel_try_push_requests(struct safexcel_crypto_priv *priv,
> +-				      int ring, int reqs)
> ++static void safexcel_try_push_requests(struct safexcel_crypto_priv *priv,
> ++				       int ring)
> + {
> +-	int coal = min_t(int, reqs, EIP197_MAX_BATCH_SZ);
> ++	int coal = min_t(int, priv->ring[ring].requests, EIP197_MAX_BATCH_SZ);
> +
> + 	if (!coal)
> +-		return 0;
> ++		return;
> +
> + 	/* Configure when we want an interrupt */
> + 	writel(EIP197_HIA_RDR_THRESH_PKT_MODE |
> + 	       EIP197_HIA_RDR_THRESH_PROC_PKT(coal),
> + 	       EIP197_HIA_RDR(priv, ring) + EIP197_HIA_xDR_THRESH);
> +-
> +-	return coal;
> + }
> +
> + void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> +@@ -521,13 +519,13 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> +
> + 	spin_lock_bh(&priv->ring[ring].egress_lock);
> +
> ++	priv->ring[ring].requests += nreq;
> ++
> + 	if (!priv->ring[ring].busy) {
> +-		nreq -= safexcel_try_push_requests(priv, ring, nreq);
> ++		safexcel_try_push_requests(priv, ring);
> + 		priv->ring[ring].busy = true;
> + 	}
> +
> +-	priv->ring[ring].requests_left += nreq;
> +-
> + 	spin_unlock_bh(&priv->ring[ring].egress_lock);
> +
> + 	/* let the RDR know we have pending descriptors */
> +@@ -631,7 +629,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + {
> + 	struct safexcel_request *sreq;
> + 	struct safexcel_context *ctx;
> +-	int ret, i, nreq, ndesc, tot_descs, done;
> ++	int ret, i, nreq, ndesc, tot_descs, handled = 0;
> + 	bool should_complete;
> +
> + handle_results:
> +@@ -667,6 +665,7 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> +
> + 		kfree(sreq);
> + 		tot_descs += ndesc;
> ++		handled++;
> + 	}
> +
> + acknowledge:
> +@@ -685,11 +684,10 @@ static inline void safexcel_handle_result_descriptor(struct safexcel_crypto_priv
> + requests_left:
> + 	spin_lock_bh(&priv->ring[ring].egress_lock);
> +
> +-	done = safexcel_try_push_requests(priv, ring,
> +-					  priv->ring[ring].requests_left);
> ++	priv->ring[ring].requests -= handled;
> ++	safexcel_try_push_requests(priv, ring);
> +
> +-	priv->ring[ring].requests_left -= done;
> +-	if (!done && !priv->ring[ring].requests_left)
> ++	if (!priv->ring[ring].requests)
> + 		priv->ring[ring].busy = false;
> +
> + 	spin_unlock_bh(&priv->ring[ring].egress_lock);
> +@@ -970,7 +968,7 @@ static int safexcel_probe(struct platform_device *pdev)
> + 			goto err_clk;
> + 		}
> +
> +-		priv->ring[i].requests_left = 0;
> ++		priv->ring[i].requests = 0;
> + 		priv->ring[i].busy = false;
> +
> + 		crypto_init_queue(&priv->ring[i].queue,
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 4e219c21608b..caaf6a81b162 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -551,10 +551,8 @@ struct safexcel_crypto_priv {
> + 		struct crypto_queue queue;
> + 		spinlock_t queue_lock;
> +
> +-		/* Number of requests in the engine that needs the threshold
> +-		 * interrupt to be set up.
> +-		 */
> +-		int requests_left;
> ++		/* Number of requests in the engine. */
> ++		int requests;
> +
> + 		/* The ring is currently handling at least one request */
> + 		bool busy;
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/620-crypto-inside-secure-unmap-the-result-in-the-hash-se.patch b/target/linux/mvebu/patches-4.14/620-crypto-inside-secure-unmap-the-result-in-the-hash-se.patch
> new file mode 100644
> index 0000000000..1404188e1b
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/620-crypto-inside-secure-unmap-the-result-in-the-hash-se.patch
> @@ -0,0 +1,42 @@
> +From 832e97cc80a5bf9992edbc6178de62d63ef9ad77 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Tue, 13 Feb 2018 09:26:57 +0100
> +Subject: [PATCH 21/36] crypto: inside-secure - unmap the result in the hash
> + send error path
> +
> +This patch adds a label to unmap the result buffer in the hash send
> +function error path.
> +
> +Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 4 +++-
> + 1 file changed, 3 insertions(+), 1 deletion(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index b9a2bfd91c20..7b181bd6959f 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -303,7 +303,7 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 				   req->state_sz);
> + 	if (IS_ERR(rdesc)) {
> + 		ret = PTR_ERR(rdesc);
> +-		goto cdesc_rollback;
> ++		goto unmap_result;
> + 	}
> +
> + 	spin_unlock_bh(&priv->ring[ring].egress_lock);
> +@@ -315,6 +315,8 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 	*results = 1;
> + 	return 0;
> +
> ++unmap_result:
> ++	dma_unmap_sg(priv->dev, areq->src, req->nents, DMA_TO_DEVICE);
> + cdesc_rollback:
> + 	for (i = 0; i < n_cdesc; i++)
> + 		safexcel_ring_rollback_wptr(priv, &priv->ring[ring].cdr);
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/621-crypto-inside-secure-move-hash-result-dma-mapping-to.patch b/target/linux/mvebu/patches-4.14/621-crypto-inside-secure-move-hash-result-dma-mapping-to.patch
> new file mode 100644
> index 0000000000..2254286906
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/621-crypto-inside-secure-move-hash-result-dma-mapping-to.patch
> @@ -0,0 +1,115 @@
> +From a7476abde5ff9f8b7c6eaa2df0e2b8aadba4a705 Mon Sep 17 00:00:00 2001
> +From: Ofer Heifetz <oferh at marvell.com>
> +Date: Mon, 26 Feb 2018 14:45:10 +0100
> +Subject: [PATCH 22/36] crypto: inside-secure - move hash result dma mapping to
> + request
> +
> +In heavy traffic the DMA mapping is overwritten by multiple requests as
> +the DMA address is stored in a global context. This patch moves this
> +information to the per-hash request context so that it can't be
> +overwritten.
> +
> +Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
> +Signed-off-by: Ofer Heifetz <oferh at marvell.com>
> +[Antoine: rebased the patch, small fixes, commit message.]
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c      |  7 +------
> + drivers/crypto/inside-secure/safexcel.h      |  4 +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 17 ++++++++++++-----
> + 3 files changed, 14 insertions(+), 14 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index fe1f55c3e501..97cd64041189 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -538,15 +538,10 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + }
> +
> + void safexcel_free_context(struct safexcel_crypto_priv *priv,
> +-			   struct crypto_async_request *req,
> +-			   int result_sz)
> ++			   struct crypto_async_request *req)
> + {
> + 	struct safexcel_context *ctx = crypto_tfm_ctx(req->tfm);
> +
> +-	if (ctx->result_dma)
> +-		dma_unmap_single(priv->dev, ctx->result_dma, result_sz,
> +-				 DMA_FROM_DEVICE);
> +-
> + 	if (ctx->cache) {
> + 		dma_unmap_single(priv->dev, ctx->cache_dma, ctx->cache_sz,
> + 				 DMA_TO_DEVICE);
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index caaf6a81b162..4e14c7e730c4 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -580,7 +580,6 @@ struct safexcel_context {
> + 	bool exit_inv;
> +
> + 	/* Used for ahash requests */
> +-	dma_addr_t result_dma;
> + 	void *cache;
> + 	dma_addr_t cache_dma;
> + 	unsigned int cache_sz;
> +@@ -608,8 +607,7 @@ struct safexcel_inv_result {
> + void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring);
> + void safexcel_complete(struct safexcel_crypto_priv *priv, int ring);
> + void safexcel_free_context(struct safexcel_crypto_priv *priv,
> +-				  struct crypto_async_request *req,
> +-				  int result_sz);
> ++				  struct crypto_async_request *req);
> + int safexcel_invalidate_cache(struct crypto_async_request *async,
> + 			      struct safexcel_crypto_priv *priv,
> + 			      dma_addr_t ctxr_dma, int ring,
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 7b181bd6959f..a31837f3b506 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -34,6 +34,7 @@ struct safexcel_ahash_req {
> + 	bool needs_inv;
> +
> + 	int nents;
> ++	dma_addr_t result_dma;
> +
> + 	u8 state_sz;    /* expected sate size, only set once */
> + 	u32 state[SHA256_DIGEST_SIZE / sizeof(u32)] __aligned(sizeof(u32));
> +@@ -158,7 +159,13 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
> + 		sreq->nents = 0;
> + 	}
> +
> +-	safexcel_free_context(priv, async, sreq->state_sz);
> ++	if (sreq->result_dma) {
> ++		dma_unmap_single(priv->dev, sreq->result_dma, sreq->state_sz,
> ++				 DMA_FROM_DEVICE);
> ++		sreq->result_dma = 0;
> ++	}
> ++
> ++	safexcel_free_context(priv, async);
> +
> + 	cache_len = sreq->len - sreq->processed;
> + 	if (cache_len)
> +@@ -291,15 +298,15 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 	/* Add the token */
> + 	safexcel_hash_token(first_cdesc, len, req->state_sz);
> +
> +-	ctx->base.result_dma = dma_map_single(priv->dev, req->state,
> +-					      req->state_sz, DMA_FROM_DEVICE);
> +-	if (dma_mapping_error(priv->dev, ctx->base.result_dma)) {
> ++	req->result_dma = dma_map_single(priv->dev, req->state, req->state_sz,
> ++					 DMA_FROM_DEVICE);
> ++	if (dma_mapping_error(priv->dev, req->result_dma)) {
> + 		ret = -EINVAL;
> + 		goto cdesc_rollback;
> + 	}
> +
> + 	/* Add a result descriptor */
> +-	rdesc = safexcel_add_rdesc(priv, ring, 1, 1, ctx->base.result_dma,
> ++	rdesc = safexcel_add_rdesc(priv, ring, 1, 1, req->result_dma,
> + 				   req->state_sz);
> + 	if (IS_ERR(rdesc)) {
> + 		ret = PTR_ERR(rdesc);
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/622-crypto-inside-secure-move-cache-result-dma-mapping-t.patch b/target/linux/mvebu/patches-4.14/622-crypto-inside-secure-move-cache-result-dma-mapping-t.patch
> new file mode 100644
> index 0000000000..c3b18af150
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/622-crypto-inside-secure-move-cache-result-dma-mapping-t.patch
> @@ -0,0 +1,152 @@
> +From 4ec1e0702a31576a91eb6cd9ace703ae7acd99c1 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 26 Feb 2018 14:45:11 +0100
> +Subject: [PATCH 23/36] crypto: inside-secure - move cache result dma mapping
> + to request
> +
> +In heavy traffic the DMA mapping is overwritten by multiple requests as
> +the DMA address is stored in a global context. This patch moves this
> +information to the per-hash request context so that it can't be
> +overwritten.
> +
> +Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c      | 14 ----------
> + drivers/crypto/inside-secure/safexcel.h      |  7 -----
> + drivers/crypto/inside-secure/safexcel_hash.c | 42 ++++++++++++----------------
> + 3 files changed, 18 insertions(+), 45 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 97cd64041189..09adeaa0da6b 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -537,20 +537,6 @@ void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring)
> + 	       EIP197_HIA_CDR(priv, ring) + EIP197_HIA_xDR_PREP_COUNT);
> + }
> +
> +-void safexcel_free_context(struct safexcel_crypto_priv *priv,
> +-			   struct crypto_async_request *req)
> +-{
> +-	struct safexcel_context *ctx = crypto_tfm_ctx(req->tfm);
> +-
> +-	if (ctx->cache) {
> +-		dma_unmap_single(priv->dev, ctx->cache_dma, ctx->cache_sz,
> +-				 DMA_TO_DEVICE);
> +-		kfree(ctx->cache);
> +-		ctx->cache = NULL;
> +-		ctx->cache_sz = 0;
> +-	}
> +-}
> +-
> + void safexcel_complete(struct safexcel_crypto_priv *priv, int ring)
> + {
> + 	struct safexcel_command_desc *cdesc;
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 4e14c7e730c4..d8dff65fc311 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -578,11 +578,6 @@ struct safexcel_context {
> + 	int ring;
> + 	bool needs_inv;
> + 	bool exit_inv;
> +-
> +-	/* Used for ahash requests */
> +-	void *cache;
> +-	dma_addr_t cache_dma;
> +-	unsigned int cache_sz;
> + };
> +
> + /*
> +@@ -606,8 +601,6 @@ struct safexcel_inv_result {
> +
> + void safexcel_dequeue(struct safexcel_crypto_priv *priv, int ring);
> + void safexcel_complete(struct safexcel_crypto_priv *priv, int ring);
> +-void safexcel_free_context(struct safexcel_crypto_priv *priv,
> +-				  struct crypto_async_request *req);
> + int safexcel_invalidate_cache(struct crypto_async_request *async,
> + 			      struct safexcel_crypto_priv *priv,
> + 			      dma_addr_t ctxr_dma, int ring,
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index a31837f3b506..9703a4063cfc 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -43,6 +43,9 @@ struct safexcel_ahash_req {
> + 	u64 processed;
> +
> + 	u8 cache[SHA256_BLOCK_SIZE] __aligned(sizeof(u32));
> ++	dma_addr_t cache_dma;
> ++	unsigned int cache_sz;
> ++
> + 	u8 cache_next[SHA256_BLOCK_SIZE] __aligned(sizeof(u32));
> + };
> +
> +@@ -165,7 +168,11 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
> + 		sreq->result_dma = 0;
> + 	}
> +
> +-	safexcel_free_context(priv, async);
> ++	if (sreq->cache_dma) {
> ++		dma_unmap_single(priv->dev, sreq->cache_dma, sreq->cache_sz,
> ++				 DMA_TO_DEVICE);
> ++		sreq->cache_dma = 0;
> ++	}
> +
> + 	cache_len = sreq->len - sreq->processed;
> + 	if (cache_len)
> +@@ -227,24 +234,15 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> +
> + 	/* Add a command descriptor for the cached data, if any */
> + 	if (cache_len) {
> +-		ctx->base.cache = kzalloc(cache_len, EIP197_GFP_FLAGS(*async));
> +-		if (!ctx->base.cache) {
> +-			ret = -ENOMEM;
> +-			goto unlock;
> +-		}
> +-		memcpy(ctx->base.cache, req->cache, cache_len);
> +-		ctx->base.cache_dma = dma_map_single(priv->dev, ctx->base.cache,
> +-						     cache_len, DMA_TO_DEVICE);
> +-		if (dma_mapping_error(priv->dev, ctx->base.cache_dma)) {
> +-			ret = -EINVAL;
> +-			goto free_cache;
> +-		}
> ++		req->cache_dma = dma_map_single(priv->dev, req->cache,
> ++						cache_len, DMA_TO_DEVICE);
> ++		if (dma_mapping_error(priv->dev, req->cache_dma))
> ++			return -EINVAL;
> +
> +-		ctx->base.cache_sz = cache_len;
> ++		req->cache_sz = cache_len;
> + 		first_cdesc = safexcel_add_cdesc(priv, ring, 1,
> + 						 (cache_len == len),
> +-						 ctx->base.cache_dma,
> +-						 cache_len, len,
> ++						 req->cache_dma, cache_len, len,
> + 						 ctx->base.ctxr_dma);
> + 		if (IS_ERR(first_cdesc)) {
> + 			ret = PTR_ERR(first_cdesc);
> +@@ -328,16 +326,12 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 	for (i = 0; i < n_cdesc; i++)
> + 		safexcel_ring_rollback_wptr(priv, &priv->ring[ring].cdr);
> + unmap_cache:
> +-	if (ctx->base.cache_dma) {
> +-		dma_unmap_single(priv->dev, ctx->base.cache_dma,
> +-				 ctx->base.cache_sz, DMA_TO_DEVICE);
> +-		ctx->base.cache_sz = 0;
> ++	if (req->cache_dma) {
> ++		dma_unmap_single(priv->dev, req->cache_dma, req->cache_sz,
> ++				 DMA_TO_DEVICE);
> ++		req->cache_sz = 0;
> + 	}
> +-free_cache:
> +-	kfree(ctx->base.cache);
> +-	ctx->base.cache = NULL;
> +
> +-unlock:
> + 	spin_unlock_bh(&priv->ring[ring].egress_lock);
> + 	return ret;
> + }
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/623-crypto-inside-secure-fix-missing-unlock-on-error-in-.patch b/target/linux/mvebu/patches-4.14/623-crypto-inside-secure-fix-missing-unlock-on-error-in-.patch
> new file mode 100644
> index 0000000000..8852f9b698
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/623-crypto-inside-secure-fix-missing-unlock-on-error-in-.patch
> @@ -0,0 +1,36 @@
> +From d26b54baf430da2d33288d62e0e3bff07e02c28d Mon Sep 17 00:00:00 2001
> +From: "weiyongjun \\(A\\)" <weiyongjun1 at huawei.com>
> +Date: Tue, 13 Mar 2018 14:54:03 +0000
> +Subject: [PATCH 24/36] crypto: inside-secure - fix missing unlock on error in
> + safexcel_ahash_send_req()
> +
> +Add the missing unlock before return from function
> +safexcel_ahash_send_req() in the error handling case.
> +
> +Fixes: cff9a17545a3 ("crypto: inside-secure - move cache result dma mapping to request")
> +Signed-off-by: Wei Yongjun <weiyongjun1 at huawei.com>
> +Acked-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 4 +++-
> + 1 file changed, 3 insertions(+), 1 deletion(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 9703a4063cfc..a7702da92b02 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -236,8 +236,10 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 	if (cache_len) {
> + 		req->cache_dma = dma_map_single(priv->dev, req->cache,
> + 						cache_len, DMA_TO_DEVICE);
> +-		if (dma_mapping_error(priv->dev, req->cache_dma))
> ++		if (dma_mapping_error(priv->dev, req->cache_dma)) {
> ++			spin_unlock_bh(&priv->ring[ring].egress_lock);
> + 			return -EINVAL;
> ++		}
> +
> + 		req->cache_sz = cache_len;
> + 		first_cdesc = safexcel_add_cdesc(priv, ring, 1,
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/624-crypto-inside-secure-improve-clock-initialization.patch b/target/linux/mvebu/patches-4.14/624-crypto-inside-secure-improve-clock-initialization.patch
> new file mode 100644
> index 0000000000..3ce5bc11dd
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/624-crypto-inside-secure-improve-clock-initialization.patch
> @@ -0,0 +1,48 @@
> +From b2da8a42670c569d084fbfb92ffde36cc7524f53 Mon Sep 17 00:00:00 2001
> +From: Gregory CLEMENT <gregory.clement at bootlin.com>
> +Date: Tue, 13 Mar 2018 17:48:41 +0100
> +Subject: [PATCH 25/36] crypto: inside-secure - improve clock initialization
> +
> +The clock is optional, but if it is present we should manage it. If
> +there is an error while trying to get it, we should exit and report
> +this error.
> +
> +So instead of returning an error only in the -EPROBE_DEFER case, invert
> +the logic and ignore the clock only when it is not present (the -ENOENT
> +case).
> +
> +Signed-off-by: Gregory CLEMENT <gregory.clement at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 11 ++++++-----
> + 1 file changed, 6 insertions(+), 5 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 09adeaa0da6b..cbcb5d9f17bd 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -882,16 +882,17 @@ static int safexcel_probe(struct platform_device *pdev)
> + 	}
> +
> + 	priv->clk = devm_clk_get(&pdev->dev, NULL);
> +-	if (!IS_ERR(priv->clk)) {
> ++	ret = PTR_ERR_OR_ZERO(priv->clk);
> ++	/* The clock isn't mandatory */
> ++	if  (ret != -ENOENT) {
> ++		if (ret)
> ++			return ret;
> ++
> + 		ret = clk_prepare_enable(priv->clk);
> + 		if (ret) {
> + 			dev_err(dev, "unable to enable clk (%d)\n", ret);
> + 			return ret;
> + 		}
> +-	} else {
> +-		/* The clock isn't mandatory */
> +-		if (PTR_ERR(priv->clk) == -EPROBE_DEFER)
> +-			return -EPROBE_DEFER;
> + 	}
> +
> + 	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/625-crypto-inside-secure-fix-clock-resource-by-adding-a-.patch b/target/linux/mvebu/patches-4.14/625-crypto-inside-secure-fix-clock-resource-by-adding-a-.patch
> new file mode 100644
> index 0000000000..3366950758
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/625-crypto-inside-secure-fix-clock-resource-by-adding-a-.patch
> @@ -0,0 +1,146 @@
> +From 9f9f1c49dcce9bef968bcef522f78048255a7416 Mon Sep 17 00:00:00 2001
> +From: Gregory CLEMENT <gregory.clement at bootlin.com>
> +Date: Tue, 13 Mar 2018 17:48:42 +0100
> +Subject: [PATCH 26/36] crypto: inside-secure - fix clock resource by adding a
> + register clock
> +
> +On Armada 7K/8K we need to explicitly enable the register clock. This
> +clock is optional because not all the SoCs using this IP need it, but
> +for Armada 7K/8K at least it is actually mandatory.
> +
> +The binding documentation is updated accordingly.
> +
> +Signed-off-by: Gregory CLEMENT <gregory.clement at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + .../bindings/crypto/inside-secure-safexcel.txt     |  6 +++-
> + drivers/crypto/inside-secure/safexcel.c            | 34 ++++++++++++++++------
> + drivers/crypto/inside-secure/safexcel.h            |  1 +
> + 3 files changed, 31 insertions(+), 10 deletions(-)
> +
> +diff --git a/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt b/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt
> +index fbc07d12322f..5a3d0829ddf2 100644
> +--- a/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt
> ++++ b/Documentation/devicetree/bindings/crypto/inside-secure-safexcel.txt
> +@@ -7,7 +7,11 @@ Required properties:
> + - interrupt-names: Should be "ring0", "ring1", "ring2", "ring3", "eip", "mem".
> +
> + Optional properties:
> +-- clocks: Reference to the crypto engine clock.
> ++- clocks: Reference to the crypto engine clocks, the second clock is
> ++          needed for the Armada 7K/8K SoCs.
> ++- clock-names: mandatory if there is a second clock, in this case the
> ++               name must be "core" for the first clock and "reg" for
> ++               the second one.
> +
> + Example:
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index cbcb5d9f17bd..2f68b4ed5500 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -895,16 +895,30 @@ static int safexcel_probe(struct platform_device *pdev)
> + 		}
> + 	}
> +
> ++	priv->reg_clk = devm_clk_get(&pdev->dev, "reg");
> ++	ret = PTR_ERR_OR_ZERO(priv->reg_clk);
> ++	/* The clock isn't mandatory */
> ++	if  (ret != -ENOENT) {
> ++		if (ret)
> ++			goto err_core_clk;
> ++
> ++		ret = clk_prepare_enable(priv->reg_clk);
> ++		if (ret) {
> ++			dev_err(dev, "unable to enable reg clk (%d)\n", ret);
> ++			goto err_core_clk;
> ++		}
> ++	}
> ++
> + 	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));
> + 	if (ret)
> +-		goto err_clk;
> ++		goto err_reg_clk;
> +
> + 	priv->context_pool = dmam_pool_create("safexcel-context", dev,
> + 					      sizeof(struct safexcel_context_record),
> + 					      1, 0);
> + 	if (!priv->context_pool) {
> + 		ret = -ENOMEM;
> +-		goto err_clk;
> ++		goto err_reg_clk;
> + 	}
> +
> + 	safexcel_configure(priv);
> +@@ -919,12 +933,12 @@ static int safexcel_probe(struct platform_device *pdev)
> + 						     &priv->ring[i].cdr,
> + 						     &priv->ring[i].rdr);
> + 		if (ret)
> +-			goto err_clk;
> ++			goto err_reg_clk;
> +
> + 		ring_irq = devm_kzalloc(dev, sizeof(*ring_irq), GFP_KERNEL);
> + 		if (!ring_irq) {
> + 			ret = -ENOMEM;
> +-			goto err_clk;
> ++			goto err_reg_clk;
> + 		}
> +
> + 		ring_irq->priv = priv;
> +@@ -936,7 +950,7 @@ static int safexcel_probe(struct platform_device *pdev)
> + 						ring_irq);
> + 		if (irq < 0) {
> + 			ret = irq;
> +-			goto err_clk;
> ++			goto err_reg_clk;
> + 		}
> +
> + 		priv->ring[i].work_data.priv = priv;
> +@@ -947,7 +961,7 @@ static int safexcel_probe(struct platform_device *pdev)
> + 		priv->ring[i].workqueue = create_singlethread_workqueue(wq_name);
> + 		if (!priv->ring[i].workqueue) {
> + 			ret = -ENOMEM;
> +-			goto err_clk;
> ++			goto err_reg_clk;
> + 		}
> +
> + 		priv->ring[i].requests = 0;
> +@@ -968,18 +982,20 @@ static int safexcel_probe(struct platform_device *pdev)
> + 	ret = safexcel_hw_init(priv);
> + 	if (ret) {
> + 		dev_err(dev, "EIP h/w init failed (%d)\n", ret);
> +-		goto err_clk;
> ++		goto err_reg_clk;
> + 	}
> +
> + 	ret = safexcel_register_algorithms(priv);
> + 	if (ret) {
> + 		dev_err(dev, "Failed to register algorithms (%d)\n", ret);
> +-		goto err_clk;
> ++		goto err_reg_clk;
> + 	}
> +
> + 	return 0;
> +
> +-err_clk:
> ++err_reg_clk:
> ++	clk_disable_unprepare(priv->reg_clk);
> ++err_core_clk:
> + 	clk_disable_unprepare(priv->clk);
> + 	return ret;
> + }
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index d8dff65fc311..4efeb0251daf 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -525,6 +525,7 @@ struct safexcel_crypto_priv {
> + 	void __iomem *base;
> + 	struct device *dev;
> + 	struct clk *clk;
> ++	struct clk *reg_clk;
> + 	struct safexcel_config config;
> +
> + 	enum safexcel_eip_version version;
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/626-crypto-inside-secure-move-the-digest-to-the-request-.patch b/target/linux/mvebu/patches-4.14/626-crypto-inside-secure-move-the-digest-to-the-request-.patch
> new file mode 100644
> index 0000000000..535e51f5f7
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/626-crypto-inside-secure-move-the-digest-to-the-request-.patch
> @@ -0,0 +1,161 @@
> +From ba9692358a5cba4770b5230d31e71944f517a06c Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 19 Mar 2018 09:21:13 +0100
> +Subject: [PATCH 27/36] crypto: inside-secure - move the digest to the request
> + context
> +
> +This patch moves the digest information from the transformation
> +context to the request context. This fixes cases where HMAC init
> +functions were called and overrode the digest value for a short period
> +of time, as the HMAC init functions call the SHA init one, which resets
> +the value. This led to a small percentage of HMACs being incorrectly
> +computed under heavy load.
> +
> +Fixes: 1b44c5a60c13 ("crypto: inside-secure - add SafeXcel EIP197 crypto engine driver")
> +Suggested-by: Ofer Heifetz <oferh at marvell.com>
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +[Ofer here did all the work, from seeing the issue to understanding the
> +root cause. I only made the patch.]
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 30 +++++++++++++++++-----------
> + 1 file changed, 18 insertions(+), 12 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index a7702da92b02..cfcae5e51b9d 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -21,7 +21,6 @@ struct safexcel_ahash_ctx {
> + 	struct safexcel_crypto_priv *priv;
> +
> + 	u32 alg;
> +-	u32 digest;
> +
> + 	u32 ipad[SHA1_DIGEST_SIZE / sizeof(u32)];
> + 	u32 opad[SHA1_DIGEST_SIZE / sizeof(u32)];
> +@@ -36,6 +35,8 @@ struct safexcel_ahash_req {
> + 	int nents;
> + 	dma_addr_t result_dma;
> +
> ++	u32 digest;
> ++
> + 	u8 state_sz;    /* expected sate size, only set once */
> + 	u32 state[SHA256_DIGEST_SIZE / sizeof(u32)] __aligned(sizeof(u32));
> +
> +@@ -53,6 +54,8 @@ struct safexcel_ahash_export_state {
> + 	u64 len;
> + 	u64 processed;
> +
> ++	u32 digest;
> ++
> + 	u32 state[SHA256_DIGEST_SIZE / sizeof(u32)];
> + 	u8 cache[SHA256_BLOCK_SIZE];
> + };
> +@@ -86,9 +89,9 @@ static void safexcel_context_control(struct safexcel_ahash_ctx *ctx,
> +
> + 	cdesc->control_data.control0 |= CONTEXT_CONTROL_TYPE_HASH_OUT;
> + 	cdesc->control_data.control0 |= ctx->alg;
> +-	cdesc->control_data.control0 |= ctx->digest;
> ++	cdesc->control_data.control0 |= req->digest;
> +
> +-	if (ctx->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED) {
> ++	if (req->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED) {
> + 		if (req->processed) {
> + 			if (ctx->alg == CONTEXT_CONTROL_CRYPTO_ALG_SHA1)
> + 				cdesc->control_data.control0 |= CONTEXT_CONTROL_SIZE(6);
> +@@ -116,7 +119,7 @@ static void safexcel_context_control(struct safexcel_ahash_ctx *ctx,
> + 			if (req->finish)
> + 				ctx->base.ctxr->data[i] = cpu_to_le32(req->processed / blocksize);
> + 		}
> +-	} else if (ctx->digest == CONTEXT_CONTROL_DIGEST_HMAC) {
> ++	} else if (req->digest == CONTEXT_CONTROL_DIGEST_HMAC) {
> + 		cdesc->control_data.control0 |= CONTEXT_CONTROL_SIZE(10);
> +
> + 		memcpy(ctx->base.ctxr->data, ctx->ipad, digestsize);
> +@@ -545,7 +548,7 @@ static int safexcel_ahash_enqueue(struct ahash_request *areq)
> + 	if (ctx->base.ctxr) {
> + 		if (priv->version == EIP197 &&
> + 		    !ctx->base.needs_inv && req->processed &&
> +-		    ctx->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED)
> ++		    req->digest == CONTEXT_CONTROL_DIGEST_PRECOMPUTED)
> + 			/* We're still setting needs_inv here, even though it is
> + 			 * cleared right away, because the needs_inv flag can be
> + 			 * set in other functions and we want to keep the same
> +@@ -580,7 +583,6 @@ static int safexcel_ahash_enqueue(struct ahash_request *areq)
> +
> + static int safexcel_ahash_update(struct ahash_request *areq)
> + {
> +-	struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq));
> + 	struct safexcel_ahash_req *req = ahash_request_ctx(areq);
> + 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq);
> +
> +@@ -596,7 +598,7 @@ static int safexcel_ahash_update(struct ahash_request *areq)
> + 	 * We're not doing partial updates when performing an hmac request.
> + 	 * Everything will be handled by the final() call.
> + 	 */
> +-	if (ctx->digest == CONTEXT_CONTROL_DIGEST_HMAC)
> ++	if (req->digest == CONTEXT_CONTROL_DIGEST_HMAC)
> + 		return 0;
> +
> + 	if (req->hmac)
> +@@ -655,6 +657,8 @@ static int safexcel_ahash_export(struct ahash_request *areq, void *out)
> + 	export->len = req->len;
> + 	export->processed = req->processed;
> +
> ++	export->digest = req->digest;
> ++
> + 	memcpy(export->state, req->state, req->state_sz);
> + 	memcpy(export->cache, req->cache, crypto_ahash_blocksize(ahash));
> +
> +@@ -675,6 +679,8 @@ static int safexcel_ahash_import(struct ahash_request *areq, const void *in)
> + 	req->len = export->len;
> + 	req->processed = export->processed;
> +
> ++	req->digest = export->digest;
> ++
> + 	memcpy(req->cache, export->cache, crypto_ahash_blocksize(ahash));
> + 	memcpy(req->state, export->state, req->state_sz);
> +
> +@@ -711,7 +717,7 @@ static int safexcel_sha1_init(struct ahash_request *areq)
> + 	req->state[4] = SHA1_H4;
> +
> + 	ctx->alg = CONTEXT_CONTROL_CRYPTO_ALG_SHA1;
> +-	ctx->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
> ++	req->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
> + 	req->state_sz = SHA1_DIGEST_SIZE;
> +
> + 	return 0;
> +@@ -778,10 +784,10 @@ struct safexcel_alg_template safexcel_alg_sha1 = {
> +
> + static int safexcel_hmac_sha1_init(struct ahash_request *areq)
> + {
> +-	struct safexcel_ahash_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(areq));
> ++	struct safexcel_ahash_req *req = ahash_request_ctx(areq);
> +
> + 	safexcel_sha1_init(areq);
> +-	ctx->digest = CONTEXT_CONTROL_DIGEST_HMAC;
> ++	req->digest = CONTEXT_CONTROL_DIGEST_HMAC;
> + 	return 0;
> + }
> +
> +@@ -1019,7 +1025,7 @@ static int safexcel_sha256_init(struct ahash_request *areq)
> + 	req->state[7] = SHA256_H7;
> +
> + 	ctx->alg = CONTEXT_CONTROL_CRYPTO_ALG_SHA256;
> +-	ctx->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
> ++	req->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
> + 	req->state_sz = SHA256_DIGEST_SIZE;
> +
> + 	return 0;
> +@@ -1081,7 +1087,7 @@ static int safexcel_sha224_init(struct ahash_request *areq)
> + 	req->state[7] = SHA224_H7;
> +
> + 	ctx->alg = CONTEXT_CONTROL_CRYPTO_ALG_SHA224;
> +-	ctx->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
> ++	req->digest = CONTEXT_CONTROL_DIGEST_PRECOMPUTED;
> + 	req->state_sz = SHA256_DIGEST_SIZE;
> +
> + 	return 0;
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/627-crypto-inside-secure-fix-typo-s-allways-always-in-a-.patch b/target/linux/mvebu/patches-4.14/627-crypto-inside-secure-fix-typo-s-allways-always-in-a-.patch
> new file mode 100644
> index 0000000000..578f5045f6
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/627-crypto-inside-secure-fix-typo-s-allways-always-in-a-.patch
> @@ -0,0 +1,45 @@
> +From 3648bce52f9a3f432a12c2bd0453019a90341660 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 19 Mar 2018 09:21:14 +0100
> +Subject: [PATCH 28/36] crypto: inside-secure - fix typo s/allways/always/ in a
> + define
> +
> +Small cosmetic patch fixing one typo in the
> +EIP197_HIA_DSE_CFG_ALLWAYS_BUFFERABLE macro; it should be _ALWAYS_.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 2 +-
> + drivers/crypto/inside-secure/safexcel.h | 2 +-
> + 2 files changed, 2 insertions(+), 2 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 2f68b4ed5500..cc9d2e9126b4 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -332,7 +332,7 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
> + 	val = EIP197_HIA_DSE_CFG_DIS_DEBUG;
> + 	val |= EIP197_HIA_DxE_CFG_MIN_DATA_SIZE(7) | EIP197_HIA_DxE_CFG_MAX_DATA_SIZE(8);
> + 	val |= EIP197_HIA_DxE_CFG_DATA_CACHE_CTRL(WR_CACHE_3BITS);
> +-	val |= EIP197_HIA_DSE_CFG_ALLWAYS_BUFFERABLE;
> ++	val |= EIP197_HIA_DSE_CFG_ALWAYS_BUFFERABLE;
> + 	/* FIXME: instability issues can occur for EIP97 but disabling it impact
> + 	 * performances.
> + 	 */
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 4efeb0251daf..9ca1654136e0 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -179,7 +179,7 @@
> + #define EIP197_HIA_DxE_CFG_MIN_DATA_SIZE(n)	((n) << 0)
> + #define EIP197_HIA_DxE_CFG_DATA_CACHE_CTRL(n)	(((n) & 0x7) << 4)
> + #define EIP197_HIA_DxE_CFG_MAX_DATA_SIZE(n)	((n) << 8)
> +-#define EIP197_HIA_DSE_CFG_ALLWAYS_BUFFERABLE	GENMASK(15, 14)
> ++#define EIP197_HIA_DSE_CFG_ALWAYS_BUFFERABLE	GENMASK(15, 14)
> + #define EIP197_HIA_DxE_CFG_MIN_CTRL_SIZE(n)	((n) << 16)
> + #define EIP197_HIA_DxE_CFG_CTRL_CACHE_CTRL(n)	(((n) & 0x7) << 20)
> + #define EIP197_HIA_DxE_CFG_MAX_CTRL_SIZE(n)	((n) << 24)
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/628-crypto-inside-secure-fix-a-typo-in-a-register-name.patch b/target/linux/mvebu/patches-4.14/628-crypto-inside-secure-fix-a-typo-in-a-register-name.patch
> new file mode 100644
> index 0000000000..ae2b51d052
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/628-crypto-inside-secure-fix-a-typo-in-a-register-name.patch
> @@ -0,0 +1,45 @@
> +From 5883f9a3c4d207518860ee05ec3c8ba6ffb2454c Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 19 Mar 2018 09:21:15 +0100
> +Subject: [PATCH 29/36] crypto: inside-secure - fix a typo in a register name
> +
> +This patch fixes a typo in the EIP197_HIA_xDR_WR_CTRL_BUG register name,
> +as it should be EIP197_HIA_xDR_WR_CTRL_BUF. This is a cosmetic-only
> +change.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c | 2 +-
> + drivers/crypto/inside-secure/safexcel.h | 2 +-
> + 2 files changed, 2 insertions(+), 2 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index cc9d2e9126b4..f7d7293de699 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -235,7 +235,7 @@ static int safexcel_hw_setup_rdesc_rings(struct safexcel_crypto_priv *priv)
> + 		/* Configure DMA tx control */
> + 		val = EIP197_HIA_xDR_CFG_WR_CACHE(WR_CACHE_3BITS);
> + 		val |= EIP197_HIA_xDR_CFG_RD_CACHE(RD_CACHE_3BITS);
> +-		val |= EIP197_HIA_xDR_WR_RES_BUF | EIP197_HIA_xDR_WR_CTRL_BUG;
> ++		val |= EIP197_HIA_xDR_WR_RES_BUF | EIP197_HIA_xDR_WR_CTRL_BUF;
> + 		writel(val,
> + 		       EIP197_HIA_RDR(priv, i) + EIP197_HIA_xDR_DMA_CFG);
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 9ca1654136e0..295813920618 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -135,7 +135,7 @@
> +
> + /* EIP197_HIA_xDR_DMA_CFG */
> + #define EIP197_HIA_xDR_WR_RES_BUF		BIT(22)
> +-#define EIP197_HIA_xDR_WR_CTRL_BUG		BIT(23)
> ++#define EIP197_HIA_xDR_WR_CTRL_BUF		BIT(23)
> + #define EIP197_HIA_xDR_WR_OWN_BUF		BIT(24)
> + #define EIP197_HIA_xDR_CFG_WR_CACHE(n)		(((n) & 0x7) << 25)
> + #define EIP197_HIA_xDR_CFG_RD_CACHE(n)		(((n) & 0x7) << 29)
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/629-crypto-inside-secure-improve-the-send-error-path.patch b/target/linux/mvebu/patches-4.14/629-crypto-inside-secure-improve-the-send-error-path.patch
> new file mode 100644
> index 0000000000..a5d8fae1c4
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/629-crypto-inside-secure-improve-the-send-error-path.patch
> @@ -0,0 +1,50 @@
> +From 7a3daac97ede7897225c206389230bbcc265e834 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 19 Mar 2018 09:21:16 +0100
> +Subject: [PATCH 30/36] crypto: inside-secure - improve the send error path
> +
> +This patch improves the send error path, as it wasn't handling all error
> +cases. A new label is added, and some of the gotos are updated to point
> +to the right labels, so that the code is more robust to errors.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 7 +++++--
> + 1 file changed, 5 insertions(+), 2 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index cfcae5e51b9d..4ff3f7615b3d 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -281,7 +281,7 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 					   sglen, len, ctx->base.ctxr_dma);
> + 		if (IS_ERR(cdesc)) {
> + 			ret = PTR_ERR(cdesc);
> +-			goto cdesc_rollback;
> ++			goto unmap_sg;
> + 		}
> + 		n_cdesc++;
> +
> +@@ -305,7 +305,7 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 					 DMA_FROM_DEVICE);
> + 	if (dma_mapping_error(priv->dev, req->result_dma)) {
> + 		ret = -EINVAL;
> +-		goto cdesc_rollback;
> ++		goto unmap_sg;
> + 	}
> +
> + 	/* Add a result descriptor */
> +@@ -326,6 +326,9 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
> + 	return 0;
> +
> + unmap_result:
> ++	dma_unmap_single(priv->dev, req->result_dma, req->state_sz,
> ++			 DMA_FROM_DEVICE);
> ++unmap_sg:
> + 	dma_unmap_sg(priv->dev, areq->src, req->nents, DMA_TO_DEVICE);
> + cdesc_rollback:
> + 	for (i = 0; i < n_cdesc; i++)
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/630-crypto-inside-secure-do-not-access-buffers-mapped-to.patch b/target/linux/mvebu/patches-4.14/630-crypto-inside-secure-do-not-access-buffers-mapped-to.patch
> new file mode 100644
> index 0000000000..1f04c8a0dc
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/630-crypto-inside-secure-do-not-access-buffers-mapped-to.patch
> @@ -0,0 +1,46 @@
> +From 89099c389a04fb6d049cdad85cbcd5d5447fb1d1 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 19 Mar 2018 09:21:17 +0100
> +Subject: [PATCH 31/36] crypto: inside-secure - do not access buffers mapped to
> + the device
> +
> +This patch updates the way the digest is copied from the state buffer
> +to the result buffer, so that the copy only happens after the state
> +buffer has been DMA unmapped; otherwise the buffer would still be owned
> +by the device.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 8 ++++----
> + 1 file changed, 4 insertions(+), 4 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 4ff3f7615b3d..573b12e4d9dd 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -156,10 +156,6 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
> + 	safexcel_complete(priv, ring);
> + 	spin_unlock_bh(&priv->ring[ring].egress_lock);
> +
> +-	if (sreq->finish)
> +-		memcpy(areq->result, sreq->state,
> +-		       crypto_ahash_digestsize(ahash));
> +-
> + 	if (sreq->nents) {
> + 		dma_unmap_sg(priv->dev, areq->src, sreq->nents, DMA_TO_DEVICE);
> + 		sreq->nents = 0;
> +@@ -177,6 +173,10 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
> + 		sreq->cache_dma = 0;
> + 	}
> +
> ++	if (sreq->finish)
> ++		memcpy(areq->result, sreq->state,
> ++		       crypto_ahash_digestsize(ahash));
> ++
> + 	cache_len = sreq->len - sreq->processed;
> + 	if (cache_len)
> + 		memcpy(sreq->cache, sreq->cache_next, cache_len);
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/631-crypto-inside-secure-improve-the-skcipher-token.patch b/target/linux/mvebu/patches-4.14/631-crypto-inside-secure-improve-the-skcipher-token.patch
> new file mode 100644
> index 0000000000..7d25a6255e
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/631-crypto-inside-secure-improve-the-skcipher-token.patch
> @@ -0,0 +1,36 @@
> +From 65c1ad099b6fab06cf04e1a0d797f490aa02719b Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 19 Mar 2018 09:21:18 +0100
> +Subject: [PATCH 32/36] crypto: inside-secure - improve the skcipher token
> +
> +The token used for encryption and decryption of skcipher algorithms sets
> +its stat field to "last packet". As it's a cipher-only algorithm, there
> +is no hash operation, and thus the "last hash" bit should be set to tell
> +the internal engine that no hash operation should be performed.
> +
> +This does not fix a bug, but improves the token definition to follow
> +exactly what's advised by the datasheet.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_cipher.c | 3 ++-
> + 1 file changed, 2 insertions(+), 1 deletion(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
> +index 17a7725a6f6d..bafb60505fab 100644
> +--- a/drivers/crypto/inside-secure/safexcel_cipher.c
> ++++ b/drivers/crypto/inside-secure/safexcel_cipher.c
> +@@ -58,7 +58,8 @@ static void safexcel_cipher_token(struct safexcel_cipher_ctx *ctx,
> +
> + 	token[0].opcode = EIP197_TOKEN_OPCODE_DIRECTION;
> + 	token[0].packet_length = length;
> +-	token[0].stat = EIP197_TOKEN_STAT_LAST_PACKET;
> ++	token[0].stat = EIP197_TOKEN_STAT_LAST_PACKET |
> ++			EIP197_TOKEN_STAT_LAST_HASH;
> + 	token[0].instructions = EIP197_TOKEN_INS_LAST |
> + 				EIP197_TOKEN_INS_TYPE_CRYTO |
> + 				EIP197_TOKEN_INS_TYPE_OUTPUT;
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/632-crypto-inside-secure-the-context-ipad-opad-should-us.patch b/target/linux/mvebu/patches-4.14/632-crypto-inside-secure-the-context-ipad-opad-should-us.patch
> new file mode 100644
> index 0000000000..00ca242006
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/632-crypto-inside-secure-the-context-ipad-opad-should-us.patch
> @@ -0,0 +1,42 @@
> +From f8fcb222ce5f865703d89bdc9ebc4b30e6e32110 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 19 Mar 2018 09:21:19 +0100
> +Subject: [PATCH 33/36] crypto: inside-secure - the context ipad/opad should
> + use the state sz
> +
> +This patch uses the state size of the algorithms instead of their
> +digest size to copy the ipad and opad in the context. This doesn't fix
> +anything, as the state and digest sizes are the same for many algorithms,
> +and for all the HMACs currently supported by this driver. However,
> +hmac(sha224) uses the sha224 hash function, which has a different digest
> +and state size. This commit prepares for the addition of such algorithms.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel_hash.c | 8 ++++----
> + 1 file changed, 4 insertions(+), 4 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 573b12e4d9dd..d152f2eb0271 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -120,11 +120,11 @@ static void safexcel_context_control(struct safexcel_ahash_ctx *ctx,
> + 				ctx->base.ctxr->data[i] = cpu_to_le32(req->processed / blocksize);
> + 		}
> + 	} else if (req->digest == CONTEXT_CONTROL_DIGEST_HMAC) {
> +-		cdesc->control_data.control0 |= CONTEXT_CONTROL_SIZE(10);
> ++		cdesc->control_data.control0 |= CONTEXT_CONTROL_SIZE(2 * req->state_sz / sizeof(u32));
> +
> +-		memcpy(ctx->base.ctxr->data, ctx->ipad, digestsize);
> +-		memcpy(ctx->base.ctxr->data + digestsize / sizeof(u32),
> +-		       ctx->opad, digestsize);
> ++		memcpy(ctx->base.ctxr->data, ctx->ipad, req->state_sz);
> ++		memcpy(ctx->base.ctxr->data + req->state_sz / sizeof(u32),
> ++		       ctx->opad, req->state_sz);
> + 	}
> + }
> +
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/633-crypto-inside-secure-hmac-sha256-support.patch b/target/linux/mvebu/patches-4.14/633-crypto-inside-secure-hmac-sha256-support.patch
> new file mode 100644
> index 0000000000..a95b19532c
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/633-crypto-inside-secure-hmac-sha256-support.patch
> @@ -0,0 +1,174 @@
> +From 97eb1ecf57d126f52e1974fb0f9e2c17dc3ea0a0 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 19 Mar 2018 09:21:20 +0100
> +Subject: [PATCH 34/36] crypto: inside-secure - hmac(sha256) support
> +
> +This patch adds the hmac(sha256) support to the Inside Secure
> +cryptographic engine driver.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c      |  3 +-
> + drivers/crypto/inside-secure/safexcel.h      |  1 +
> + drivers/crypto/inside-secure/safexcel_hash.c | 80 +++++++++++++++++++++++++---
> + 3 files changed, 75 insertions(+), 9 deletions(-)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index f7d7293de699..33595f41586f 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -354,7 +354,7 @@ static int safexcel_hw_init(struct safexcel_crypto_priv *priv)
> + 	val |= EIP197_PROTOCOL_ENCRYPT_ONLY | EIP197_PROTOCOL_HASH_ONLY;
> + 	val |= EIP197_ALG_AES_ECB | EIP197_ALG_AES_CBC;
> + 	val |= EIP197_ALG_SHA1 | EIP197_ALG_HMAC_SHA1;
> +-	val |= EIP197_ALG_SHA2;
> ++	val |= EIP197_ALG_SHA2 | EIP197_ALG_HMAC_SHA2;
> + 	writel(val, EIP197_PE(priv) + EIP197_PE_EIP96_FUNCTION_EN);
> +
> + 	/* Command Descriptor Rings prepare */
> +@@ -768,6 +768,7 @@ static struct safexcel_alg_template *safexcel_algs[] = {
> + 	&safexcel_alg_sha224,
> + 	&safexcel_alg_sha256,
> + 	&safexcel_alg_hmac_sha1,
> ++	&safexcel_alg_hmac_sha256,
> + };
> +
> + static int safexcel_register_algorithms(struct safexcel_crypto_priv *priv)
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 295813920618..99e0f32452ff 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -633,5 +633,6 @@ extern struct safexcel_alg_template safexcel_alg_sha1;
> + extern struct safexcel_alg_template safexcel_alg_sha224;
> + extern struct safexcel_alg_template safexcel_alg_sha256;
> + extern struct safexcel_alg_template safexcel_alg_hmac_sha1;
> ++extern struct safexcel_alg_template safexcel_alg_hmac_sha256;
> +
> + #endif
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index d152f2eb0271..2917a902596d 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -22,8 +22,8 @@ struct safexcel_ahash_ctx {
> +
> + 	u32 alg;
> +
> +-	u32 ipad[SHA1_DIGEST_SIZE / sizeof(u32)];
> +-	u32 opad[SHA1_DIGEST_SIZE / sizeof(u32)];
> ++	u32 ipad[SHA256_DIGEST_SIZE / sizeof(u32)];
> ++	u32 opad[SHA256_DIGEST_SIZE / sizeof(u32)];
> + };
> +
> + struct safexcel_ahash_req {
> +@@ -953,20 +953,21 @@ static int safexcel_hmac_setkey(const char *alg, const u8 *key,
> + 	return ret;
> + }
> +
> +-static int safexcel_hmac_sha1_setkey(struct crypto_ahash *tfm, const u8 *key,
> +-				     unsigned int keylen)
> ++static int safexcel_hmac_alg_setkey(struct crypto_ahash *tfm, const u8 *key,
> ++				    unsigned int keylen, const char *alg,
> ++				    unsigned int state_sz)
> + {
> + 	struct safexcel_ahash_ctx *ctx = crypto_tfm_ctx(crypto_ahash_tfm(tfm));
> + 	struct safexcel_crypto_priv *priv = ctx->priv;
> + 	struct safexcel_ahash_export_state istate, ostate;
> + 	int ret, i;
> +
> +-	ret = safexcel_hmac_setkey("safexcel-sha1", key, keylen, &istate, &ostate);
> ++	ret = safexcel_hmac_setkey(alg, key, keylen, &istate, &ostate);
> + 	if (ret)
> + 		return ret;
> +
> + 	if (priv->version == EIP197 && ctx->base.ctxr) {
> +-		for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(u32); i++) {
> ++		for (i = 0; i < state_sz / sizeof(u32); i++) {
> + 			if (ctx->ipad[i] != le32_to_cpu(istate.state[i]) ||
> + 			    ctx->opad[i] != le32_to_cpu(ostate.state[i])) {
> + 				ctx->base.needs_inv = true;
> +@@ -975,12 +976,19 @@ static int safexcel_hmac_sha1_setkey(struct crypto_ahash *tfm, const u8 *key,
> + 		}
> + 	}
> +
> +-	memcpy(ctx->ipad, &istate.state, SHA1_DIGEST_SIZE);
> +-	memcpy(ctx->opad, &ostate.state, SHA1_DIGEST_SIZE);
> ++	memcpy(ctx->ipad, &istate.state, state_sz);
> ++	memcpy(ctx->opad, &ostate.state, state_sz);
> +
> + 	return 0;
> + }
> +
> ++static int safexcel_hmac_sha1_setkey(struct crypto_ahash *tfm, const u8 *key,
> ++				     unsigned int keylen)
> ++{
> ++	return safexcel_hmac_alg_setkey(tfm, key, keylen, "safexcel-sha1",
> ++					SHA1_DIGEST_SIZE);
> ++}
> ++
> + struct safexcel_alg_template safexcel_alg_hmac_sha1 = {
> + 	.type = SAFEXCEL_ALG_TYPE_AHASH,
> + 	.alg.ahash = {
> +@@ -1134,3 +1142,59 @@ struct safexcel_alg_template safexcel_alg_sha224 = {
> + 		},
> + 	},
> + };
> ++
> ++static int safexcel_hmac_sha256_setkey(struct crypto_ahash *tfm, const u8 *key,
> ++				     unsigned int keylen)
> ++{
> ++	return safexcel_hmac_alg_setkey(tfm, key, keylen, "safexcel-sha256",
> ++					SHA256_DIGEST_SIZE);
> ++}
> ++
> ++static int safexcel_hmac_sha256_init(struct ahash_request *areq)
> ++{
> ++	struct safexcel_ahash_req *req = ahash_request_ctx(areq);
> ++
> ++	safexcel_sha256_init(areq);
> ++	req->digest = CONTEXT_CONTROL_DIGEST_HMAC;
> ++	return 0;
> ++}
> ++
> ++static int safexcel_hmac_sha256_digest(struct ahash_request *areq)
> ++{
> ++	int ret = safexcel_hmac_sha256_init(areq);
> ++
> ++	if (ret)
> ++		return ret;
> ++
> ++	return safexcel_ahash_finup(areq);
> ++}
> ++
> ++struct safexcel_alg_template safexcel_alg_hmac_sha256 = {
> ++	.type = SAFEXCEL_ALG_TYPE_AHASH,
> ++	.alg.ahash = {
> ++		.init = safexcel_hmac_sha256_init,
> ++		.update = safexcel_ahash_update,
> ++		.final = safexcel_ahash_final,
> ++		.finup = safexcel_ahash_finup,
> ++		.digest = safexcel_hmac_sha256_digest,
> ++		.setkey = safexcel_hmac_sha256_setkey,
> ++		.export = safexcel_ahash_export,
> ++		.import = safexcel_ahash_import,
> ++		.halg = {
> ++			.digestsize = SHA256_DIGEST_SIZE,
> ++			.statesize = sizeof(struct safexcel_ahash_export_state),
> ++			.base = {
> ++				.cra_name = "hmac(sha256)",
> ++				.cra_driver_name = "safexcel-hmac-sha256",
> ++				.cra_priority = 300,
> ++				.cra_flags = CRYPTO_ALG_ASYNC |
> ++					     CRYPTO_ALG_KERN_DRIVER_ONLY,
> ++				.cra_blocksize = SHA256_BLOCK_SIZE,
> ++				.cra_ctxsize = sizeof(struct safexcel_ahash_ctx),
> ++				.cra_init = safexcel_ahash_cra_init,
> ++				.cra_exit = safexcel_ahash_cra_exit,
> ++				.cra_module = THIS_MODULE,
> ++			},
> ++		},
> ++	},
> ++};
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/634-crypto-inside-secure-hmac-sha224-support.patch b/target/linux/mvebu/patches-4.14/634-crypto-inside-secure-hmac-sha224-support.patch
> new file mode 100644
> index 0000000000..23ed568f9d
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/634-crypto-inside-secure-hmac-sha224-support.patch
> @@ -0,0 +1,110 @@
> +From 3fcdf8383e1154e89056636a57124401df552e29 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at bootlin.com>
> +Date: Mon, 19 Mar 2018 09:21:21 +0100
> +Subject: [PATCH 35/36] crypto: inside-secure - hmac(sha224) support
> +
> +This patch adds the hmac(sha224) support to the Inside Secure
> +cryptographic engine driver.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at bootlin.com>
> +Signed-off-by: Herbert Xu <herbert at gondor.apana.org.au>
> +---
> + drivers/crypto/inside-secure/safexcel.c      |  1 +
> + drivers/crypto/inside-secure/safexcel.h      |  1 +
> + drivers/crypto/inside-secure/safexcel_hash.c | 56 ++++++++++++++++++++++++++++
> + 3 files changed, 58 insertions(+)
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
> +index 33595f41586f..d4a81be0d7d2 100644
> +--- a/drivers/crypto/inside-secure/safexcel.c
> ++++ b/drivers/crypto/inside-secure/safexcel.c
> +@@ -768,6 +768,7 @@ static struct safexcel_alg_template *safexcel_algs[] = {
> + 	&safexcel_alg_sha224,
> + 	&safexcel_alg_sha256,
> + 	&safexcel_alg_hmac_sha1,
> ++	&safexcel_alg_hmac_sha224,
> + 	&safexcel_alg_hmac_sha256,
> + };
> +
> +diff --git a/drivers/crypto/inside-secure/safexcel.h b/drivers/crypto/inside-secure/safexcel.h
> +index 99e0f32452ff..b470a849721f 100644
> +--- a/drivers/crypto/inside-secure/safexcel.h
> ++++ b/drivers/crypto/inside-secure/safexcel.h
> +@@ -633,6 +633,7 @@ extern struct safexcel_alg_template safexcel_alg_sha1;
> + extern struct safexcel_alg_template safexcel_alg_sha224;
> + extern struct safexcel_alg_template safexcel_alg_sha256;
> + extern struct safexcel_alg_template safexcel_alg_hmac_sha1;
> ++extern struct safexcel_alg_template safexcel_alg_hmac_sha224;
> + extern struct safexcel_alg_template safexcel_alg_hmac_sha256;
> +
> + #endif
> +diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
> +index 2917a902596d..d9ddf776c799 100644
> +--- a/drivers/crypto/inside-secure/safexcel_hash.c
> ++++ b/drivers/crypto/inside-secure/safexcel_hash.c
> +@@ -1143,6 +1143,62 @@ struct safexcel_alg_template safexcel_alg_sha224 = {
> + 	},
> + };
> +
> ++static int safexcel_hmac_sha224_setkey(struct crypto_ahash *tfm, const u8 *key,
> ++				       unsigned int keylen)
> ++{
> ++	return safexcel_hmac_alg_setkey(tfm, key, keylen, "safexcel-sha224",
> ++					SHA256_DIGEST_SIZE);
> ++}
> ++
> ++static int safexcel_hmac_sha224_init(struct ahash_request *areq)
> ++{
> ++	struct safexcel_ahash_req *req = ahash_request_ctx(areq);
> ++
> ++	safexcel_sha224_init(areq);
> ++	req->digest = CONTEXT_CONTROL_DIGEST_HMAC;
> ++	return 0;
> ++}
> ++
> ++static int safexcel_hmac_sha224_digest(struct ahash_request *areq)
> ++{
> ++	int ret = safexcel_hmac_sha224_init(areq);
> ++
> ++	if (ret)
> ++		return ret;
> ++
> ++	return safexcel_ahash_finup(areq);
> ++}
> ++
> ++struct safexcel_alg_template safexcel_alg_hmac_sha224 = {
> ++	.type = SAFEXCEL_ALG_TYPE_AHASH,
> ++	.alg.ahash = {
> ++		.init = safexcel_hmac_sha224_init,
> ++		.update = safexcel_ahash_update,
> ++		.final = safexcel_ahash_final,
> ++		.finup = safexcel_ahash_finup,
> ++		.digest = safexcel_hmac_sha224_digest,
> ++		.setkey = safexcel_hmac_sha224_setkey,
> ++		.export = safexcel_ahash_export,
> ++		.import = safexcel_ahash_import,
> ++		.halg = {
> ++			.digestsize = SHA224_DIGEST_SIZE,
> ++			.statesize = sizeof(struct safexcel_ahash_export_state),
> ++			.base = {
> ++				.cra_name = "hmac(sha224)",
> ++				.cra_driver_name = "safexcel-hmac-sha224",
> ++				.cra_priority = 300,
> ++				.cra_flags = CRYPTO_ALG_ASYNC |
> ++					     CRYPTO_ALG_KERN_DRIVER_ONLY,
> ++				.cra_blocksize = SHA224_BLOCK_SIZE,
> ++				.cra_ctxsize = sizeof(struct safexcel_ahash_ctx),
> ++				.cra_init = safexcel_ahash_cra_init,
> ++				.cra_exit = safexcel_ahash_cra_exit,
> ++				.cra_module = THIS_MODULE,
> ++			},
> ++		},
> ++	},
> ++};
> ++
> + static int safexcel_hmac_sha256_setkey(struct crypto_ahash *tfm, const u8 *key,
> + 				     unsigned int keylen)
> + {
> +--
> +2.16.4
> +
> diff --git a/target/linux/mvebu/patches-4.14/635-arm64-dts-marvell-armada-37xx-add-a-crypto-node.patch b/target/linux/mvebu/patches-4.14/635-arm64-dts-marvell-armada-37xx-add-a-crypto-node.patch
> new file mode 100644
> index 0000000000..da41aa2a02
> --- /dev/null
> +++ b/target/linux/mvebu/patches-4.14/635-arm64-dts-marvell-armada-37xx-add-a-crypto-node.patch
> @@ -0,0 +1,42 @@
> +From d2c023f2d1c7158ea315853a7b1e175c75641ca6 Mon Sep 17 00:00:00 2001
> +From: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Date: Tue, 26 Dec 2017 17:16:53 +0100
> +Subject: [PATCH 36/36] arm64: dts: marvell: armada-37xx: add a crypto node
> +
> +This patch adds a crypto node describing the EIP97 engine found in
> +Armada 37xx SoCs. The cryptographic engine is enabled by default.
> +
> +Signed-off-by: Antoine Tenart <antoine.tenart at free-electrons.com>
> +Signed-off-by: Gregory CLEMENT <gregory.clement at free-electrons.com>
> +---
> + arch/arm64/boot/dts/marvell/armada-37xx.dtsi | 14 ++++++++++++++
> + 1 file changed, 14 insertions(+)
> +
> +diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
> +index a5fe6c60cd0a..8cd43ce38571 100644
> +--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
> ++++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
> +@@ -302,6 +302,20 @@
> + 				};
> + 			};
> +
> ++			crypto: crypto at 90000 {
> ++				compatible = "inside-secure,safexcel-eip97";
> ++				reg = <0x90000 0x20000>;
> ++				interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>,
> ++					     <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>,
> ++					     <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>,
> ++					     <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>,
> ++					     <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>,
> ++					     <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>;
> ++				interrupt-names = "mem", "ring0", "ring1",
> ++						  "ring2", "ring3", "eip";
> ++				clocks = <&nb_periph_clk 15>;
> ++			};
> ++
> + 			sdhci1: sdhci at d0000 {
> + 				compatible = "marvell,armada-3700-sdhci",
> + 					     "marvell,sdhci-xenon";
> +--
> +2.16.4
> +
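
[Editorial aside, not part of Marek's series.] The setkey helpers in the quoted patches hand `safexcel_hmac_alg_setkey()` the name of the driver's own hash ("safexcel-sha1", "safexcel-sha224", ...) plus a state size, which is used to precompute the HMAC inner and outer hash states (the `istate`/`ostate` copied into `ctx->ipad`/`ctx->opad` above). The math being precomputed is the standard RFC 2104 construction; a minimal Python sketch, checked against the stdlib `hmac` module:

```python
import hashlib
import hmac

BLOCK_SIZE = 64  # SHA-224/SHA-256 block size in bytes

def hmac_sha224(key: bytes, msg: bytes) -> bytes:
    # Keys longer than one block are first hashed down, per RFC 2104.
    if len(key) > BLOCK_SIZE:
        key = hashlib.sha224(key).digest()
    key = key.ljust(BLOCK_SIZE, b"\x00")

    # The driver's setkey path computes these two key-derived pads once
    # and caches the resulting hash states; here we redo the work per
    # call for clarity.
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)

    inner = hashlib.sha224(ipad + msg).digest()
    return hashlib.sha224(opad + inner).digest()

reference = hmac.new(b"secret", b"hello", hashlib.sha224).digest()
assert hmac_sha224(b"secret", b"hello") == reference
```

Caching the two post-pad hash states is why the engine only needs one extra compression per HMAC direction at request time, and why sha224's setkey passes `SHA256_DIGEST_SIZE`: the internal SHA-224 state is the same size as SHA-256's.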

