DESIGNS, CODES AND CRYPTOGRAPHY

Construction of LDPC convolutional codes via difference triangle sets
Alfarano GN, Lieb J and Rosenthal J
In this paper, we provide a construction of LDPC convolutional codes over arbitrary finite fields which generalizes the work of Robinson and Bernstein and the later work of Tong. The sets of integers forming a (weak) difference triangle set are used as supports of some columns of the sliding parity-check matrix of a convolutional code, and the parameters of the convolutional code are related to the parameters of the underlying difference triangle set. In particular, a relation between the free distance of the code and one of the parameters of the difference triangle set is established, as well as a relation between the degree of the code and the scope of the difference triangle set. Moreover, we show that some conditions on the weak difference triangle set ensure that the Tanner graph associated to the sliding parity-check matrix of the convolutional code is free from cycles of a given length not satisfying the full rank condition over any finite field. Finally, we relax these conditions and provide a lower bound on the field size, depending on the parity of the cycle-length parameter, that is sufficient to still avoid such cycles. This is important for improving the performance of a code and for avoiding the presence of low-weight codewords and absorbing sets.
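As a rough illustration of the combinatorial object involved (a toy sketch, not the authors' construction; the blocks, the window count and the "scope" below are hypothetical choices), the following Python snippet checks the distinct-difference property of a small collection of blocks and then uses each block as the support of a column in a banded, sliding-style binary parity-check matrix.

```python
from itertools import combinations

import numpy as np


def all_differences(block):
    """All pairwise differences |a - b| of a block of integers."""
    return [abs(a - b) for a, b in combinations(sorted(block), 2)]


def is_difference_triangle_set(blocks):
    """Check that the pairwise differences, across all blocks, are distinct."""
    diffs = [d for block in blocks for d in all_differences(block)]
    return len(diffs) == len(set(diffs))


# Hypothetical toy blocks: their differences {1, 3, 4} and {2, 5, 7} are all distinct.
blocks = [[0, 1, 4], [0, 2, 7]]
assert is_difference_triangle_set(blocks)

# Use each block as the support (nonzero row positions) of one column, then repeat
# the pattern with a vertical shift per time instant to obtain a sliding-style
# binary parity-check matrix.
scope = max(max(b) for b in blocks)      # largest element ("scope") of the toy set
windows = 4                              # number of time shifts shown
H = np.zeros((scope + windows, windows * len(blocks)), dtype=int)
for t in range(windows):
    for j, block in enumerate(blocks):
        for r in block:
            H[t + r, t * len(blocks) + j] = 1
print(H)
```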
Functional encryption for set intersection in the multi-client setting
Lee K and Seo M
In functional encryption for set intersection (FE-SI) in the multi-client setting, each client encrypts a set associated with a time period by using its own encryption key and uploads the ciphertext to a cloud server; the cloud server, upon receiving from a trusted center a function key for a pair of client indexes, can then compute the intersection of the two clients' ciphertexts. In this paper, we first define the concept of FE-SI suitable for the multi-client setting. Then, we propose an efficient FE-SI scheme in asymmetric bilinear groups and prove the static security of our scheme under newly introduced assumptions. In our FE-SI scheme, a ciphertext consists of group elements, a function key consists of a single group element, and the complexity of the decryption algorithm is governed by the size of a set in the ciphertext. Next, we propose another FE-SI scheme with time-constrained keys, in which function keys are valid only for a specified time period, and prove the static security of this scheme as well. Finally, we prove that the two newly introduced assumptions hold in the generic group model to provide confidence in them.
Moderate-density parity-check codes from projective bundles
Bariffi J, Mattheus S, Neri A and Rosenthal J
New constructions for moderate-density parity-check (MDPC) codes using finite geometry are proposed. We design a parity-check matrix for the main family of binary codes as the concatenation of two matrices: the incidence matrix between points and lines of the Desarguesian projective plane and the incidence matrix between points and ovals of a projective bundle. A projective bundle is a special collection of ovals which pairwise meet in a unique point. We determine the minimum distance and the dimension of these codes, and we show that they have a natural quasi-cyclic structure. We consider alternative constructions based on an incidence matrix of a Desarguesian projective plane and compare their error-correction performance under a modification of Gallager's bit-flipping decoding algorithm. In this setting, our codes have the best possible error-correction performance after one round of bit-flipping decoding, given the parameters of the code's parity-check matrix.
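For readers unfamiliar with the decoder mentioned above, here is a minimal Python sketch of one round of plain Gallager bit flipping on a random toy parity-check matrix; it is not the paper's modified algorithm, and the matrix is not an MDPC matrix built from a projective bundle.

```python
import numpy as np


def bit_flip_round(H, y):
    """One round of plain Gallager bit flipping: flip every bit that participates in
    more unsatisfied parity checks than satisfied ones."""
    syndrome = (H @ y) % 2              # which parity checks are unsatisfied
    counts = H.T @ syndrome             # per bit: number of unsatisfied checks touched
    degrees = H.sum(axis=0)             # per bit: total number of checks touched
    flips = counts > degrees / 2
    return (y + flips.astype(int)) % 2


# Toy example: a random sparse parity-check matrix, the all-zero codeword sent,
# and a single bit flipped by the channel.
rng = np.random.default_rng(0)
H = (rng.random((6, 12)) < 0.3).astype(int)
received = np.zeros(12, dtype=int)
received[3] ^= 1
print(bit_flip_round(H, received))
```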
A bit-vector differential model for the modular addition by a constant and its applications to differential and impossible-differential cryptanalysis
Azimi SA, Ranea A, Salmasizadeh M, Mohajeri J, Aref MR and Rijmen V
ARX algorithms are a class of symmetric-key algorithms constructed by Addition, Rotation, and XOR. To evaluate the resistance of an ARX cipher against differential and impossible-differential cryptanalysis, recent automated methods employ constraint satisfaction solvers to search for optimal characteristics or impossible differentials. The main difficulty in formulating this search is finding the differential models of the non-linear operations. While an efficient bit-vector differential model was obtained for the modular addition with two variable inputs, no differential model for the modular addition by a constant has been proposed so far, preventing ARX ciphers that include this operation from being evaluated with automated methods. In this paper, we present the first bit-vector differential model for the n-bit modular addition by a constant input. Our model contains basic bit-vector constraints and describes the binary logarithm of the differential probability. We describe an SMT-based automated method that includes our model to search for differential characteristics of ARX ciphers including constant additions. We also introduce a new automated method for obtaining impossible differentials in which we do not search over a small pre-defined set of differences, such as low-weight differences, but let the SMT solver search through the space of differences. Moreover, we implement both methods in our open-source tool ArxPy to find characteristics and impossible differentials of ARX ciphers with constant additions in a fully automated way. As examples, we provide related-key impossible differentials and differential characteristics of TEA, XTEA, HIGHT, LEA, SHACAL-1, and SHACAL-2, which achieve better results compared to previous works.
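The quantity being modelled can be checked by brute force for very small word sizes; the Python sketch below counts, for a hypothetical word size and constant, how often an input XOR difference u propagates to an output difference v through constant addition. The paper's contribution, a bit-vector formula usable inside SMT solvers, is not reproduced here.

```python
def diff_prob_const_add(n, c, u, v):
    """Exhaustive XOR-differential probability of y = (x + c) mod 2^n:
    the fraction of inputs x for which ((x ^ u) + c) ^ (x + c) equals v."""
    mask = (1 << n) - 1
    count = sum(1 for x in range(1 << n)
                if ((((x ^ u) + c) & mask) ^ ((x + c) & mask)) == v)
    return count / (1 << n)


# Hypothetical toy parameters: 8-bit words and constant 0x2B.
print(diff_prob_const_add(n=8, c=0x2B, u=0x01, v=0x01))
```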
New nonbinary code bounds based on divisibility arguments
Polak SC
For integers q, n and d, let A_q(n, d) be the maximum size of a code of length n over an alphabet of size q with minimum distance at least d. We give a divisibility argument resulting in new upper bounds on A_q(n, d) for certain parameters, which in turn imply further new upper bounds. Furthermore, we prove that there is a 1-1-correspondence between symmetric nets (which are certain designs) and codes of a prescribed size and minimum distance. We derive additional new upper bounds from these 'symmetric net' codes.
Knot theory and error-correcting codes
Kılıç AB, Nijsten A, Pellikaan R and Ravagnani A
This paper builds a novel bridge between algebraic coding theory and mathematical knot theory, with applications in both directions. We give methods to construct error-correcting codes starting from the colorings of a knot, describing through a series of results how the properties of the knot translate into code parameters. We show that knots can be used to obtain error-correcting codes with prescribed parameters and an efficient decoding algorithm.
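One standard way a knot yields a code, which may differ in details from the authors' construction, is via Fox colorings: the p-colorings of a diagram form a linear code over Z_p, namely the kernel of the crossing-relation matrix. A small Python sketch for the trefoil (with its usual three crossing relations) follows.

```python
from itertools import product

import numpy as np


def coloring_code(crossing_matrix, p):
    """All Fox p-colorings of a knot diagram: vectors c with M c = 0 (mod p), where
    row i of M encodes the relation 2*over - under1 - under2 = 0 at crossing i.
    The colorings form a linear code over Z_p (found here by brute force)."""
    M = np.array(crossing_matrix)
    n_arcs = M.shape[1]
    return [c for c in product(range(p), repeat=n_arcs)
            if not (M @ np.array(c) % p).any()]


# Trefoil diagram with arcs a, b, c and one relation per crossing.
trefoil = [[2, -1, -1],
           [-1, 2, -1],
           [-1, -1, 2]]

print(len(coloring_code(trefoil, 3)))   # 9 colorings: a 2-dimensional code over Z_3
print(len(coloring_code(trefoil, 5)))   # 5 colorings: only the constant (trivial) ones
```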
External codes for multiple unicast networks via interference alignment
Kschischang FR, Manganiello F, Ravagnani A and Savary K
We introduce a formal framework to study the multiple unicast problem for a coded network in which the network code is linear over a finite field and fixed. We show that the problem corresponds to an interference alignment problem over a finite field. In this context, we establish an outer bound for the achievable rate region and provide examples of networks where the bound is sharp. We finally give evidence of the crucial role played by the field characteristic in the problem.
Uniqueness of codes using semidefinite programming
Brouwer AE and Polak SC
For integers n, d and w, let A(n, d, w) denote the maximum size of a binary code of word length n, minimum distance d and constant weight w. Schrijver recently determined one of these maxima using semidefinite programming, and the second author determined two more. Here we show uniqueness of the codes achieving these bounds. Let A(n, d) denote the maximum size of a binary code of word length n and minimum distance d. Gijswijt et al. determined a further such maximum. We show that there are several nonisomorphic codes achieving this bound, and classify all such codes with all distances divisible by 4.
An algebraic approach to circulant column parity mixers
Subroto RC
Circulant column parity mixers (CCPMs) are a particular type of linear maps, used as the mixing layer in permutation-based cryptographic primitives like Keccak-f (SHA-3) and Xoodoo. Although they have been applied successfully, not much is known about their algebraic properties: known results are limited to the invertibility of CCPMs and the fact that the set of invertible CCPMs forms a group. A possible explanation is the complexity of describing CCPMs in terms of linear algebra. In this paper, we introduce a new approach to studying CCPMs using module theory from commutative algebra. We show that many interesting algebraic properties can be deduced using this approach, and that known results on CCPMs resurface as trivial consequences of module-theoretic concepts. We also show how this approach can be used to study the linear layer of Xoodoo and other linear maps with a similar structure, which we call DCD-compositions. Using this approach, we prove that every DCD-composition whose underlying vector space has the same dimension as that of Xoodoo has a low order. This provides a solid mathematical explanation for the low order of the linear layer of Xoodoo, which equals 32. Using the module-theoretic approach, we also design a DCD-composition of higher order in a different dimension.
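To fix ideas, here is a minimal Python sketch of a generic column parity mixer in the sense used for Keccak's theta step: the column-parity vector of the state is multiplied by a circulant matrix and added back to every row. The state size and the circulant pattern are hypothetical toy choices, not those of Keccak or Xoodoo.

```python
import numpy as np


def circulant(first_row):
    """Binary circulant matrix whose rows are the cyclic shifts of `first_row`."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))]) % 2


def column_parity_mix(X, Z):
    """Column parity mixer over GF(2): compute the column-parity vector of the state
    X, multiply it by the circulant matrix Z, and add the result to every row of X."""
    p = X.sum(axis=0) % 2           # column parities
    theta = (p @ Z) % 2             # mixed parities
    return (X + theta) % 2          # broadcast addition to every row


# Hypothetical toy state (3 rows x 8 columns) and circulant pattern.
X = np.eye(3, 8, dtype=int)
Z = circulant(np.array([0, 1, 0, 0, 0, 0, 1, 0]))
print(column_parity_mix(X, Z))
```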
CSI-Otter: isogeny-based (partially) blind signatures from the class group action with a twist
Katsumata S, Lai YF, LeGrow JT and Qin L
In this paper, we construct the first provably-secure isogeny-based (partially) blind signature scheme. While at a high level the scheme resembles the Schnorr blind signature, our work does not directly follow from that construction, since isogenies do not offer as rich an algebraic structure. Specifically, our protocol does not fit into the abstraction introduced by Hauck, Kiltz, and Loss (EUROCRYPT'19), which was used to generically construct Schnorr-like blind signatures based on modules such as classical groups and lattices. Consequently, our scheme is provably secure in the random oracle model (ROM) against poly-logarithmically-many concurrent sessions assuming the subexponential hardness of the group action inverse problem. In more detail, our blind signature exploits the quadratic twist of an elliptic curve in an essential way to endow isogenies with a strictly richer structure than abstract group actions (but still more restrictive than modules). The basic scheme has public key size 128 B and signature size 8 KB under the CSIDH-512 parameter set; these are the smallest among all provably secure post-quantum blind signatures. Relying on a new variant of the group action inverse problem, we can halve the signature size to 4 KB while increasing the public key size to 512 B. We provide a preliminary cryptanalysis of this new variant and show that, for certain parameter settings, it is essentially as secure as the standard group action inverse problem. Finally, we show a novel way to turn our blind signature into a partially blind signature, where we deviate from prior methods since they require hashing into the set of public keys while hiding the corresponding secret key; constructing such a hash function in the isogeny setting remains an open problem.
Conjunctive hierarchical secret sharing by finite geometry
Gyarmati M, Ligeti P, Sziklai P and Takáts M
Secret sharing is a general method for distributing sensitive data among the participants of a system such that only a collection of predefined qualified coalitions can recover the secret data. One of the most widely used special cases is threshold secret sharing, where every subset of participants of size above a given number is qualified. In this short note, we propose a general construction for a generalized threshold scheme, called conjunctive hierarchical secret sharing, where the participants are divided into disjoint levels of hierarchy and there is a separate threshold for each level, all of which must be satisfied by qualified sets. The construction is the first method for arbitrary parameters based on finite geometry arguments, and it improves on the size of the underlying finite field compared with the existing results using polynomials.
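The threshold special case mentioned above, in its classical Shamir form over a prime field, can be sketched in a few lines of Python; the paper's hierarchical, finite-geometry construction is not reproduced here, and the prime below is an arbitrary illustrative choice.

```python
import random

P = 2 ** 61 - 1                     # a prime modulus (an arbitrary illustrative choice)


def share(secret, t, n):
    """Split `secret` into n shares such that any t of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]


def reconstruct(shares):
    """Lagrange interpolation at 0 from (at least) t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret


shares = share(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42
print(shares)
```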
Guessing less and better: improved attacks on GIFT-64
Canale F and Naya-Plasencia M
GIFT-64 is a block cipher that has received a lot of attention from the community since its proposal in 2017. The attack reaching the highest number of rounds is a differential related-key attack on 26 rounds. We studied this attack, in particular with respect to a recent generic framework for improving key recovery, and we realised that this framework, combined with an efficient parallel guessing of interesting subsets of the key and a subsequent merging of the lists of partial solutions, can improve the complexity of the attack. We propose two different trade-offs as a result of the improved key recovery. We believe that the techniques are quite generic and that it is possible to apply them to improve other differential attacks.
New results on non-disjoint and classical strong external difference families
Huczynska S and Hume S
Classical strong external difference families (SEDFs) are much-studied combinatorial structures motivated by information security applications; it is conjectured that only one classical abelian SEDF with more than two sets exists. Recently, non-disjoint SEDFs were introduced, and it was shown that families of these exist with arbitrarily many sets. Using a sequence-based framework, we present constructions for both classical and non-disjoint SEDFs which encompass all known non-cyclotomic examples of either type, plus many new examples. Moreover, we introduce a range of new external difference structures (allowing set sizes to vary, and sets to be replaced by multisets) in both the classical and non-disjoint case, and show how these may be applied to various communications applications.
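A brute-force check of the defining property may help: the Python sketch below verifies that, for every set in the family, the external differences into that set cover each nonzero group element equally often, and tests this on a small illustrative pair of sets in Z_5.

```python
from collections import Counter
from itertools import product


def is_sedf(sets, n):
    """Classical strong external difference property in Z_n: for every index i, the
    external differences x - y (x in some other set, y in the i-th set) cover every
    nonzero element of Z_n the same number of times."""
    lambdas = set()
    for i, A in enumerate(sets):
        diffs = Counter((x - y) % n
                        for j, B in enumerate(sets) if j != i
                        for x, y in product(B, A))
        if 0 in diffs or len(diffs) != n - 1 or len(set(diffs.values())) != 1:
            return False
        lambdas.update(diffs.values())
    return len(lambdas) == 1


# Small illustrative example: {0, 1} and {2, 4} form a (5, 2, 2, 1)-SEDF in Z_5.
print(is_sedf([{0, 1}, {2, 4}], 5))
```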
Low-rank parity-check codes over Galois rings
Renner J, Neri A and Puchinger S
Low-rank parity-check (LRPC) codes are rank-metric codes over finite fields which were proposed by Gaborit et al. (Proceedings of the workshop on coding and cryptography WCC, vol 2013, 2013) for cryptographic applications. Inspired by a recent adaptation of Gabidulin codes to certain finite rings by Kamche et al. (IEEE Trans Inf Theory 65(12):7718-7735, 2019), we define and study LRPC codes over Galois rings, a wide class of finite commutative rings. We give a decoding algorithm similar to Gaborit et al.'s decoder, based on simple linear-algebraic operations. We derive an upper bound on the failure probability of the decoder, which is significantly more involved than in the case of finite fields. The bound depends only on the rank of an error, i.e., it is independent of its free rank. Further, we analyze the complexity of the decoder. We find that there is a class of LRPC codes over a Galois ring that can decode roughly the same number of errors as a Gabidulin code with the same code parameters, but faster than the currently best decoder for Gabidulin codes. However, the price one needs to pay is a small failure probability, which we can bound from above.
Bounds on data limits for all-to-all comparison from combinatorial designs
Hall J, Horsley D and Stinson DR
In situations where every item in a data set must be compared with every other item in the set, it may be desirable to store the data across a number of machines in such a way that any two data items are stored together on at least one machine. One way to evaluate the efficiency of such a distribution is by the largest fraction of the data it requires to be allocated to any one machine. The all-to-all comparison (ATAC) data limit is a measure of the minimum of this value across all possible such distributions. In this paper we further the study of ATAC data limits. We begin by investigating the data limits achievable using various classes of combinatorial designs; in particular, we examine the cases of transversal designs and projective Hjelmslev planes. We then observe relationships between data limits and previously studied combinatorial parameters. Finally, we prove a lower bound on the ATAC data limit that improves on one of Hall, Kelly and Tian, and examine the special cases where equality in this bound is possible.
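As a toy instance of such a distribution (with the Fano plane as a hypothetical choice of design), the Python sketch below stores 7 items on 7 machines so that every pair of items shares a machine, and reports the largest fraction of the data held by any single machine.

```python
from itertools import combinations

# Lines of the Fano plane on points 0..6; every pair of points lies on exactly one line.
fano_lines = [{0, 1, 2}, {0, 3, 4}, {0, 5, 6}, {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5}]

# Treat each line as the set of data items stored on one machine: every pair of the
# 7 items is then stored together on some machine.
assert all(any(pair <= line for line in fano_lines)
           for pair in map(set, combinations(range(7), 2)))

# The quantity of interest for this toy distribution: the largest fraction of the
# data held by any single machine (here 3/7).
print(max(len(line) for line in fano_lines) / 7)
```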
Caps and progression-free sets in Z_m^n
Elsholtz C and Pach PP
We study progression-free sets in the abelian groups Z_m^n. Let r_k(Z_m^n) denote the maximal size of a set in Z_m^n that does not contain a proper arithmetic progression of length k. We give lower bound constructions, including improved bounds when m is even, and a further strengthening when the progression-free set satisfies a technical condition which dominates the problem at least in low dimension. We present a number of new methods which yield lower bounds for several infinite families of parameters m, k and n, and we determine exact values of r_k(Z_m^n) in a number of small cases. With regard to affine caps, i.e. sets without 3 points on a line, the new methods asymptotically improve the known lower bounds in two settings; the improvement modulo 5 appears to be the first asymptotic improvement for such caps over a tensor lifting from dimension 6 (see Edel, in Des Codes Cryptogr 31:5-14, 2004).
Algebraic properties of the maps χ_n
Schoone J and Daemen J
The Boolean map χ_n defined coordinate-wise by x_i + (x_{i+1} + 1)x_{i+2} over F_2 (with indices taken modulo n) is used in various permutations that are part of cryptographic schemes, e.g., Keccak-f (the SHA-3 permutation), ASCON (the winner of the NIST Lightweight competition), Xoodoo, Rasta and Subterranean (2.0). In this paper, we study various algebraic properties of this map. We consider χ_n (through a vectorial isomorphism) as a univariate polynomial and determine exactly for which n it is a power function. We furthermore compute bounds on the sparsity and degree of these univariate polynomials, and the number of different univariate representations. Secondly, we compute the number of monomials of given degree in the inverse of χ_n (when it exists); this number coincides with binomial coefficients. Lastly, we consider χ_n as a polynomial map, to study whether the same rule gives a bijection on field extensions. We show that this is not the case for extensions whose degree is divisible by two or three. Based on these results, we conjecture that this rule does not give a bijection on any such extension field.
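Assuming the map in question is the chi map used in Keccak, ASCON and Xoodoo (a hedged reading of the abstract above), the following Python sketch implements it and checks by brute force, for small n, that it is a bijection exactly when n is odd.

```python
from itertools import product


def chi(bits):
    """y_i = x_i + (x_{i+1} + 1) * x_{i+2} over GF(2), indices taken modulo n."""
    n = len(bits)
    return tuple(bits[i] ^ ((bits[(i + 1) % n] ^ 1) & bits[(i + 2) % n])
                 for i in range(n))


# Brute-force bijectivity check for small n: the map is invertible exactly for odd n.
for n in range(2, 9):
    images = {chi(x) for x in product((0, 1), repeat=n)}
    print(n, "bijective" if len(images) == 2 ** n else "not bijective")
```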
Analysing and exploiting the Mantin biases in RC4
Bricout R, Murphy S, Paterson KG and van der Merwe T
We explore the use of the Mantin biases (Mantin, Eurocrypt 2005) to recover plaintexts from RC4-encrypted traffic. We provide a more fine-grained analysis of these biases than in Mantin's original work, and we show that the original analysis was in fact incorrect in certain cases: the Mantin biases are sometimes non-existent, and sometimes stronger than originally predicted. We then show how to use these biases in a plaintext recovery attack. Our attack targets two unknown bytes of plaintext that are located close to sequences of known plaintext bytes, a situation that arises in practice when RC4 is used in, for example, TLS. We provide a statistical framework that enables us to make predictions about the performance of this attack and its variants. We then extend the attack using standard dynamic programming techniques to tackle the problem of recovering longer plaintexts, a setting of practical interest in recovering HTTP session cookies and user passwords that are protected by RC4 in TLS. We perform experiments showing that we can successfully recover 16-byte plaintexts with an 80% success rate using a number of ciphertexts that improves over previous attacks.
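For context, the sketch below gives standard RC4 keystream generation in Python (key scheduling followed by output generation); the biases analysed in the paper live in this keystream, but the statistical attack itself is not reproduced here.

```python
def rc4_keystream(key, length):
    """Standard RC4: key scheduling followed by keystream generation."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(length):                   # pseudo-random generation (PRGA)
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out


print(rc4_keystream(b"Key", 8))               # first keystream bytes for a toy key
```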
Large subsets of Z_m^n without arithmetic progressions
Elsholtz C, Klahn B and Lipnik GF
For integers m and k, we study the problem of finding good lower bounds for the size of progression-free sets in Z_m^n. Let r_k(Z_m^n) denote the maximal size of a subset of Z_m^n without arithmetic progressions of length k. We construct explicit progression-free sets and obtain improved lower bounds for r_k(Z_m^n), with separate bounds for odd and even k, each under a condition on the least prime factor of m. Moreover, we give some further improved lower bounds on r_k(Z_p^n) for small primes p and small progression lengths k.
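The quantity studied can be computed by exhaustive search only for very small parameters; the Python sketch below does this for subsets of Z_m (the one-dimensional case n = 1), as a point of reference for what explicit constructions must achieve at scale.

```python
from itertools import combinations


def has_ap(subset, m, k):
    """True if `subset` of Z_m contains k distinct elements in arithmetic progression."""
    s = set(subset)
    for a in s:
        for d in range(1, m):
            terms = [(a + i * d) % m for i in range(k)]
            if len(set(terms)) == k and all(t in s for t in terms):
                return True
    return False


def max_progression_free(m, k):
    """Largest k-AP-free subset of Z_m, by exhaustive search (tiny m only)."""
    for size in range(m, 0, -1):
        for cand in combinations(range(m), size):
            if not has_ap(cand, m, k):
                return size, cand
    return 0, ()


print(max_progression_free(m=8, k=3))
```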
Bounding basis reduction properties
Neumaier A
The paper describes improved analysis techniques for basis reduction that allow one to prove strong complexity bounds and reduced basis guarantees for traditional reduction algorithms and some of their variants. This is achieved by a careful exploitation of the linear equations and inequalities relating various bit sizes before and after one or more reduction steps.
Information-set decoding for convolutional codes
Gassner N, Lieb J, Mazumder A and Schaller M
In this paper, we present a framework for generic decoding of convolutional codes, which allows us to cryptanalyse code-based systems that use convolutional codes as public keys. We then apply this framework to information set decoding, study success probabilities and give tools to choose the parameters. Finally, we use this to attack two cryptosystems based on convolutional codes. In the case of Bolkema et al. (Variations of the McEliece cryptosystem. In: Algebraic geometry for coding theory and cryptography: IPAM, Los Angeles, CA, Feb 2016. Springer, Cham, pp 129-150, 2017. https://doi.org/10.1007/978-3-319-63931-4_5), our attack recovered about 74% of the errors in less than 10 h each, and in the case of Almeida et al. (Smaller keys for code-based cryptography: McEliece cryptosystems with convolutional encoders. CoRR abs/2104.06809, 2021. arXiv: https://arxiv.org/abs/2104.06809v1), we give experimental evidence that 80% of the errors can be recovered in times corresponding to about 70 bits of operational security, with some instances being significantly lower.
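As a reference point for the block-code ingredient, here is a plain Prange information-set decoder in Python; the paper's framework adapts this style of generic decoding to the sliding structure of convolutional codes, which this toy sketch does not capture.

```python
import numpy as np


def prange_isd(H, s, t, trials=2000, seed=0):
    """Plain Prange ISD for a binary block code: guess that the error is supported on
    a random set of n-k positions where H is invertible, solve for it by Gaussian
    elimination over GF(2), and accept it if its Hamming weight is at most t."""
    rng = np.random.default_rng(seed)
    r, n = H.shape
    for _ in range(trials):
        cols = rng.choice(n, size=r, replace=False)
        M = np.concatenate([H[:, cols] % 2, s.reshape(-1, 1) % 2], axis=1)
        row = 0
        for col in range(r):                  # Gauss-Jordan elimination over GF(2)
            pivot = next((k for k in range(row, r) if M[k, col]), None)
            if pivot is None:                 # singular column choice: try again
                break
            M[[row, pivot]] = M[[pivot, row]]
            for k in range(r):
                if k != row and M[k, col]:
                    M[k] ^= M[row]
            row += 1
        else:
            x = M[:, -1]                      # unique solution of H[:, cols] x = s
            if x.sum() <= t:
                e = np.zeros(n, dtype=int)
                e[cols] = x
                return e
    return None


# Toy usage on a random [20, 10] code with a weight-2 error (block-code setting only).
rng = np.random.default_rng(1)
H = rng.integers(0, 2, size=(10, 20))
e_true = np.zeros(20, dtype=int)
e_true[[2, 9]] = 1
s = (H @ e_true) % 2
print(prange_isd(H, s, t=2))
```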