Learning with errors – Wikipedia

In cryptography, learning with errors (LWE) is a mathematical problem that is widely used to create secure encryption algorithms.[1] It is based on the idea of representing secret information as a set of equations with errors; in other words, LWE is a way to hide the value of a secret by introducing noise to it.[2] In more technical terms, it refers to the computational problem of inferring a linear $n$-ary function $f$ over a finite ring from given samples $y_i = f(\mathbf{x}_i)$, some of which may be erroneous. The LWE problem is conjectured to be hard to solve,[1] and thus to be useful in cryptography.

More precisely, the LWE problem is defined as follows. Let $\mathbb{Z}_q$ denote the ring of integers modulo $q$ and let $\mathbb{Z}_q^n$ denote the set of $n$-vectors over $\mathbb{Z}_q$. There exists a certain unknown linear function $f:\mathbb{Z}_q^n \rightarrow \mathbb{Z}_q$, and the input to the LWE problem is a sample of pairs $(\mathbf{x}, y)$, where $\mathbf{x} \in \mathbb{Z}_q^n$ and $y \in \mathbb{Z}_q$, so that with high probability $y = f(\mathbf{x})$. Furthermore, the deviation from the equality is according to some known noise model. The problem calls for finding the function $f$, or some close approximation thereof, with high probability.

The LWE problem was introduced by Oded Regev in 2005[3] (who won the 2018 Gödel Prize for this work); it is a generalization of the parity learning problem. Regev showed that the LWE problem is as hard to solve as several worst-case lattice problems. Subsequently, the LWE problem has been used as a hardness assumption to create public-key cryptosystems,[3][4] such as the ring learning with errors key exchange by Peikert.[5]

Definition

Denote by $\mathbb{T} = \mathbb{R}/\mathbb{Z}$ the additive group of reals modulo one. Let $\mathbf{s} \in \mathbb{Z}_q^n$ be a fixed vector. Let $\phi$ be a fixed probability distribution over $\mathbb{T}$. Denote by $A_{\mathbf{s},\phi}$ the distribution on $\mathbb{Z}_q^n \times \mathbb{T}$ obtained as follows:

  1. Pick a vector $\mathbf{a} \in \mathbb{Z}_q^n$ uniformly at random.
  2. Pick a number $e \in \mathbb{T}$ according to the distribution $\phi$.
  3. Evaluate $t = \langle \mathbf{a}, \mathbf{s} \rangle / q + e$, where $\langle \mathbf{a}, \mathbf{s} \rangle = \sum_j a_j s_j$ is the inner product in $\mathbb{Z}_q$, the division by $q$ maps into $\mathbb{T}$, and the addition is in $\mathbb{T}$.
  4. Output the pair $(\mathbf{a}, t)$.

The learning with errors problem $\mathrm{LWE}_{q,\phi}$ is to find $\mathbf{s} \in \mathbb{Z}_q^n$, given access to polynomially many samples of choice from $A_{\mathbf{s},\phi}$.

For every $\alpha > 0$, denote by $D_\alpha$ the one-dimensional Gaussian with zero mean and variance $\alpha^2/(2\pi)$, that is, the density function is $D_\alpha(x) = \rho_\alpha(x)/\alpha$ where $\rho_\alpha(x) = e^{-\pi(|x|/\alpha)^2}$, and let $\Psi_\alpha$ be the distribution on $\mathbb{T}$ obtained by considering $D_\alpha$ modulo one. The version of LWE considered in most of the results is $\mathrm{LWE}_{q,\Psi_\alpha}$.
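To make the sampling procedure concrete, the following is a minimal Python sketch of drawing samples from a discretized version of $A_{\mathbf{s},\Psi_\alpha}$. It is an illustration, not a reference implementation: everything is scaled up by $q$ (so samples live in $\mathbb{Z}_q^n \times \mathbb{Z}_q$ rather than $\mathbb{Z}_q^n \times \mathbb{T}$), the error is a rounded Gaussian of standard deviation $\alpha q/\sqrt{2\pi}$, and the parameter sizes are toy values far below anything secure.

```python
import math
import random

def lwe_sample(s, q, alpha, rng):
    """Draw one sample from a discretized, integer-scaled A_{s, Psi_alpha}.

    a is uniform over Z_q^n; b = <a, s> + e (mod q), where e is a rounded
    Gaussian of standard deviation alpha*q/sqrt(2*pi) (the scaled-up
    analogue of variance alpha^2/(2*pi) on the torus).
    """
    n = len(s)
    a = [rng.randrange(q) for _ in range(n)]
    e = round(rng.gauss(0.0, alpha * q / math.sqrt(2 * math.pi)))
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

# Toy parameters: n = 4, q = 97, small noise rate alpha.
rng = random.Random(0)
q, alpha = 97, 0.02
s = [rng.randrange(q) for _ in range(4)]          # the secret
samples = [lwe_sample(s, q, alpha, rng) for _ in range(8)]
```

Each pair `(a, b)` satisfies $b \approx \langle \mathbf{a}, \mathbf{s} \rangle \pmod q$ up to the small error, which is exactly what makes recovering $\mathbf{s}$ nontrivial once many samples are combined.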

Decision version

The LWE problem described above is the search version of the problem. In the decision version (DLWE), the goal is to distinguish between noisy inner products and uniformly random samples from $\mathbb{Z}_q^n \times \mathbb{T}$ (practically, some discretized version of it). Regev[3] showed that the decision and search versions are equivalent when $q$ is a prime bounded by some polynomial in $n$.

Solving decision assuming search

Intuitively, if we have a procedure for the search problem, the decision version can be solved easily: just feed the input samples for the decision problem to the solver for the search problem. Denote the given samples by $\{(\mathbf{a}_i, b_i)\} \subset \mathbb{Z}_q^n \times \mathbb{T}$. If the solver returns a candidate $\mathbf{s}$, then for all $i$, calculate $\langle \mathbf{a}_i, \mathbf{s} \rangle - b_i$. If the samples are from an LWE distribution, then the results of this calculation will be distributed according to $\chi$; but if the samples are uniformly random, these quantities will be distributed uniformly as well.
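This check can be sketched in a few lines of Python (a toy, integer-scaled version: samples live in $\mathbb{Z}_q^n \times \mathbb{Z}_q$, and the threshold $q/8$ and all parameter values are illustrative assumptions, not part of the reduction itself):

```python
import math
import random

def looks_like_lwe(samples, s_candidate, q):
    """Return True if the residuals <a_i, s> - b_i (mod q) cluster near 0.

    For genuine LWE samples the residuals follow the small error
    distribution; for uniform samples they are uniform on Z_q, so their
    average cyclic distance to 0 is about q/4.
    """
    dists = []
    for a, b in samples:
        r = (sum(ai * si for ai, si in zip(a, s_candidate)) - b) % q
        dists.append(min(r, q - r))       # cyclic distance to 0
    return sum(dists) / len(dists) < q / 8

# Demo: build LWE samples and uniform samples for the same secret s.
rng = random.Random(1)
q, n, alpha, m = 97, 4, 0.02, 50
s = [rng.randrange(q) for _ in range(n)]

def sample(real):
    a = [rng.randrange(q) for _ in range(n)]
    if real:
        e = round(rng.gauss(0.0, alpha * q / math.sqrt(2 * math.pi)))
        return a, (sum(x * y for x, y in zip(a, s)) + e) % q
    return a, rng.randrange(q)            # uniform b

lwe_samples = [sample(True) for _ in range(m)]
uniform_samples = [sample(False) for _ in range(m)]
```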

Solving search assuming decision

For the other direction, given a solver for the decision problem, the search version can be solved as follows: recover $\mathbf{s}$ one coordinate at a time. To obtain the first coordinate $s_1$, make a guess $k \in \mathbb{Z}_q$, and do the following. Choose a number $r \in \mathbb{Z}_q$ uniformly at random. Transform the given samples $\{(\mathbf{a}_i, b_i)\} \subset \mathbb{Z}_q^n \times \mathbb{T}$ as follows: calculate $\{(\mathbf{a}_i + (r, 0, \ldots, 0),\; b_i + (rk)/q)\}$. Send the transformed samples to the decision solver.

If the guess $k$ was correct, the transformation takes the distribution $A_{\mathbf{s},\chi}$ to itself; otherwise, since $q$ is prime, it takes it to the uniform distribution. So, given a polynomial-time solver for the decision problem that errs with very small probability, since $q$ is bounded by some polynomial in $n$, it only takes polynomial time to guess every possible value for $k$ and use the solver to see which one is correct.

After obtaining $s_1$, we follow an analogous procedure for each other coordinate $s_j$. Namely, we transform our $b_i$ samples the same way, and transform our $\mathbf{a}_i$ samples by calculating $\mathbf{a}_i + (0, \ldots, r, \ldots, 0)$, where the $r$ is in the $j$th coordinate.[3]
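The coordinate-by-coordinate recovery can be sketched as follows, in a toy, integer-scaled Python illustration. Two caveats: the shift $r$ is drawn fresh per sample (so that a wrong guess really does produce uniform samples), and since a genuine DLWE distinguisher is outside the scope of this sketch, the oracle below is a stand-in statistical test that secretly knows $\mathbf{s}$, purely so the loop can be demonstrated end to end.

```python
import math
import random

rng = random.Random(2)
q, n, alpha, m = 97, 4, 0.02, 60      # toy parameters; q is prime
s = [rng.randrange(q) for _ in range(n)]

def lwe_samples(count):
    out = []
    for _ in range(count):
        a = [rng.randrange(q) for _ in range(n)]
        e = round(rng.gauss(0.0, alpha * q / math.sqrt(2 * math.pi)))
        out.append((a, (sum(x * y for x, y in zip(a, s)) + e) % q))
    return out

def decision_oracle(samples):
    # Stand-in for an arbitrary DLWE distinguisher: for this demo it
    # secretly knows s and tests whether residuals concentrate near 0.
    dists = []
    for a, b in samples:
        r = (sum(x * y for x, y in zip(a, s)) - b) % q
        dists.append(min(r, q - r))
    return sum(dists) / len(dists) < q / 8

def recover_coordinate(samples, j):
    """Guess k = s_j; shift coordinate j by a fresh random r per sample.

    If k is correct, the shifted samples are again LWE samples; if not,
    they become uniform (q prime), and the oracle rejects the guess.
    """
    for k in range(q):
        shifted = []
        for a, b in samples:
            r = rng.randrange(q)
            a2 = list(a)
            a2[j] = (a2[j] + r) % q
            shifted.append((a2, (b + r * k) % q))
        if decision_oracle(shifted):
            return k
    raise ValueError("no guess accepted")

samples = lwe_samples(m)
recovered = [recover_coordinate(samples, j) for j in range(n)]
```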

Peikert[4] showed that this reduction, with a small modification, works for any $q$ that is a product of distinct, small (polynomial in $n$) primes. The main idea is that if $q = q_1 q_2 \cdots q_t$, then for each $q_\ell$, guess and check whether $s_j$ is congruent to $0 \bmod q_\ell$, and then use the Chinese remainder theorem to recover $s_j$.

Average case hardness

Regev[3] showed the random self-reducibility of the LWE and DLWE problems for arbitrary $q$ and $\chi$. Given samples $\{(\mathbf{a}_i, b_i)\}$ from $A_{\mathbf{s},\chi}$, it is easy to see that $\{(\mathbf{a}_i,\; b_i + \langle \mathbf{a}_i, \mathbf{t} \rangle / q)\}$ are samples from $A_{\mathbf{s}+\mathbf{t},\chi}$.
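The re-randomization step is mechanical; here is a short integer-scaled Python sketch of the transformation (an illustration with toy, assumed parameters; on the torus the added term would be $\langle \mathbf{a}_i, \mathbf{t} \rangle / q$, which the scaling by $q$ turns into $\langle \mathbf{a}_i, \mathbf{t} \rangle \bmod q$):

```python
import math
import random

rng = random.Random(3)
q, n, alpha, m = 97, 4, 0.02, 40      # toy parameters
s = [rng.randrange(q) for _ in range(n)]
t = [rng.randrange(q) for _ in range(n)]      # the shift vector

def dot(u, v):
    return sum(x * y for x, y in zip(u, v)) % q

# Samples from A_{s, chi} (integer-scaled, rounded-Gaussian error).
samples = []
for _ in range(m):
    a = [rng.randrange(q) for _ in range(n)]
    e = round(rng.gauss(0.0, alpha * q / math.sqrt(2 * math.pi)))
    samples.append((a, (dot(a, s) + e) % q))

# (a_i, b_i) -> (a_i, b_i + <a_i, t>): the secret becomes s + t,
# while the error distribution is untouched.
shifted = [(a, (b + dot(a, t)) % q) for a, b in samples]
s_plus_t = [(x + y) % q for x, y in zip(s, t)]
```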

So, suppose there was some set $\mathcal{S} \subset \mathbb{Z}_q^n$ such that $|\mathcal{S}|/|\mathbb{Z}_q^n| = 1/\operatorname{poly}(n)$, and for distributions $A_{\mathbf{s}',\chi}$, with $\mathbf{s}' \leftarrow \mathcal{S}$, DLWE was easy.

Then there would be some distinguisher $\mathcal{A}$ who, given samples $\{(\mathbf{a}_i, b_i)\}$, could tell whether they were uniformly random or from $A_{\mathbf{s}',\chi}$. If we need to distinguish uniformly random samples from $A_{\mathbf{s},\chi}$, where $\mathbf{s}$ is chosen uniformly at random from $\mathbb{Z}_q^n$, we could simply try different values $\mathbf{t}$ sampled uniformly at random from $\mathbb{Z}_q^n$, calculate $\{(\mathbf{a}_i,\; b_i + \langle \mathbf{a}_i, \mathbf{t} \rangle / q)\}$, and feed these samples to $\mathcal{A}$. Since $\mathcal{S}$ comprises a $1/\operatorname{poly}(n)$ fraction of $\mathbb{Z}_q^n$, with high probability, if we choose a polynomial number of values for $\mathbf{t}$, we will find one such that $\mathbf{s} + \mathbf{t} \in \mathcal{S}$, and $\mathcal{A}$ will successfully distinguish the samples.

Thus, no such $\mathcal{S}$ can exist, meaning LWE and DLWE are (up to a polynomial factor) as hard in the average case as they are in the worst case.

Hardness results

Regev’s result

For an $n$-dimensional lattice $L$, let the smoothing parameter $\eta_\varepsilon(L)$ denote the smallest $s$ such that $\rho_{1/s}(L^* \setminus \{\mathbf{0}\}) \leq \varepsilon$, where $L^*$ is the dual of $L$ and $\rho_\alpha(x) = e^{-\pi(|x|/\alpha)^2}$ is extended to sets by summing over function values at each element in the set. Let $D_{L,r}$ denote the discrete Gaussian distribution on $L$ of width $r$ for a lattice $L$ and real $r > 0$, where the probability of each $x \in L$ is proportional to $\rho_r(x)$.

The discrete Gaussian sampling problem (DGS) is defined as follows: an instance of $\mathrm{DGS}_\phi$ is given by an $n$-dimensional lattice $L$ and a number $r \geq \phi(L)$. The goal is to output a sample from $D_{L,r}$. Regev shows that there is a reduction from $\operatorname{GapSVP}_{100\sqrt{n}\gamma(n)}$ to $\mathrm{DGS}_{\sqrt{n}\gamma(n)/\lambda(L^*)}$ for any function $\gamma(n) \geq 1$.

Regev then shows that there exists an efficient quantum algorithm for $\mathrm{DGS}_{\sqrt{2n}\,\eta_\varepsilon(L)/\alpha}$ given access to an oracle for $\mathrm{LWE}_{q,\Psi_\alpha}$ for integer $q$ and $\alpha \in (0,1)$ such that $\alpha q > 2\sqrt{n}$. Although this holds for any $q$, for creating a cryptosystem the modulus $q$ has to be polynomial in $n$.

Peikert’s result

Peikert proves[4] that there is a probabilistic polynomial time reduction from the $\operatorname{GapSVP}_{\zeta,\gamma}$ problem in the worst case to solving $\mathrm{LWE}_{q,\Psi_\alpha}$ using $\operatorname{poly}(n)$ samples, for parameters $\alpha \in (0,1)$, $\gamma(n) \geq n/(\alpha\sqrt{\log n})$, $\zeta(n) \geq \gamma(n)$ and $q \geq (\zeta/\sqrt{n})\,\omega(\sqrt{\log n})$.

Use in cryptography

The LWE problem serves as a versatile problem used in the construction of several[3][4][6][7] cryptosystems. In 2005, Regev[3] showed that the decision version of LWE is hard assuming quantum hardness of the lattice problems $\mathrm{GapSVP}_\gamma$ (for $\gamma$ as above) and $\mathrm{SIVP}_t$ with $t = O(n/\alpha)$. In 2009, Peikert[4] proved a similar result assuming only the classical hardness of the related problem $\mathrm{GapSVP}_{\zeta,\gamma}$. The disadvantage of Peikert’s result is that it bases itself on a non-standard version of an easier (when compared to SIVP) problem, GapSVP.

Public-key cryptosystem

Regev[3] proposed a public-key cryptosystem based on the hardness of the LWE problem. The cryptosystem as well as the proofs of security and correctness are completely classical. The system is characterized by $m, q$ and a probability distribution $\chi$ on $\mathbb{T}$. The setting of the parameters used in proofs of correctness and security is:

  • $q$, a prime number between $n^2$ and $2n^2$
  • $m = (1+\varepsilon)(n+1)\log q$ for some arbitrary constant $\varepsilon > 0$
  • $\chi = \Psi_{\alpha(n)}$ for $\alpha(n) \in o(1/(\sqrt{n}\log n))$

The cryptosystem is then defined by:

  • Private key: The private key is a vector $\mathbf{s} \in \mathbb{Z}_q^n$ chosen uniformly at random.
  • Public key: Choose $m$ samples $\{(\mathbf{a}_i, b_i)\}_{i=1}^m$ from $A_{\mathbf{s},\chi}$; these samples constitute the public key.
  • Encryption: The encryption of a bit $x \in \{0,1\}$ is obtained by choosing a random subset $S$ of $[m]$ and outputting the pair $\left(\sum_{i \in S} \mathbf{a}_i,\; x/2 + \sum_{i \in S} b_i\right)$.

The proof of correctness follows from the choice of parameters and some probability analysis. The proof of security is by reduction to the decision version of LWE: an algorithm for distinguishing between encryptions (with the above parameters) of $0$ and $1$ can be used to distinguish between $A_{\mathbf{s},\chi}$ and the uniform distribution over $\mathbb{Z}_q^n \times \mathbb{T}$.
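A minimal sketch of such a cryptosystem in Python, under the same integer-scaling assumption used in the earlier examples (the torus value $x/2$ becomes $x \cdot \lfloor q/2 \rfloor$, and the parameters are toy values far below the secure settings quoted above):

```python
import math
import random

rng = random.Random(4)
q, n, m, alpha = 257, 8, 50, 0.01     # toy parameters, not secure

def dot(u, v):
    return sum(x * y for x, y in zip(u, v)) % q

# Key generation: uniform secret s; public key = m LWE samples.
s = [rng.randrange(q) for _ in range(n)]
pk = []
for _ in range(m):
    a = [rng.randrange(q) for _ in range(n)]
    e = round(rng.gauss(0.0, alpha * q / math.sqrt(2 * math.pi)))
    pk.append((a, (dot(a, s) + e) % q))

def encrypt(bit):
    """Sum a random subset of public samples; encode bit as bit*floor(q/2)."""
    subset = [i for i in range(m) if rng.random() < 0.5]
    a = [sum(pk[i][0][j] for i in subset) % q for j in range(n)]
    b = (sum(pk[i][1] for i in subset) + bit * (q // 2)) % q
    return a, b

def decrypt(ct):
    a, b = ct
    d = (b - dot(a, s)) % q
    # The accumulated error is small, so d sits near 0 for bit 0
    # and near q/2 for bit 1.
    return 0 if min(d, q - d) < abs(d - q // 2) else 1
```

Correctness holds because the subset sum of errors stays well below $q/4$ with these parameters; security (for real parameter sizes) rests on exactly the DLWE assumption described above.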

CCA-secure cryptosystem

Peikert[4] proposed a system that is secure even against chosen-ciphertext attacks.

Key exchange

The idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The idea comes from the associativity of matrix multiplication, and the errors are used to provide the security. The paper[8] appeared in 2012, after a provisional patent application was filed in 2012.

The security of the protocol is proven based on the hardness of solving the LWE problem. In 2014, Peikert presented a key-transport scheme[9] following the same basic idea as Ding’s, where Ding’s new idea of sending an additional 1-bit signal for rounding is also used. The “New Hope” implementation,[10] selected for Google’s post-quantum experiment,[11] uses Peikert’s scheme with a variation in the error distribution.

References

  1. ^ a b Regev, Oded (2009). “On lattices, learning with errors, random linear codes, and cryptography”. Journal of the ACM. 56 (6): 1–40. doi:10.1145/1568318.1568324. S2CID 207156623.
  2. ^ Lyubashevsky, Vadim; Peikert, Chris; Regev, Oded (November 2013). “On Ideal Lattices and Learning with Errors over Rings”. Journal of the ACM. 60 (6): 1–35. doi:10.1145/2535925. ISSN 0004-5411.
  3. ^ a b c d e f g h Oded Regev, “On lattices, learning with errors, random linear codes, and cryptography,” in Proceedings of the thirty-seventh annual ACM symposium on Theory of computing (Baltimore, MD, USA: ACM, 2005), 84–93, http://portal.acm.org/citation.cfm?id=1060590.1060603.
  4. ^ a b c d e f Chris Peikert, “Public-key cryptosystems from the worst-case shortest vector problem: extended abstract,” in Proceedings of the 41st annual ACM symposium on Theory of computing (Bethesda, MD, USA: ACM, 2009), 333–342, http://portal.acm.org/citation.cfm?id=1536414.1536461.
  5. ^ Peikert, Chris (2014-10-01). “Lattice Cryptography for the Internet”. In Mosca, Michele (ed.). Post-Quantum Cryptography. Lecture Notes in Computer Science. Vol. 8772. Springer International Publishing. pp. 197–219. CiteSeerX 10.1.1.800.4743. doi:10.1007/978-3-319-11659-4_12. ISBN 978-3-319-11658-7. S2CID 8123895.
  6. ^ Chris Peikert and Brent Waters, “Lossy trapdoor functions and their applications,” in Proceedings of the 40th annual ACM symposium on Theory of computing (Victoria, British Columbia, Canada: ACM, 2008), 187-196, http://portal.acm.org/citation.cfm?id=1374406.
  7. ^ Craig Gentry, Chris Peikert, and Vinod Vaikuntanathan, “Trapdoors for hard lattices and new cryptographic constructions,” in Proceedings of the 40th annual ACM symposium on Theory of computing (Victoria, British Columbia, Canada: ACM, 2008), 197-206, http://portal.acm.org/citation.cfm?id=1374407.
  8. ^ Ding, Jintai; Xie, Xiang; Lin, Xiaodong (2012-01-01). “A Simple Provably Secure Key Exchange Scheme Based on the Learning with Errors Problem”. Cryptology ePrint Archive.
  9. ^ Peikert, Chris (2014-01-01). “Lattice Cryptography for the Internet”. Cryptology ePrint Archive.
  10. ^ Alkim, Erdem; Ducas, Léo; Pöppelmann, Thomas; Schwabe, Peter (2015-01-01). “Post-quantum key exchange – a new hope”. Cryptology ePrint Archive.
  11. ^ “Experimenting with Post-Quantum Cryptography”. Google Online Security Blog. Retrieved 2017-02-08.