[{"@context":"http:\/\/schema.org\/","@type":"BlogPosting","@id":"https:\/\/wiki.edu.vn\/en\/wiki24\/2016\/06\/28\/t-distributed-stochastic-neighbor-embedding-wikipedia\/#BlogPosting","mainEntityOfPage":"https:\/\/wiki.edu.vn\/en\/wiki24\/2016\/06\/28\/t-distributed-stochastic-neighbor-embedding-wikipedia\/","headline":"t-distributed stochastic neighbor embedding – Wikipedia","name":"t-distributed stochastic neighbor embedding – Wikipedia","description":"Technique for dimensionality reduction T-SNE visualisation of word embeddings generated using 19th century literature T-SNE embeddings of MNIST dataset t-distributed","datePublished":"2016-06-28","dateModified":"2016-06-28","author":{"@type":"Person","@id":"https:\/\/wiki.edu.vn\/en\/wiki24\/author\/lordneo\/#Person","name":"lordneo","url":"https:\/\/wiki.edu.vn\/en\/wiki24\/author\/lordneo\/","image":{"@type":"ImageObject","@id":"https:\/\/secure.gravatar.com\/avatar\/c9645c498c9701c88b89b8537773dd7c?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/c9645c498c9701c88b89b8537773dd7c?s=96&d=mm&r=g","height":96,"width":96}},"publisher":{"@type":"Organization","name":"Enzyklop\u00e4die","logo":{"@type":"ImageObject","@id":"https:\/\/wiki.edu.vn\/wiki4\/wp-content\/uploads\/2023\/08\/download.jpg","url":"https:\/\/wiki.edu.vn\/wiki4\/wp-content\/uploads\/2023\/08\/download.jpg","width":600,"height":60}},"image":{"@type":"ImageObject","@id":"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/thumb\/9\/94\/T-SNE_visualisation_of_word_embeddings_generated_using_19th_century_literature.png\/220px-T-SNE_visualisation_of_word_embeddings_generated_using_19th_century_literature.png","url":"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/thumb\/9\/94\/T-SNE_visualisation_of_word_embeddings_generated_using_19th_century_literature.png\/220px-T-SNE_visualisation_of_word_embeddings_generated_using_19th_century_literature.png","height":"149","width":"220"},"url":"https:\/\/wiki.edu.vn\/en\/wiki24\/2016\/06\/28\/t-distributed-stochastic-neighbor-embedding-wikipedia\/","about":["Wiki"],"wordCount":9723,"articleBody":"Technique for dimensionality reduction T-SNE visualisation of word embeddings generated using 19th century literature T-SNE embeddings of MNIST dataset t-distributed stochastic neighbor embedding (t-SNE) is a statistical method for visualizing high-dimensional data by giving each datapoint a location in a two or three-dimensional map. It is based on Stochastic Neighbor Embedding originally developed by Sam Roweis and Geoffrey Hinton,[1] where Laurens van der Maaten proposed the t-distributed variant.[2] It is a nonlinear dimensionality reduction technique for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability. The t-SNE algorithm comprises two main stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects in such a way that similar objects are assigned a higher probability while dissimilar points are assigned a lower probability. Second, t-SNE defines a similar probability distribution over the points in the low-dimensional map, and it minimizes the Kullback\u2013Leibler divergence (KL divergence) between the two distributions with respect to the locations of the points in the map. 
t-SNE has been used for visualization in a wide range of applications, including genomics, computer security research,[3] natural language processing, music analysis,[4] cancer research,[5] bioinformatics,[6] geological domain interpretation,[7][8][9] and biomedical signal processing.[10]

While t-SNE plots often seem to display clusters, the visual clusters can be strongly influenced by the chosen parameterization, so a good understanding of the parameters of t-SNE is necessary. Such "clusters" can be shown to appear even in non-clustered data,[11] and thus may be false findings. Interactive exploration may therefore be necessary to choose parameters and validate results.[12][13] It has been demonstrated that t-SNE is often able to recover well-separated clusters, and with special parameter choices it approximates a simple form of spectral clustering.[14]

Details

Given a set of $N$ high-dimensional objects $\mathbf{x}_1, \dots, \mathbf{x}_N$, t-SNE first computes probabilities $p_{ij}$ that are proportional to the similarity of objects $\mathbf{x}_i$ and $\mathbf{x}_j$, as follows.

For $i \neq j$, define

$$p_{j \mid i} = \frac{\exp(-\lVert \mathbf{x}_i - \mathbf{x}_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert \mathbf{x}_i - \mathbf{x}_k \rVert^2 / 2\sigma_i^2)}$$

and set $p_{i \mid i} = 0$. Note that $\sum_j p_{j \mid i} = 1$ for all $i$.

As van der Maaten and Hinton explained: "The similarity of datapoint $x_j$ to datapoint $x_i$ is the conditional probability, $p_{j|i}$, that $x_i$ would pick $x_j$ as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at $x_i$."[2]

Now define

$$p_{ij} = \frac{p_{j \mid i} + p_{i \mid j}}{2N}$$

This is motivated as follows: the marginal probabilities $p_i$ and $p_j$ over the $N$ samples are estimated as $1/N$, so the conditional probabilities can be written as $p_{i \mid j} = N p_{ij}$ and $p_{j \mid i} = N p_{ji}$; since $p_{ij} = p_{ji}$, adding the two and solving for $p_{ij}$ yields the formula above. Also note that $p_{ii} = 0$ and $\sum_{i,j} p_{ij} = 1$.

The bandwidth of the Gaussian kernels $\sigma_i$ is set, using the bisection method, in such a way that the entropy of the conditional distribution equals a predefined entropy (the exponential of this entropy is the user-specified perplexity). As a result, the bandwidth is adapted to the density of the data: smaller values of $\sigma_i$ are used in denser parts of the data space.

Since the Gaussian kernel uses the Euclidean distance $\lVert \mathbf{x}_i - \mathbf{x}_j \rVert$, it is affected by the curse of dimensionality: in high-dimensional data, when distances lose the ability to discriminate, the $p_{ij}$ become too similar (asymptotically, they would converge to a constant). It has been proposed to adjust the distances with a power transform, based on the intrinsic dimension of each point, to alleviate this.[15]
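A direct NumPy sketch of this first stage follows; it calibrates each $\beta_i = 1/2\sigma_i^2$ by bisection so that the entropy of $p_{\cdot \mid i}$ matches the target perplexity, then symmetrizes. The function name and the tolerance and iteration constants are illustrative choices, and the quadratic time and memory cost is why practical implementations prefer approximations such as Barnes–Hut (see Software):

```python
import numpy as np

def high_dimensional_affinities(X, perplexity=30.0, tol=1e-5, max_iter=50):
    """Compute the symmetrized t-SNE affinities p_ij (illustrative O(N^2) sketch)."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances ||x_i - x_j||^2.
    sq = np.sum(X ** 2, axis=1)
    D = sq[:, None] - 2.0 * X @ X.T + sq[None, :]
    P_cond = np.zeros((n, n))
    target_entropy = np.log(perplexity)        # entropy in nats matching perplexity
    for i in range(n):
        d = np.delete(D[i], i)                 # distances to all j != i
        beta, beta_min, beta_max = 1.0, 0.0, np.inf   # beta = 1 / (2 sigma_i^2)
        for _ in range(max_iter):
            p = np.exp(-d * beta)
            p /= p.sum()
            entropy = -np.sum(p * np.log(np.maximum(p, 1e-12)))
            if abs(entropy - target_entropy) < tol:
                break
            if entropy > target_entropy:       # too flat: shrink sigma (raise beta)
                beta_min = beta
                beta = 2.0 * beta if np.isinf(beta_max) else (beta + beta_max) / 2.0
            else:                              # too peaked: grow sigma (lower beta)
                beta_max = beta
                beta = (beta + beta_min) / 2.0
        P_cond[i, np.arange(n) != i] = p       # p_{i|i} remains 0
    # Symmetrize: p_ij = (p_{j|i} + p_{i|j}) / (2N), so sum_{ij} p_ij = 1.
    return (P_cond + P_cond.T) / (2.0 * n)
```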
t-SNE aims to learn a $d$-dimensional map $\mathbf{y}_1, \dots, \mathbf{y}_N$ (with $\mathbf{y}_i \in \mathbb{R}^d$ and $d$ typically chosen as 2 or 3) that reflects the similarities $p_{ij}$ as well as possible. To this end, it measures similarities $q_{ij}$ between two points in the map, $\mathbf{y}_i$ and $\mathbf{y}_j$, using a very similar approach. Specifically, for $i \neq j$, define

$$q_{ij} = \frac{(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2)^{-1}}{\sum_k \sum_{l \neq k} (1 + \lVert \mathbf{y}_k - \mathbf{y}_l \rVert^2)^{-1}}$$

and set $q_{ii} = 0$. Herein a heavy-tailed Student t-distribution (with one degree of freedom, which is the same as a Cauchy distribution) is used to measure similarities between low-dimensional points, in order to allow dissimilar objects to be modeled far apart in the map.

The locations of the points $\mathbf{y}_i$ in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution $P$ from the distribution $Q$, that is:

$$\mathrm{KL}(P \parallel Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}$$

The minimization of the Kullback–Leibler divergence with respect to the points $\mathbf{y}_i$ is performed using gradient descent. The result of this optimization is a map that reflects the similarities between the high-dimensional inputs.
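For this objective the gradient has the closed form $\partial\,\mathrm{KL}/\partial \mathbf{y}_i = 4 \sum_j (p_{ij} - q_{ij})(1 + \lVert \mathbf{y}_i - \mathbf{y}_j \rVert^2)^{-1} (\mathbf{y}_i - \mathbf{y}_j)$,[2] which a plain gradient-descent loop can use directly. A minimal sketch continuing the NumPy code above; the step size, iteration count, and initialization scale are illustrative, and production implementations add refinements such as momentum and "early exaggeration" described by van der Maaten and Hinton:[2]

```python
def tsne_map(P, d=2, n_iter=500, learning_rate=100.0, seed=0):
    """Minimize KL(P || Q) over map positions Y by gradient descent (sketch)."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    Y = rng.normal(scale=1e-2, size=(n, d))          # small random initialization
    for _ in range(n_iter):
        # Student-t (Cauchy) kernel (1 + ||y_i - y_j||^2)^(-1), zero diagonal.
        sq = np.sum(Y ** 2, axis=1)
        num = 1.0 / (1.0 + sq[:, None] - 2.0 * Y @ Y.T + sq[None, :])
        np.fill_diagonal(num, 0.0)
        Q = num / num.sum()                          # q_ij, summing to 1
        # Gradient: dKL/dy_i = 4 sum_j (p_ij - q_ij) num_ij (y_i - y_j).
        W = (P - Q) * num
        grad = 4.0 * (np.diag(W.sum(axis=1)) - W) @ Y
        Y -= learning_rate * grad
    return Y
```

For instance, `Y = tsne_map(high_dimensional_affinities(X))` embeds the rows of `X` into the plane; the optimized libraries listed below should be preferred for real use.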
Software

- The R package Rtsne implements t-SNE in R.
- ELKI contains tSNE, also with Barnes–Hut approximation.
- scikit-learn, a popular machine learning library in Python, implements t-SNE with both exact solutions and the Barnes–Hut approximation.
- Tensorboard, the visualization kit associated with TensorFlow, also implements t-SNE (online version).

References

1. Roweis, Sam; Hinton, Geoffrey (January 2002). "Stochastic neighbor embedding" (PDF). Neural Information Processing Systems.
2. van der Maaten, L.J.P.; Hinton, G.E. (November 2008). "Visualizing Data Using t-SNE" (PDF). Journal of Machine Learning Research. 9: 2579–2605.
3. Gashi, I.; Stankovic, V.; Leita, C.; Thonnard, O. (2009). "An Experimental Study of Diversity with Off-the-shelf AntiVirus Engines". Proceedings of the IEEE International Symposium on Network Computing and Applications: 4–11.
4. Hamel, P.; Eck, D. (2010). "Learning Features from Music Audio with Deep Belief Networks". Proceedings of the International Society for Music Information Retrieval Conference: 339–344.
5. Jamieson, A.R.; Giger, M.L.; Drukker, K.; Lui, H.; Yuan, Y.; Bhooshan, N. (2010). "Exploring Nonlinear Feature Space Dimension Reduction and Data Representation in Breast CADx with Laplacian Eigenmaps and t-SNE". Medical Physics. 37 (1): 339–351. doi:10.1118/1.3267037. PMC 2807447. PMID 20175497.
6. Wallach, I.; Liliean, R. (2009). "The Protein-Small-Molecule Database, A Non-Redundant Structural Resource for the Analysis of Protein-Ligand Binding". Bioinformatics. 25 (5): 615–620. doi:10.1093/bioinformatics/btp035. PMID 19153135.
7. Balamurali, Mehala; Silversides, Katherine L.; Melkumyan, Arman (2019). "A comparison of t-SNE, SOM and SPADE for identifying material type domains in geological data". Computers & Geosciences. 125: 78–89. doi:10.1016/j.cageo.2019.01.011.
8. Balamurali, Mehala; Melkumyan, Arman (2016). "t-SNE Based Visualisation and Clustering of Geological Domain". In Hirose, Akira; et al. (eds.). Neural Information Processing. Lecture Notes in Computer Science. 9950. Cham: Springer International Publishing: 565–572. doi:10.1007/978-3-319-46681-1_67. ISBN 978-3-319-46681-1.
9. Leung, Raymond; Balamurali, Mehala; Melkumyan, Arman (2021). "Sample Truncation Strategies for Outlier Removal in Geochemical Data: The MCD Robust Distance Approach Versus t-SNE Ensemble Clustering". Mathematical Geosciences. 53 (1): 105–130. doi:10.1007/s11004-019-09839-z.
10. Birjandtalab, J.; Pouyan, M. B.; Nourani, M. (2016). "Nonlinear dimension reduction for EEG-based epileptic seizure detection". 2016 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI): 595–598. doi:10.1109/BHI.2016.7455968. ISBN 978-1-5090-2455-1.
11. "K-means clustering on the output of t-SNE". Cross Validated. Retrieved 2018-04-16.
12. Pezzotti, Nicola; Lelieveldt, Boudewijn P. F.; van der Maaten, Laurens; Höllt, Thomas; Eisemann, Elmar; Vilanova, Anna (2017). "Approximated and User Steerable tSNE for Progressive Visual Analytics". IEEE Transactions on Visualization and Computer Graphics. 23 (7): 1739–1752. arXiv:1512.01655. doi:10.1109/tvcg.2016.2570755. PMID 28113434.
13. Wattenberg, Martin; Viégas, Fernanda; Johnson, Ian (2016-10-13). "How to Use t-SNE Effectively". Distill. Retrieved 4 December 2017.
14. Linderman, George C.; Steinerberger, Stefan (2017). "Clustering with t-SNE, provably". arXiv:1706.02582 [cs.LG].
15. Schubert, Erich; Gertz, Michael (2017). "Intrinsic t-Stochastic Neighbor Embedding for Visualization and Outlier Detection". SISAP 2017 – 10th International Conference on Similarity Search and Applications: 188–203. doi:10.1007/978-3-319-68474-1_13.