TECHNOLOGY DICTIONARY

Datapedia

Your essential glossary of Big Data
and Artificial Intelligence Terms.

A

The fraction of predictions that a classification model got right. In multi-class classification, accuracy is defined as follows: Accuracy = Correct Predictions / Total Number of Examples. In binary classification, accuracy has the following definition: Accuracy = (True Positives + True Negatives) / Total Number of Examples.
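
As an illustration, here is a minimal Python sketch of the formula above; the label lists are hypothetical.

y_true = [1, 0, 1, 1, 0, 1]   # actual labels
y_pred = [1, 0, 0, 1, 0, 1]   # model predictions
correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
accuracy = correct / len(y_true)
print(accuracy)  # 5 correct predictions out of 6 examples -> 0.833...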

A series of repeatable steps for carrying out a certain type of task with data. As with data structures, people studying computer science learn about different algorithms and their suitability for various tasks.

In AI’s early days in the 1960s, researchers sought general principles of intelligence to implement, often using symbolic logic to automate reasoning. As the cost of computing resources dropped, the focus moved more toward statistical analysis of large amounts of data to drive decision making that gives the appearance of intelligence.

An evaluation metric that considers all possible classification thresholds. The ROC curve is the plot of sensitivity against (1 − specificity). (1 − specificity) is also known as the False Positive Rate, and sensitivity is also known as the True Positive Rate. The Area Under the ROC (Receiver Operating Characteristic) curve is the probability that a classifier will be more confident that a randomly chosen positive example is actually positive than that a randomly chosen negative example is positive.
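
As a minimal sketch, scikit-learn's roc_auc_score computes this metric from true labels and predicted scores; the values below are hypothetical.

from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]                   # actual labels
y_scores = [0.1, 0.4, 0.35, 0.8]        # classifier confidence scores
print(roc_auc_score(y_true, y_scores))  # 0.75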

B

Named after the eighteenth-century English statistician and Presbyterian minister Thomas Bayes, Bayes' Theorem is used to calculate conditional probability. Conditional probability is the probability of an event 'B' occurring given that a related event 'A' has already occurred, P(B|A). For example, let us say a clinic wants to study cancer among the patients visiting it. A represents the event "Person has cancer" and B represents the event "Person is a smoker".

The clinic wishes to calculate the probability that a patient has cancer given that they smoke, P(A|B). To do so, it uses Bayes' Theorem (also known as Bayes' rule), which is as follows: P(A|B) = (P(B|A) P(A)) / P(B). That is, the proportion of smokers who have cancer is equal to the proportion of cancer patients who smoke, multiplied by the proportion of cancer patients, divided by the proportion of smokers. This is useful for working with false positives. The theorem also makes it easier to update a probability based on new data, which makes it valuable in the many applications where data accumulates continuously.
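
A minimal Python sketch of the formula, with purely hypothetical proportions for the clinic example:

p_cancer = 0.01                 # P(A): proportion of patients with cancer
p_smoker = 0.20                 # P(B): proportion of patients who smoke
p_smoker_given_cancer = 0.60    # P(B|A): proportion of cancer patients who smoke

p_cancer_given_smoker = p_smoker_given_cancer * p_cancer / p_smoker  # P(A|B)
print(p_cancer_given_smoker)    # 0.03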

Bayesian statistics is a mathematical procedure that applies probabilities to statistical problems. It provides people the tools to update their beliefs on the evidence of new data. It is based on the use of Bayesian probabilities to summarize evidence.

An intercept or offset from an origin. Bias (also known as the bias term) is referred to as b or w0 in machine learning models. For example, bias is the b in the following formula: y' = b + w1x1 + w2x2 + … + wnxn. In machine learning, bias is also a learner's tendency to consistently learn the same wrong thing, whereas variance is the tendency to learn random things irrespective of the real signal. It is easy to avoid overfitting (variance) by falling into the opposite error of underfitting (bias).
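
A minimal Python sketch of the formula above, with hypothetical weights and features:

b = 1.0            # bias (intercept)
w = [0.5, -0.2]    # weights w1, w2
x = [3.0, 4.0]     # features x1, x2
y_prime = b + sum(wi * xi for wi, xi in zip(w, x))
print(y_prime)     # 1.0 + 1.5 - 0.8 = 1.7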

In general, it refers to the ability to work with collections of data that had been impractical before because of their volume, velocity, and variety (“the three Vs, or the four Vs if we include veracity”). A key driver of this new ability has been easier distribution of storage and processing across networks of inexpensive commodity hardware using technology such as Hadoop instead of requiring larger, more powerful individual computers. But it’s not the amount of data that’s important, it’s how organizations use this large amount of data to generate insights. Companies use various tools, techniques and resources to make sense of this data to derive effective business strategies.

Binary variables are those variables, which can have only two unique values. For example, a variable "Smoking Habit" can contain only two values like "Yes" and "No".

This is a Python library that extends the capabilities of NumPy and Pandas to distributed and streaming data. One can use it to access data from a wide number of sources such as Bcolz, MongoDB, SQLAlchemy, Apache Spark, PyTables, etc.

One can generate attractive, interactive, 3D graphics, and web applications with this Python library. It is particularly useful for applications with “live” data (in streaming).

Business analytics mainly refers to the practical methodology that an organization uses to extract insights from its data. The methodology focuses on statistical analysis of the data.

Business intelligence refers to a set of strategies, applications, data and technologies used by an organization for data collection, analysis and generating insights in order to derive strategic business opportunities.

C

Categorical variables (or nominal variables) are those variables with discrete qualitative values. For example, names of cities and countries are categorical.

Chi-square is "a statistical method used to test whether the classification of data can be ascribed to chance or to some underlying law" (Wordpanda). The chi-square test "is an analysis technique used to estimate whether two variables in a cross tabulation are correlated".
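
For example, SciPy's chi2_contingency runs the test on a cross tabulation; the 2x2 table below is hypothetical.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross tabulation: rows = smoking habit, columns = diagnosis
observed = np.array([[20, 30],
                     [25, 25]])
chi2, p_value, dof, expected = chi2_contingency(observed)
print(chi2, p_value)  # a small p-value would suggest the variables are correlated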

This is a supervised learning method where the output variable is a category, such as "Male" or "Female", or "Yes" or "No". Deciding whether an email message is spam or not classifies it between two categories, and analysis of data about movies might lead to classification of them among several genres. Examples of classification algorithms are Logistic Regression, Decision Tree, K-NN, SVM, etc.

Clustering is an unsupervised learning method used to discover the inherent groupings in the data. For example, grouping customers based on their purchasing behavior, which is then used to segment the customers. Afterwards, companies can use the appropriate marketing tactics for each segment to generate more profit. Examples of clustering algorithms: K-Means, hierarchical clustering, etc.

A number or algebraic symbol prefixed as a multiplier to a variable or unknown quantity. When graphing an equation such as y = 3x + 4, the coefficient of x determines the line's slope. Discussions of statistics often mention specific coefficients for specific tasks such as the correlation coefficient, Cramer's coefficient, and the Gini coefficient.

Also, natural language processing, NLP. A branch of computer science for analyzing texts in human languages (for example, English or Mandarin) to convert them into structured data that you can use to drive program logic. Early efforts focused on translating one language to another or accepting complete sentences as queries to databases; modern efforts often analyze documents and other data (for example, tweets) to extract potentially valuable information.

A range specified around an estimate to indicate margin of error, combined with a probability that a value will fall in that range. The field of statistics offers specific mathematical formulas to calculate confidence intervals.
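
As a sketch, SciPy can compute a 95% confidence interval for a sample mean using the t distribution; the sample values below are hypothetical.

import numpy as np
from scipy import stats

sample = np.array([5.1, 4.9, 5.3, 5.0, 4.8, 5.2])
mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean
low, high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)  # df = n - 1
print(low, high)         # 95% confidence interval for the population mean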

A confusion matrix is a table that is often used to describe the performance of a classification model. It is an N * N matrix, where N is the number of classes, formed by tabulating the model's predicted classes against the actual classes. The false negatives correspond to type II errors, whereas the false positives correspond to type I errors. The following table shows the confusion matrix for a two-class classifier.

                     Predicted Negative    Predicted Positive
Actual Negative              a                     b
Actual Positive              c                     d

The entries in the confusion matrix have the following meaning in the context of our study: a is the number of correct predictions that an instance is negative, b is the number of incorrect predictions that an instance is positive, c is the number of incorrect predictions that an instance is negative, and d is the number of correct predictions that an instance is positive.
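
scikit-learn can build this table directly from labels; a small hypothetical example:

from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # actual classes
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]   # predicted classes
# Rows are actual classes, columns are predicted classes
print(confusion_matrix(y_true, y_pred))
# [[3 1]
#  [1 3]]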

A variable whose value can be any of an infinite number of values, typically within a particular range. For example, if you can express age or size with a decimal number, then they are continuous variables. In a graph, the value of a continuous variable is usually expressed as a line plotted by a function. Compare discrete variable.

"The degree of relative correspondence between two sets of data." If sales go up when the advertising budget goes up, they correlate. The correlation coefficient is a measure of how closely the two data sets correlate. A correlation coefficient of 1 is a perfect correlation, 0.9 is a strong correlation, and 0.2 is a weak correlation. A coefficient of 0 would show no correlation. This value can also be negative, as when the incidence of a disease goes down when vaccinations go up. A correlation coefficient of -1 is a perfect negative correlation. Always remember, though, that correlation does not imply causation.

"A measure of the relationship between two variables whose values are observed at the same time; specifically, the average value of the two variables diminished by the product of their average values". "Whereas variance measures how a single variable deviates from its mean, covariance measures how two variables vary in tandem from their means".

When using data with an algorithm, "the name given to a set of techniques that divide up data into training sets and test sets. The training set is given to the algorithm, along with the correct answers and it becomes the set used to make predictions. The algorithm is then asked to make predictions for each item in the test set. The answers it gives are compared to the correct answers, and an overall score for how well the algorithm did is calculated".
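
scikit-learn automates this train/test scoring; a minimal sketch on its built-in Iris data set (used here only as a stand-in):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)  # 5 train/test splits
print(scores.mean())                         # overall score for the algorithm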

This low-level programming language focuses on software such as the components of an operating system or network protocols. It is frequently used in embedded systems and sensor infrastructures. Although it can be a complicated language for beginners, it has huge potential. It has very useful Machine Learning libraries such as LibSVM, Shark and MLpack.

D

A specialist in the management of data. "Data engineers are the ones that take the messy data... and build the infrastructure for real, tangible analysis. They run ETL software, marry data sets, enrich and clean all that data that companies have been storing for years".

Generally, the use of computers to analyze large data sets to look for patterns, allowing people to make business decisions. Data mining is the study of extracting useful information from structured and unstructured data. Data mining is used for market analysis, determining customer purchase patterns, financial planning, fraud detection, etc.

Data science is a combination of data analysis, algorithm development, statistics and software engineering in order to solve analytical problems. Data science work often requires knowledge of both statistics and software engineering. The main goal is the use of data to generate business value.

Also, data munging. The conversion of data, often using scripting languages, to make it easier to work with. This is a very time-consuming task.

A decision tree is a type of supervised learning algorithm (having a pre-defined target variable) that is mostly used in classification problems. It works for both categorical and continuous input and output variables. In this technique, we split the population (or sample) into two or more homogeneous sets (or sub-populations) based on the most significant splitter/differentiator among the input variables.
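
A minimal scikit-learn sketch on its built-in Iris data set (used here only as a stand-in):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)  # splits the sample into homogeneous sets
print(tree.predict(X[:5]))                            # predicted classes for the first five rows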

Typically, a multi-level algorithm that gradually identifies things at higher levels of abstraction. For example, the first level may identify certain lines, then the next level identifies combinations of lines as shapes, and then the next level identifies combinations of shapes as specific objects. As you might guess from this example, deep learning is popular for image classification.

Deep Learning is associated with a machine-learning algorithm (Artificial Neural Network, ANN) which uses the concept of the human brain to facilitate the modeling of arbitrary functions. ANN requires a vast amount of data and this algorithm is highly flexible when it comes to modelling multiple outputs simultaneously.

This library is written for Java and Scala and is dedicated to Deep Learning. It offers an environment in which developers can create and train AI models.

The value of a dependent variable "depends" on the value of the independent variable. If you're measuring the effect of different sizes of an advertising budget on total sales, then the advertising budget figure is the independent variable and total sales is the dependent variable.

This consists of the analysis of historical data, and data collected in real time, in order to generate insights about how business strategies, for example marketing campaigns, have been working.

Also, dimensionality reduction. "We can use a technique called principal component analysis to extract one or more dimensions that capture as much of the variation in the data as possible." For this purpose, linear algebra is involved; "broadly speaking, linear algebra is about translating something residing in an m-dimensional space into a corresponding shape in an n-dimensional space."

A variable whose potential values must be one of a specific number of values. If someone rates a movie with between one and five stars, with no partial stars allowed, the rating is a discrete variable. In a graph, the distribution of values for a discrete variable is usually expressed as a histogram.

E

EDA, or exploratory data analysis, is a phase of the data science pipeline in which the focus is to understand insights of the data through visualization or statistical analysis. The steps involved in EDA include variable identification (identifying the data type and category of each variable), univariate analysis and multivariate analysis.

The purpose of an evaluation metric is to measure the quality of a statistical / machine learning model. For example, below are a few evaluation metrics: AUC-ROC score, F-score and log-loss.

F

The machine learning expression for a piece of measurable information about something. If you store the age, annual income, and weight of a set of people, you are storing three features about them. In other areas of the IT world, people may use the terms property, attribute, or field instead of "feature".

Feature Selection is a process of choosing those features that are required to explain the predictive power of a statistical model, and dropping out irrelevant features. This can be done by either filtering out less useful features or by combining features to make a new one.
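
As a sketch, scikit-learn's SelectKBest filters out the less useful features; its built-in Iris data set is used here only as a stand-in.

from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)
selector = SelectKBest(score_func=f_classif, k=2)  # keep the 2 most informative features
X_selected = selector.fit_transform(X, y)
print(X_selected.shape)                            # (150, 2): two features remain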

G

"General Architecture for Text Engineering", is an open source, Java-based framework for natural language processing tasks. The framework lets you pipeline other tools designed to be plugged into it. The project is based at the UK's University of Sheffield.

H

Hadoop is an open-source project from the Apache Foundation, introduced in 2006 and developed in Java. Its objective is to offer a working environment appropriate for the demands of Big Data (the 4 V's). As such, Hadoop is designed to work with large Volumes of data, both structured and unstructured (Variety), and to process them in a secure and efficient way (Veracity and Velocity).

To achieve this, it distributes both the storage and processing of information between various computers working together in "clusters". These clusters have one or more master nodes charged with managing the distributed files where the information is stored in different blocks, as well as coordinating and executing the different tasks among the cluster's members. As such, it is a highly scalable system that also offers software "redundancy".

A graphical representation of the distribution of a set of numeric data, usually a vertical bar graph.

A practical and non-optimal solution to a problem, which is sufficient for making progress or for learning from.

A synthetic layer in a neural network between the input layer (the features) and the output layer (the prediction). A neural network contains one or more hidden layers.

This refers to examples intentionally not used ("held out") during training. The validation data set and test data set are examples of holdout data. Holdout data helps evaluate your model's ability to generalize to data other than the data it was trained on. The loss on the holdout set provides a better estimate of the loss on an unseen data set than does the loss on the training set.
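
A common way to hold out data is scikit-learn's train_test_split; a minimal sketch on its built-in Iris data set:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print(len(X_train), len(X_test))  # 120 training examples, 30 held out for evaluation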

A boundary that separates a space into two subspaces. For example, a line is a hyperplane in two dimensions and a plane is a hyperplane in three dimensions. More typically, in machine learning, a hyperplane is the boundary separating a high-dimensional space. Kernel Support Vector Machines use hyperplanes to separate positive classes from negative classes, often in a very high-dimensional space.

I

Imputation is a technique used for handling missing values in the data. This is done either by statistical metrics like mean/mode imputation or by machine learning techniques like kNN imputation.
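
For example, scikit-learn's SimpleImputer performs mean imputation; the small array below is hypothetical.

import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [7.0, np.nan]])
imputer = SimpleImputer(strategy="mean")   # "most_frequent" would give mode imputation
print(imputer.fit_transform(X))            # missing values replaced by column means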

In inferential statistics, we try to hypothesize about the population by only looking at a sample of it. For example, before releasing a drug in the market, internal tests are done to check if the drug is viable for release. But here we cannot check with the whole population for viability of the drug, so we do it on a sample which best represents the population.

The concept "data insight" means the knowledge or deep understanding of the data in a way that can guide correct and productive business actions. "Data-driven" companies are those that make decisions based on data, in particular, on data insights (data-based decisions). LUCA solutions help companies become Data Driven companies.

The degree to which a model's predictions can be readily explained. Deep models are often uninterpretable; that is, a deep model's different layers can be hard to decipher.

J

This is one of the most commonly used programming languages for Machine Learning, due to its consistency, clarity and flexibility. It is an open-source language that is compatible with any platform and practically any application. It features a large number of libraries, some of which are focused on the world of Machine Learning, such as Spark+MLlib, Mahout and Deeplearning4j.

K

A popular Python machine learning API. Keras runs on several deep learning frameworks, including TensorFlow, where it is made available as tf.keras.
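
A minimal sketch of a tf.keras model with one hidden layer; the layer sizes below are arbitrary choices, not part of the definition.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),                       # input layer: 4 features
    tf.keras.layers.Dense(8, activation="relu"),      # hidden layer
    tf.keras.layers.Dense(1, activation="sigmoid"),   # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()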

It is a type of unsupervised algorithm which solves the clustering problem. It is a procedure which follows a simple and easy way to classify a given data set into a certain number of clusters (assume k clusters). Data points inside a cluster are homogeneous, and heterogeneous with respect to other clusters.
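
A minimal scikit-learn sketch, grouping hypothetical customer data into k = 2 clusters:

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customers: [annual spend, number of purchases]
X = np.array([[100, 2], [120, 3], [110, 2],
              [900, 30], [950, 28], [880, 25]])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # cluster assignment for each customer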

K nearest neighbors is a simple algorithm that stores all available cases and classifies new cases by a majority vote of their k neighbors. The case is assigned to the class most common amongst its K nearest neighbors, measured by a distance function.
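
A minimal scikit-learn sketch on its built-in Iris data set (used here only as a stand-in):

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)  # majority vote of the 5 nearest neighbors
print(knn.predict(X[:3]))                            # predicted classes for three samples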

Kurtosis is a descriptor of the shape of a probability distribution and is explained in terms of the central peak. Higher values of kurtosis indicate a higher, sharper peak; lower values indicate a lower, less distinct peak.

L

A library is nothing more than a collection of modules (see modules). Python's standard library is very broad and offers a wide range of modules that can carry out a number of functions. This includes modules written in C that allow access to system functionalities such as accessing files (file I/O). On Python’s website, one can find all the available modules in "The Python Standard Library". The installers for Python for Windows normally include the complete standard library, as well as some additional components. However, when installing Python packages, one will need specific installers.

This C++ library can easily be used to work with Support Vector Machines (SVMs). It is used to solve classification and regression problems.

In data mining, Lift compares the frequency of an observed pattern with how often you’d expect to see that pattern just by chance. If the lift is near 1, then there’s a good chance that the pattern you observed is occurring just by chance. The larger the lift, the more likely that the pattern is 'real'.
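
A minimal Python sketch with hypothetical transaction counts:

n = 1000            # total transactions
p_a = 200 / n       # P(item A in a basket)
p_b = 150 / n       # P(item B in a basket)
p_ab = 60 / n       # P(A and B bought together)

lift = p_ab / (p_a * p_b)
print(lift)         # 2.0: the pattern occurs twice as often as chance would suggest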

A branch of mathematics dealing with vector spaces and operations within them such as addition and multiplication. "Linear algebra is designed to represent systems of linear equations. Linear equations are designed to represent linear relationships, where one entity is written to be a sum of multiples of other entities. In the shorthand of linear algebra, a linear relationship is represented as a linear operator—a matrix".[zheng].

A technique to look for a linear relationship (that is, one where the relationship between two varying amounts, such as price and sales, can be expressed with an equation that you can represent as a straight line on a graph) by starting with a set of data points that don't necessarily line up nicely. This is done by computing the "least squares" line: the one that has, on an x-y graph, the smallest possible sum of squared distances to the actual data point y values. Statistical software packages and even typical spreadsheet packages offer automated ways to calculate this.
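
For example, NumPy's polyfit computes the least-squares line; the data points below are hypothetical.

import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
slope, intercept = np.polyfit(x, y, 1)  # least-squares fit of a straight line
print(slope, intercept)                 # coefficients of y = slope * x + intercept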

An acronym of List Processor, this language was created by John McCarthy, now seen by many as the father of Artificial Intelligence. His idea was to optimize the functioning and use of the resources that the computers of the era had available to them. The new language, based in part on the already existing Fortran, used some innovative techniques such as data "trees" (a hierarchical data structure) and "symbolic computation" (also known as "computer algebra"), from which symbolic programming would later be born. Lisp did not take long to become the favourite language of the world of Artificial Intelligence.

The logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number: if b^y = x, then y is the logarithm of x to base b. Working with the log of one or more of a model's variables, instead of their original values, can make it easier to model relationships with linear functions instead of non-linear ones. Linear functions are typically easier to use in data analysis.

A model similar to linear regression but where the potential results are a specific set of categories instead of being continuous.
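
A minimal scikit-learn sketch on its built-in breast cancer data set (a binary target, used here only as a stand-in):

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)
print(model.predict(X[:5]))        # predicted categories
print(model.predict_proba(X[:5]))  # predicted class probabilities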

M

Machine Learning refers to the techniques involved in dealing with vast data in the most intelligent fashion (by developing algorithms) to derive actionable insights. In these techniques, we expect the algorithms to learn by themselves without being explicitly programmed.

This Java library is very similar to Python’s NumPy. It focuses on mathematical, algebraic and statistical expressions.

A commercial computer language and environment popular for visualization and algorithm development.

This Python library is used to create a variety of graphics: from histograms to line graphs and heat maps. It also allows you to use LaTeX commands to add mathematical expressions to a graph.

Mean: The average value, although technically that is known as the "arithmetic mean". (Other means include the geometric and harmonic means.)

Median: When values are sorted, the value in the middle, or the average of the two in the middle if there are an even number of values.

Mode: "The value that occurs most often in a sample of data. Like the median, the mode cannot be directly calculate".

This C++ library aims to make it quick to get Machine Learning algorithms up and running. The algorithms can be integrated into larger-scale solutions with simple lines of code.

Python uses modules to store definitions (of instructions or variables) in a file so that they can later be used in a script or in an interactive interpreter session. Therefore, one doesn't have to define these things each time. The main advantage of dividing a program into modules is that one can reuse them in other programs or modules. For this, one has to import the relevant modules in each situation. Python comes with a collection of standard modules that one can use as a starting point for a new program or as examples to begin learning with.

N

"A collection of classification algorithms based on Bayes Theorem. It is not a single algorithm but a family of algorithms that all share a common principle, that every feature being classified is independent of the value of any other feature".

A model that, taking inspiration from the brain, is composed of layers (at least one of which is hidden) consisting of simple connected units or neurons followed by nonlinearities. Neural Networks are used in deep learning research to match images to features and much more. What makes Neural Networks special is their use of a hidden layer of weighted functions called neurons, with which you can effectively build a network that maps a lot of other functions. Without a hidden layer of functions, Neural Networks would be just a set of simple weighted functions.

Also, Gaussian distribution. A probability distribution which, when graphed, is a symmetrical bell curve with the mean value at the center. The standard deviation value affects the height and width of the graph. An important fact about this bell-shaped curve is that it can model many natural, social and psychological phenomena. These phenomena may be affected by random variables, but the summary statistics you calculate from your samples do, in fact, follow a normal distribution. The Gauss curve also makes the math easy.

Traditional database systems, known as RDBMSs, are largely dependent on rows, columns, schemas, and tables to retrieve and organize data stored in databases. To do this, they use the structured query language SQL. These systems have some problems working with Big Data, such as non-scalability, lack of flexibility and performance problems. NoSQL non-relational databases are much more flexible. They allow you to work with unstructured data, such as chat data, messaging, log data, user and session data, large data such as videos and images, as well as Internet of Things and device data. They are also designed for high storage volumes, through distributed data storage, and for fast information processing. Therefore, they are very scalable. They are also independent of the programming language. Many NoSQL databases are open source, so their cost is affordable, but in return this creates problems of lack of standardization and interoperability. Some NoSQL databases available on the market are Couchbase, Amazon DynamoDB, MongoDB and MarkLogic.

A portmanteau of Numerical + Python, NumPy is the main Python library for scientific computation. One of its most powerful traits is that it can work with arrays on "n" dimensions. It also offers basic linear algebra functions, the Fourier transform, advanced random number capabilities and tools that allow it to be integrated with other low-level languages such as Fortran, C and C++.

O

"Extreme values that might be errors in measurement and recording, or might be accurate reports of rare events".

A model of training data that, by taking too many of the data's quirks and outliers into account, is overly complicated and will not be as useful as it could be to find patterns in test data.

P

A Python library for data manipulation popular with data scientists. See also Python.

The perceptron is the simplest neural network, which approximates a single neuron with n binary inputs. It computes a weighted sum of its inputs and ‘fires’ if that weighted sum is zero or greater.
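
A minimal Python sketch with fixed, hypothetical weights:

def perceptron(inputs, weights, bias):
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if weighted_sum >= 0 else 0   # "fires" when the weighted sum is zero or greater

print(perceptron([1, 0, 1], weights=[0.5, -0.6, 0.2], bias=-0.4))  # 1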

Perl is a scripting language rooted in pre-Linux UNIX systems. Perl has long been used especially for data cleanup and enhancement tasks in text processing.

Pivot tables summarize long lists of data, without requiring you to write a single formula or copy a single cell. However, the most notable feature of pivot tables is that you can arrange them dynamically. The process of rearranging your table is known as pivoting your data: you're turning the same information around to examine it from different angles.
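
pandas provides this through pivot_table; the small sales table below is hypothetical.

import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "product": ["A", "B", "A", "B"],
    "revenue": [100, 150, 80, 120],
})
# Pivot the same data: regions as rows, products as columns
print(sales.pivot_table(index="region", columns="product", values="revenue", aggfunc="sum"))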

A distribution of independent events, usually over a period of time or space, used to help predict the probability of an event. Like the binomial distribution, this is a discrete distribution. Named for the early 19th-century French mathematician Siméon Denis Poisson. This distribution is appropriate when events occur independently, the rate of their occurrence is constant, they cannot take place simultaneously, and the probability of occurrence of an event in a small subinterval is proportional to its length. It also arises as an approximation to the binomial distribution when the number of trials is sufficiently large relative to the number of events to be predicted.

Precision is a metric for classification models that answers the following question: out of all the examples the model labeled positive, how many are actually positive? It can be represented as: Precision = TP / (TP + FP). It represents how near the positive predictions are to the actual positive cases, and is also known as the "Positive Predictive Value".

Recall is described as the measure of how many of the actual positive cases the model correctly identifies. It can be represented as: Recall = TP / (TP + FN). It is also known as the "True Positive Rate" or sensitivity. Both precision and recall are therefore based on an understanding and measure of relevance. High precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results.
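
scikit-learn computes both metrics directly; the label lists below are hypothetical.

from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1, 0, 1]
print(precision_score(y_true, y_pred))  # TP / (TP + FP) = 3 / 5 = 0.6
print(recall_score(y_true, y_pred))     # TP / (TP + FN) = 3 / 4 = 0.75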

The analysis of data to predict future events, typically to aid in business planning. This incorporates predictive modeling and other techniques. Machine learning can be considered as a set of algorithms to help implement predictive analytics. The more business-oriented spin of "predictive analytics" makes it a popular buzz phrase in marketing literature. See also predictive modeling, machine learning and SPSS.

It consists of the analysis of historical business data in order to predict future behaviors that help with better planning. To do this, predictive modeling techniques are used, among others. These techniques are based on statistical algorithms and machine learning.

This is one of the most common algorithms for dimensionality reduction and feature extraction. It is a technique that uses an orthogonal transformation to convert a collection of observations of possibly correlated variables into a reduced collection of uncorrelated variables known as "principal components". (See "feature".)
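
A minimal scikit-learn sketch, reducing the four features of its built-in Iris data set to two principal components:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)        # 4 correlated features -> 2 uncorrelated components
print(pca.explained_variance_ratio_)    # share of the variation captured by each component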

The probability distribution of a discrete random variable is the set of all the possible values that this variable can take, together with their probabilities of occurrence. For discrete variables, the main probability distributions are the binomial, the Poisson and the hypergeometric (the latter for dependent events). For continuous variables, the most common distribution is the normal or Gaussian.

It is a programming language first released in 1991 that is widely used in data science. For beginners it is very easy to learn, but at the same time it is a very powerful language for advanced users, since it has specialized libraries for machine learning and graphics generation.

Q

R

An open-source programming language and environment for statistical computing and graph generation available for Linux, Windows, and Mac.

An algorithm used for regression or classification tasks that is based on a combination of predictive trees. "To classify a new object from an input vector, each of the trees in the forest is fed with that vector. Each tree offers a classification as a result, and we say it "votes" for that result. The forest chooses the classification that has the most votes among all the trees in the forest." The term "random forest" is a trademark registered by its authors.

It is a supervised learning method where the output variable is a real and continuous value, such as "height" or "weight". Regression consists of fitting a data set to a given model. Among the regression algorithms we can find linear regression, non-linear regression, least squares, Lasso, etc.

Based on studies of how to encourage learning in humans and rats through rewards and punishments. The algorithm learns by observing the world around it: its input is the feedback it gets from the outside world in response to its actions. Therefore, the system learns by trial and error.

It is a scripting language created in 1996. It is widely used among data scientists, but it is not as popular as Python, since Python offers more specialized libraries for the different tasks of Data Science.

S

A commercial statistical software suite that includes a programming language.

A variable is scalar (as opposed to vectorial) when it has a magnitude but no direction in space, such as volume or temperature.

This Python library is built upon NumPy, SciPy and matplotlib. It contains a large number of efficient tools for Machine Learning and statistical modeling such as classification algorithms, regression, clustering and dimensionality reduction.

A portmanteau of Scientific Python, SciPy is a Python library that is built upon the NumPy library of scientific computation. It is one of the most useful due to its large variety of high-level modules of science and engineering, such as the Discrete Fourier Transform, linear algebra, and optimization matrices.

This Python library is used to crawl the web. It is a very useful framework for obtaining specific data patterns. Starting from the URL of a website's homepage, it can crawl through the site's pages in order to collect information.

Scripting languages can be executed directly, without first being compiled into binary code as is required with languages such as Java and C. The syntax of scripting languages is much simpler than that of compiled languages, which makes programming and execution much easier. Some examples of this type of language are Python, Perl and Ruby.

This Python library, built upon matplotlib, is used to create attractive statistical graphics in Python. Its objective is to give greater prominence to visualizations within the tasks of exploring and interpreting data.

It consists of the relative correspondence between two data sets. If sales go up when the advertising budget increases, it means that both facts are correlated. The correlation coefficient measures to what extent two sets of data are correlated. A coefficient of value "1" implies a perfect correlation, 0.9 is a strong correlation and 0.2 a weak correlation. This value can also be negative, such as when the incidence of a disease is reduced by increasing the rate of vaccination against it. A coefficient "-1" is a perfect negative correlation. However, we must never forget that correlation does not imply causation.

This C++ library offers linear and non-linear optimization methods. It is based on kernel methods, neural networks and other advanced machine learning techniques. It is compatible with the majority of operating systems (OS).

When the operating system is accessed from the command line, we are using the console. In addition to scripting languages such as Perl and Python, it is common to use Linux-based tools such as grep, diff, split, comm, head and tail to perform data preparation and debugging tasks from the console.

This library works with Spark's APIs and, in Python, interoperates with NumPy. Spark speeds up MLlib's functionality, whose aim is to make machine learning scalable and easier.

Time series data that also includes geographic identifiers such as latitude-longitude pairs.

This Python module is used for statistical modeling. It allows users to explore data, make statistical estimates and carry out statistical tests. It offers an extensive list of descriptive statistics, as well as tests and graphical functions, for different types of data and estimations.

It consists of dividing the population into homogeneous groups, or strata, and taking a random sample from each of them. Strata is also the name of an O'Reilly conference on Big Data, Data Science and related technologies.

In supervised learning, the algorithms work with "tagged" (labeled) data, trying to find a function that, given the input variables, assigns them the appropriate output label. The algorithm is trained with "historical" data and thus "learns" to assign the appropriate output label to a new value; that is, it predicts the output value. Supervised learning is often used in classification problems, such as identifying digits, diagnosing, or detecting identity fraud.

A support vector machine is a supervised machine learning algorithm that is used for both classification and regression tasks. They are based on the idea of finding the hyperplane that best divides the data set into two differentiated classes. Intuitively, the farther away from the hyperplane our values are, the more certain we are that they are correctly classified. However, sometimes it is not easy to find the hyperplane that best classifies the data and it is necessary to jump to a larger dimension (from the plane to 3 dimensions or even n dimensions). SVMs are used for tasks of text classification, spam detection, sentiment analysis, etc. They are also used for image recognition.
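
A minimal scikit-learn sketch; the RBF kernel lets the separating hyperplane be found in a higher-dimensional space. The built-in Iris data set is used only as a stand-in.

from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
svm = SVC(kernel="rbf").fit(X, y)  # kernel trick: implicit jump to a larger dimension
print(svm.predict(X[:3]))          # predicted classes for three samples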

This Python library is used for symbolic computation, including arithmetic, calculus, algebra, discrete mathematics and quantum physics. It can also format the results of these calculations in LaTeX code.

An expert system is one that uses human understanding, captured in a computer, in order to solve problems that human experts would normally solve. Well-designed systems imitate the reasoning processes that experts use to solve specific problems. These systems can work better than any human expert when making individual decisions and can be used by inexperienced humans to improve their problem-solving skills.

T

They are a variation of the normal distributions. They were discovered by William Gosset in 1908 and published under the pseudonym "Student". He needed a distribution that he could use when the sample size was small and the variance was unknown and had to be estimated from the data. The t distributions are used to take into account the added uncertainty that results from this estimate.

A time series is a sequence of measurements spaced at time intervals that are not necessarily equal. Thus, a time series consists of a measurement (for example, atmospheric pressure or the price of a stock) accompanied by a timestamp.

U

The "Unstructured Information Management Architecture" was developed by IBM as an environment for analyzing unstructured data, especially natural language. OASIS UIMA is a specification that standardizes this environment and Apache UIMA is an open source implementation of this. This environment allows you to work with different tools designed to connect with it.

Unsupervised learning occurs when "labeled" data is not available for training. We only know the input data, but there is no output data that corresponds to a given input. Therefore, we can only describe the structure of the data and try to find some kind of organization that simplifies the analysis. These methods therefore have an exploratory character.

V

The mathematical definition of a vector is "a value that has a magnitude and a direction, represented by an arrow whose length represents the magnitude and whose orientation in space represents the direction". However, data scientists use the term in this sense: "ordered set of real numbers denoting a distance on a coordinate axis. These numbers can represent characteristics of a person, movie, product or whatever we want to model. This mathematical representation of the variables allows working with software libraries that apply advanced mathematical operations to the data. A vector space is a set of vectors, for example, a matrix.

W

Weka is a set of machine learning algorithms for performing data analytics tasks. The algorithms can be applied directly to a data set or called from your own Java code. Weka offers tools for data pre-processing, classification, regression, clustering, association rules and visualization. It is also appropriate for the development of new machine learning models. Weka is open-source software developed by the University of Waikato in New Zealand.

X

Y

Z