
Semi-supervised learning

Lecture




Semi-supervised learning

For some of the training examples the pair «situation, required decision» is specified, while for the others only the «situation» is given.

 

Semi-supervised learning is a class of supervised learning tasks and techniques that also make use of unlabeled data for training - typically a small amount of labeled data with a large amount of unlabeled data. Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). Many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce considerable improvement in learning accuracy. The acquisition of labeled data for a learning problem often requires a skilled human agent (e.g. to transcribe an audio segment) or a physical experiment (e.g. determining the 3D structure of a protein or determining whether there is oil at a particular location). The cost associated with the labeling process thus may render a fully labeled training set infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value. Semi-supervised learning is also of theoretical interest in machine learning and as a model for human learning.

As in the supervised learning framework, we are given a set of $l$ independently identically distributed examples $x_1,\dots,x_l \in X$ with corresponding labels $y_1,\dots,y_l \in Y$. Additionally, we are given $u$ unlabeled examples $x_{l+1},\dots,x_{l+u} \in X$. Semi-supervised learning attempts to make use of this combined information to surpass the classification performance that could be obtained either by discarding the unlabeled data and doing supervised learning or by discarding the labels and doing unsupervised learning.
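To make the setup concrete, here is a minimal sketch (in Python with NumPy, both assumptions of this illustration) of how the $l$ labeled and $u$ unlabeled examples might be laid out; marking unlabeled points with -1 follows the convention of scikit-learn's semi-supervised estimators and is not part of the definition above.

# A minimal sketch of the semi-supervised data layout described above:
# l labeled examples (x_i, y_i) and u unlabeled examples x_{l+1..l+u}.
import numpy as np

rng = np.random.default_rng(0)

l, u = 10, 90                              # few labels, many unlabeled points
X_labeled = rng.normal(size=(l, 2))        # x_1 ... x_l
y_labeled = rng.integers(0, 2, size=l)     # y_1 ... y_l
X_unlabeled = rng.normal(size=(u, 2))      # x_{l+1} ... x_{l+u}

# Combined view often used by semi-supervised algorithms:
X = np.vstack([X_labeled, X_unlabeled])
y = np.concatenate([y_labeled, -np.ones(u, dtype=int)])  # -1 = "no label"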

Semi-supervised learning may refer to either transductive learning or inductive learning. The goal of transductive learning is to infer the correct labels for the given unlabeled data $x_{l+1},\dots,x_{l+u}$ only. The goal of inductive learning is to infer the correct mapping from $X$ to $Y$.

Intuitively, we can think of the learning problem as an exam and labeled data as the few example problems that the teacher solved in class. The teacher also provides a set of unsolved problems. In the transductive setting, these unsolved problems are a take-home exam and you want to do well on them in particular. In the inductive setting, these are practice problems of the sort you will encounter on the in-class exam.

It is unnecessary (and, according to Vapnik's principle, imprudent) to perform transductive learning by way of inferring a classification rule over the entire input space; however, in practice, algorithms formally designed for transduction or induction are often used interchangeably.

 

Contents

  
  • 1 Assumptions used in semi-supervised learning
    • 1.1 Smoothness assumption
    • 1.2 Cluster assumption
    • 1.3 Manifold assumption
  • 2 History
  • 3 Methods for semi-supervised learning
    • 3.1 Generative models
    • 3.2 Low-density separation
    • 3.3 Graph-based methods
    • 3.4 Heuristic approaches
  • 4 Semi-supervised learning in human cognition

 

Assumptions used in semi-supervised learning

In order to make any use of unlabeled data, we must assume some structure to the underlying distribution of data. Semi-supervised learning algorithms make use of at least one of the following assumptions. 

Smoothness assumption

Points which are close to each other are more likely to share a label. This is also generally assumed in supervised learning and yields a preference for geometrically simple decision boundaries. In the case of semi-supervised learning, the smoothness assumption additionally yields a preference for decision boundaries in low-density regions, so that there are fewer points close to each other but in different classes.

Cluster assumption

The data tend to form discrete clusters, and points in the same cluster are more likely to share a label (although data sharing a label may be spread across multiple clusters). This is a special case of the smoothness assumption and gives rise to feature learning with clustering algorithms.
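As a hedged illustration of how the cluster assumption can be exploited, the following sketch clusters labeled and unlabeled points together and then assigns each cluster the majority label of its labeled members ("cluster-then-label"). The use of k-means and the -1 marker for unlabeled points are choices made for this example, not prescribed by the text.

# Cluster labeled + unlabeled points together, then give every point in a
# cluster the majority label among that cluster's labeled members.
import numpy as np
from sklearn.cluster import KMeans

def cluster_then_label(X, y, n_clusters=2, random_state=0):
    """y uses -1 for unlabeled points; returns a fully labeled copy of y."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10,
                      random_state=random_state).fit_predict(X)
    y_out = y.copy()
    for c in range(n_clusters):
        members = clusters == c
        known = y[members][y[members] != -1]
        if known.size:                               # majority vote of labeled members
            majority = np.bincount(known).argmax()
            y_out[members & (y_out == -1)] = majority
    return y_out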

Manifold assumption

The data lie approximately on a manifold of much lower dimension than the input space. In this case we can attempt to learn the manifold using both the labeled and unlabeled data to avoid the curse of dimensionality. Then learning can proceed using distances and densities defined on the manifold.

The manifold assumption is practical when high-dimensional data are being generated by some process that may be hard to model directly, but which only has a few degrees of freedom. For instance, human voice is controlled by a few vocal folds, and images of various facial expressions are controlled by a few muscles. We would like in these cases to use distances and smoothness in the natural space of the generating problem, rather than in the space of all possible acoustic waves or images respectively.

History

The heuristic approach of self-training (also known as self-learning or self-labeling) is historically the oldest approach to semi-supervised learning, with examples of applications starting in the 1960s (see for instance Scudder (1965)).

The transductive learning framework was formally introduced by Vladimir Vapnik in the 1970s. Interest in inductive learning using generative models also began in the 1970s. A probably approximately correct learning bound for semi-supervised learning of a Gaussian mixture was demonstrated by Ratsaby and Venkatesh in 1995.

Semi-supervised learning has recently become more popular and practically relevant due to the variety of problems for which vast quantities of unlabeled data are available—e.g. text on websites, protein sequences, or images. For a review of recent work see a survey article by Zhu (2008).

Methods for semi-supervised learning

Generative models

Generative approaches to statistical learning first seek to estimate $p(x|y)$, the distribution of data points belonging to each class. The probability $p(y|x)$ that a given point $x$ has label $y$ is then proportional to $p(x|y)p(y)$ by Bayes' rule. Semi-supervised learning with generative models can be viewed either as an extension of supervised learning (classification plus information about $p(x)$) or as an extension of unsupervised learning (clustering plus some labels).

Generative models assume that the distributions take some particular form $p(x|y,\theta)$ parameterized by the vector $\theta$. If these assumptions are incorrect, the unlabeled data may actually decrease the accuracy of the solution relative to what would have been obtained from labeled data alone. However, if the assumptions are correct, then the unlabeled data necessarily improves performance.

The unlabeled data are distributed according to a mixture of individual-class distributions. In order to learn the mixture distribution from the unlabeled data, it must be identifiable, that is, different parameters must yield different summed distributions. Gaussian mixture distributions are identifiable and commonly used for generative models.

The parameterized joint distribution can be written as $p(x,y|\theta) = p(y|\theta)p(x|y,\theta)$ by using the chain rule. Each parameter vector $\theta$ is associated with a decision function $f_\theta(x) = \underset{y}{\operatorname{argmax}}\ p(y|x,\theta)$. The parameter is then chosen based on fit to both the labeled and unlabeled data, weighted by $\lambda$:

\underset{\Theta}{\operatorname{argmax}}\left( \log p(\{x_i, y_i\}_{i=1}^l | \theta) + \lambda \log p(\{x_i\}_{i=l+1}^{l+u} | \theta) \right)
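The sketch below evaluates this weighted objective for an assumed one-dimensional, two-class Gaussian mixture parameterized by class priors, means, and standard deviations; it only computes the criterion for a given $\theta$, leaving the maximization itself (for example by EM) aside.

# Evaluate log p(labeled | theta) + lambda * log p(unlabeled | theta)
# for a two-class, one-dimensional Gaussian mixture (illustrative parameterization).
import numpy as np
from scipy.stats import norm

def objective(theta, x_l, y_l, x_u, lam=1.0):
    priors, means, stds = theta            # arrays of length 2 (one entry per class)
    # log p({x_i, y_i} | theta): each labeled point uses its own class density
    ll_labeled = np.sum(np.log(priors[y_l])
                        + norm.logpdf(x_l, means[y_l], stds[y_l]))
    # log p({x_i} | theta): unlabeled points use the full mixture (sum over classes)
    mix = sum(priors[c] * norm.pdf(x_u, means[c], stds[c]) for c in (0, 1))
    ll_unlabeled = np.sum(np.log(mix))
    return ll_labeled + lam * ll_unlabeled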

Low-density separation

Another major class of methods attempts to place boundaries in regions where there are few data points (labeled or unlabeled). One of the most commonly used algorithms is the transductive support vector machine, or TSVM (which, despite its name, may be used for inductive learning as well). Whereas support vector machines for supervised learning seek a decision boundary with maximal margin over the labeled data, the goal of TSVM is a labeling of the unlabeled data such that the decision boundary has maximal margin over all of the data. In addition to the standard hinge loss $(1-yf(x))_+$ for labeled data, a loss function $(1-|f(x)|)_+$ is introduced over the unlabeled data by letting $y = \operatorname{sign}{f(x)}$. TSVM then selects $f^*(x) = h^*(x) + b$ from a reproducing kernel Hilbert space $\mathcal{H}$ by minimizing the regularized empirical risk:

f^* = \underset{f}{\operatorname{argmin}}\left( 
\displaystyle \sum_{i=1}^l(1-y_if(x_i))_+ + \lambda_1 ||h||_\mathcal{H}^2 + \lambda_2 \sum_{i=l+1}^{l+u} (1-|f(x_i)|)_+
\right)

An exact solution is intractable due to the non-convex term $(1-|f(x)|)_+$, so research has focused on finding useful approximations.
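As a rough illustration, the following sketch evaluates the TSVM regularized risk above for a linear decision function f(x) = w·x + b (so that ||h||^2 reduces to ||w||^2); labels are assumed to be ±1, and no attempt is made at the non-convex minimization itself.

# Evaluate the TSVM objective for a candidate linear model (w, b).
import numpy as np

def tsvm_objective(w, b, X_l, y_l, X_u, lam1=1.0, lam2=0.1):
    """y_l must contain labels in {-1, +1}."""
    f_l = X_l @ w + b
    f_u = X_u @ w + b
    hinge_labeled = np.maximum(0.0, 1.0 - y_l * f_l).sum()    # (1 - y f(x))_+
    hat_unlabeled = np.maximum(0.0, 1.0 - np.abs(f_u)).sum()  # (1 - |f(x)|)_+
    return hinge_labeled + lam1 * (w @ w) + lam2 * hat_unlabeled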

Other approaches that implement low-density separation include Gaussian process models, information regularization, and entropy minimization (of which TSVM is a special case).

Graph-based methods

Graph-based methods for semi-supervised learning use a graph representation of the data, with a node for each labeled and unlabeled example. The graph may be constructed using domain knowledge or similarity of examples; two common methods are to connect each data point to its $k$ nearest neighbors or to examples within some distance $\epsilon$. The weight $W_{ij}$ of an edge between $x_i$ and $x_j$ is then set to $e^{-\|x_i - x_j\|^2/\epsilon}$.
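A small sketch of this construction, assuming a k-nearest-neighbor graph with Gaussian edge weights; the values of k and ε below are illustrative.

# Build a symmetric k-NN graph with weights exp(-||x_i - x_j||^2 / eps).
import numpy as np

def knn_graph(X, k=5, eps=1.0):
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # squared distances
    W = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(d2[i])[1:k + 1]              # skip the point itself
        W[i, neighbors] = np.exp(-d2[i, neighbors] / eps)
    return np.maximum(W, W.T)                               # symmetrize the graph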

Within the framework of manifold regularization [10], the graph serves as a proxy for the manifold. A term is added to the standard Tikhonov regularization problem to enforce smoothness of the solution relative to the manifold (in the intrinsic space of the problem) as well as relative to the ambient input space. The minimization problem becomes

\underset{f\in\mathcal{H}}{\operatorname{argmin}}\left(
\frac{1}{l}\displaystyle\sum_{i=1}^l V(f(x_i),y_i) +
\lambda_A ||f||^2_\mathcal{H} +
\lambda_I \int_\mathcal{M}||\nabla_\mathcal{M} f(x)||^2dp(x)
\right)

where $\mathcal{H}$ is a reproducing kernel Hilbert space and $\mathcal{M}$ is the manifold on which the data lie. The regularization parameters $\lambda_A$ and $\lambda_I$ control smoothness in the ambient and intrinsic spaces respectively. The graph is used to approximate the intrinsic regularization term. Defining the graph Laplacian $L = D - W$, where $D_{ii} = \sum_{j=1}^{l+u} W_{ij}$, and letting $\mathbf{f}$ be the vector $[f(x_1),\dots,f(x_{l+u})]$, we have

\mathbf{f}^T L \mathbf{f} = \frac{1}{2}\sum_{i,j=1}^{l+u} W_{ij}(f_i - f_j)^2 \approx \int_\mathcal{M}||\nabla_\mathcal{M} f(x)||^2 dp(x).
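The following numerical check illustrates the identity above on toy data: for a symmetric weight matrix W and L = D − W, the quadratic form fᵀLf equals half the weighted sum of squared differences.

# Verify f^T L f = (1/2) * sum_{i,j} W_ij (f_i - f_j)^2 on random toy data.
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
D = np.diag(W.sum(axis=1))
L = D - W                                    # graph Laplacian

f = rng.normal(size=6)                       # vector [f(x_1), ..., f(x_{l+u})]
quad = f @ L @ f
pairwise = 0.5 * np.sum(W * (f[:, None] - f[None, :]) ** 2)
assert np.isclose(quad, pairwise)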

The Laplacian can also be used to extend supervised learning algorithms such as regularized least squares and support vector machines (SVM) to their semi-supervised versions, Laplacian regularized least squares and Laplacian SVM.

Heuristic approaches

Some methods for semi-supervised learning are not intrinsically geared to learning from both unlabeled and labeled data, but instead make use of unlabeled data within a supervised learning framework. For instance, the labeled and unlabeled examples $x_1,\dots,x_{l+u}$ may inform a choice of representation, distance metric, or kernel for the data in an unsupervised first step. Then supervised learning proceeds from only the labeled examples.
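A hedged sketch of this two-step recipe, using PCA fit on all examples as the unsupervised representation step and logistic regression on the labeled examples as the supervised step; both model choices are illustrative, not prescribed by the text.

# Unsupervised step on all data, supervised step on labeled data only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def two_step_fit(X_labeled, y_labeled, X_unlabeled, n_components=2):
    pca = PCA(n_components=n_components).fit(np.vstack([X_labeled, X_unlabeled]))
    clf = LogisticRegression().fit(pca.transform(X_labeled), y_labeled)
    return pca, clf    # predict new points with clf.predict(pca.transform(X_new))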

Self-training is a wrapper method for semi-supervised learning. First a supervised learning algorithm is used to select a classifier based on the labeled data only. This classifier is then applied to the unlabeled data to generate more labeled examples as input for another supervised learning problem. Generally only the labels the classifier is most confident of are added at each step.
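The loop below is a minimal sketch of self-training under these assumptions: a logistic-regression base classifier, a fixed confidence threshold, and a bounded number of rounds (scikit-learn also ships a ready-made SelfTrainingClassifier). All three choices are illustrative.

# Iteratively add the classifier's most confident predictions to the labeled set.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_l, y_l, X_u, threshold=0.95, max_iter=10):
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), X_u.copy()
    clf = LogisticRegression()
    for _ in range(max_iter):
        clf.fit(X_l, y_l)
        if len(X_u) == 0:
            break
        proba = clf.predict_proba(X_u)
        confident = proba.max(axis=1) >= threshold       # keep only confident labels
        if not confident.any():
            break
        X_l = np.vstack([X_l, X_u[confident]])
        y_l = np.concatenate([y_l, clf.classes_[proba[confident].argmax(axis=1)]])
        X_u = X_u[~confident]
    return clf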

Co-training is an extension of self-training in which multiple classifiers are trained on different (ideally disjoint) sets of features and generate labeled examples for one another.
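A single co-training round might look like the following sketch, where two classifiers trained on assumed disjoint feature views each propose confident labels for the other; the views, base model, and threshold are illustrative.

# One co-training round: each view's classifier labels confident points for the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_training_round(X_l, y_l, X_u, view_a, view_b, threshold=0.95):
    clf_a = LogisticRegression().fit(X_l[:, view_a], y_l)
    clf_b = LogisticRegression().fit(X_l[:, view_b], y_l)
    new_for_b, new_for_a = [], []
    for teacher, view, sink in ((clf_a, view_a, new_for_b),
                                (clf_b, view_b, new_for_a)):
        proba = teacher.predict_proba(X_u[:, view])
        keep = proba.max(axis=1) >= threshold
        sink.append((X_u[keep], teacher.classes_[proba[keep].argmax(axis=1)]))
    # labels proposed by one view are appended to the other view's labeled set
    return new_for_a, new_for_b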

Semi-supervised learning in human cognition

Human responses to formal semi-supervised learning problems have yielded varying conclusions about the degree of influence of the unlabeled data (for a summary see [11]). More natural learning problems may also be viewed as instances of semi-supervised learning. Much of human concept learning involves a small amount of direct instruction (e.g. parental labeling of objects during childhood) combined with large amounts of unlabeled experience (e.g. observation of objects without naming or counting them, or at least without feedback).

Human infants are sensitive to the structure of unlabeled natural categories such as images of dogs and cats or male and female faces.[12] More recent work has shown that infants and children take into account not only the unlabeled examples available, but the sampling process from which labeled examples arise.[13][14]


