Semantic Scholar · Open Access · 1995 · 1356 citations

Inducing Features of Random Fields

S. D. Pietra V. D. Pietra J. Lafferty

Abstract

We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing.
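The abstract describes fitting a log-linear random field by minimizing the KL divergence between the model and the empirical distribution, with an iterative scaling algorithm estimating the feature weights. A minimal sketch of that weight-estimation step, using generalized iterative scaling over an invented toy domain (the outcomes, features, and empirical distribution below are illustrative assumptions, not data from the paper):

```python
import math

# Toy sketch of the inner loop from the abstract: a log-linear field
# p(x) ∝ exp(Σ_i λ_i f_i(x)) whose weights λ_i are fit by generalized
# iterative scaling (GIS). All data here is invented for illustration.

outcomes = [(1, 0), (1, 1), (0, 1), (0, 0)]   # binary feature vectors f(x)
empirical = [0.4, 0.3, 0.2, 0.1]              # empirical distribution over x

# GIS assumes every outcome's features sum to a constant C; the
# standard trick is to pad each outcome with a slack feature.
C = max(sum(f) for f in outcomes)
padded = [f + (C - sum(f),) for f in outcomes]
n_feats = len(padded[0])
weights = [0.0] * n_feats

def model_probs(w):
    """Normalized field probabilities p_w(x) = exp(w . f(x)) / Z."""
    scores = [math.exp(sum(wi * fi for wi, fi in zip(w, f))) for f in padded]
    z = sum(scores)
    return [s / z for s in scores]

def expectation(dist, i):
    """Expected value of feature i under a distribution over outcomes."""
    return sum(q * f[i] for q, f in zip(dist, padded))

# Each pass nudges every weight toward matching the empirical feature
# expectation, which lowers the KL divergence D(empirical || model).
for _ in range(500):
    p = model_probs(weights)
    for i in range(n_feats):
        weights[i] += math.log(expectation(empirical, i) / expectation(p, i)) / C

fitted = model_probs(weights)
```

At convergence the fitted field matches the empirical expectation of each feature, the moment-matching condition that characterizes the KL-optimal weights. The greedy feature-induction step the abstract mentions would sit outside this loop, proposing new feature functions to add to the field.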

Authors (3)

S. D. Pietra

V. D. Pietra

J. Lafferty

Citation Format

Pietra, S.D., Pietra, V.D., Lafferty, J. (1995). Inducing Features of Random Fields. https://doi.org/10.1109/34.588021

Journal Information

Year Published: 1995
Language: en
Total Citations: 1356
Source Database: Semantic Scholar
DOI: 10.1109/34.588021
Access: Open Access ✓