Tag | Ind1 | Ind2 | Content
---|---|---|---
000 | | | 05272nam a22005775i 4500
001 | | | 978-3-540-72927-3
003 | | | DE-He213
005 | | | 20240423125813.0
007 | | | cr nn 008mamaa
008 | | | 100301s2007 gw \| s \|\|\|\| 0\|eng d
020 | | | _a9783540729273 _9978-3-540-72927-3
024 | 7 | | _a10.1007/978-3-540-72927-3 _2doi
050 | | 4 | _aQ334-342
050 | | 4 | _aTA347.A78
072 | | 7 | _aUYQ _2bicssc
072 | | 7 | _aCOM004000 _2bisacsh
072 | | 7 | _aUYQ _2thema
082 | 0 | 4 | _a006.3 _223
245 | 1 | 0 | _aLearning Theory _h[electronic resource] : _b20th Annual Conference on Learning Theory, COLT 2007, San Diego, CA, USA, June 13-15, 2007, Proceedings / _cedited by Nader Bshouty, Claudio Gentile.
250 | | | _a1st ed. 2007.
264 | | 1 | _aBerlin, Heidelberg : _bSpringer Berlin Heidelberg : _bImprint: Springer, _c2007.
300 | | | _aXII, 636 p. _bonline resource.
336 | | | _atext _btxt _2rdacontent
337 | | | _acomputer _bc _2rdamedia
338 | | | _aonline resource _bcr _2rdacarrier
347 | | | _atext file _bPDF _2rda
490 | 1 | | _aLecture Notes in Artificial Intelligence, _x2945-9141 ; _v4539
505 | 0 | | _aInvited Presentations -- Property Testing: A Learning Theory Perspective -- Spectral Algorithms for Learning and Clustering -- Unsupervised, Semisupervised and Active Learning I -- Minimax Bounds for Active Learning -- Stability of k-Means Clustering -- Margin Based Active Learning -- Unsupervised, Semisupervised and Active Learning II -- Learning Large-Alphabet and Analog Circuits with Value Injection Queries -- Teaching Dimension and the Complexity of Active Learning -- Multi-view Regression Via Canonical Correlation Analysis -- Statistical Learning Theory -- Aggregation by Exponential Weighting and Sharp Oracle Inequalities -- Occam’s Hammer -- Resampling-Based Confidence Regions and Multiple Tests for a Correlated Random Vector -- Suboptimality of Penalized Empirical Risk Minimization in Classification -- Transductive Rademacher Complexity and Its Applications -- Inductive Inference -- U-Shaped, Iterative, and Iterative-with-Counter Learning -- Mind Change Optimal Learning of Bayes Net Structure -- Learning Correction Grammars -- Mitotic Classes -- Online and Reinforcement Learning I -- Regret to the Best vs. Regret to the Average -- Strategies for Prediction Under Imperfect Monitoring -- Bounded Parameter Markov Decision Processes with Average Reward Criterion -- Online and Reinforcement Learning II -- On-Line Estimation with the Multivariate Gaussian Distribution -- Generalised Entropy and Asymptotic Complexities of Languages -- Q-Learning with Linear Function Approximation -- Regularized Learning, Kernel Methods, SVM -- How Good Is a Kernel When Used as a Similarity Measure? -- Gaps in Support Vector Optimization -- Learning Languages with Rational Kernels -- Generalized SMO-Style Decomposition Algorithms -- Learning Algorithms and Limitations on Learning -- Learning Nested Halfspaces and Uphill Decision Trees -- An Efficient Re-scaled Perceptron Algorithm for Conic Systems -- A Lower Bound for Agnostically Learning Disjunctions -- Sketching Information Divergences -- Competing with Stationary Prediction Strategies -- Online and Reinforcement Learning III -- Improved Rates for the Stochastic Continuum-Armed Bandit Problem -- Learning Permutations with Exponential Weights -- Online and Reinforcement Learning IV -- Multitask Learning with Expert Advice -- Online Learning with Prior Knowledge -- Dimensionality Reduction -- Nonlinear Estimators and Tail Bounds for Dimension Reduction in ℓ1 Using Cauchy Random Projections -- Sparse Density Estimation with ℓ1 Penalties -- ℓ1 Regularization in Infinite Dimensional Feature Spaces -- Prediction by Categorical Features: Generalization Properties and Application to Feature Ranking -- Other Approaches -- Observational Learning in Random Networks -- The Loss Rank Principle for Model Selection -- Robust Reductions from Ranking to Classification -- Open Problems -- Rademacher Margin Complexity -- Open Problems in Efficient Semi-supervised PAC Learning -- Resource-Bounded Information Gathering for Correlation Clustering -- Are There Local Maxima in the Infinite-Sample Likelihood of Gaussian Mixture Estimation? -- When Is There a Free Matrix Lunch?.
650 | | 0 | _aArtificial intelligence.
650 | | 0 | _aComputer science.
650 | | 0 | _aAlgorithms.
650 | | 0 | _aMachine theory.
650 | 1 | 4 | _aArtificial Intelligence. |
650 | 2 | 4 | _aTheory of Computation. |
650 | 2 | 4 | _aAlgorithms. |
650 | 2 | 4 | _aFormal Languages and Automata Theory. |
700 | 1 | | _aBshouty, Nader. _eeditor. _4edt _4http://id.loc.gov/vocabulary/relators/edt
700 | 1 | | _aGentile, Claudio. _eeditor. _4edt _4http://id.loc.gov/vocabulary/relators/edt
710 | 2 | | _aSpringerLink (Online service)
773 | 0 | | _tSpringer Nature eBook
776 | 0 | 8 | _iPrinted edition: _z9783540729259
776 | 0 | 8 | _iPrinted edition: _z9783540839231
830 | | 0 | _aLecture Notes in Artificial Intelligence, _x2945-9141 ; _v4539
856 | 4 | 0 | _uhttps://doi.org/10.1007/978-3-540-72927-3 |
912 | | | _aZDB-2-SCS
912 | | | _aZDB-2-SXCS
912 | | | _aZDB-2-LNC
942 | | | _cSPRINGER
999 | | | _c181878 _d181878
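
The table above is a standard MARC 21 bibliographic record, so it can be read programmatically. The sketch below is a minimal, hypothetical example using the pymarc library; it assumes the record has been exported as binary MARC to a file named `colt2007.mrc` (the filename and export step are assumptions, not part of the record) and pulls out the ISBN (020), title (245), series (490), DOI link (856), and subject headings (650) shown above.

```python
from pymarc import MARCReader

# A minimal sketch, assuming this record was exported as binary MARC to a
# hypothetical file named "colt2007.mrc".
with open("colt2007.mrc", "rb") as fh:
    for record in MARCReader(fh):
        if record is None:
            # MARCReader yields None for records it could not parse.
            continue
        # 020 $a: ISBN of the electronic edition.
        isbn = record["020"]["a"] if record["020"] else None
        # 245 $a and $c: title proper and statement of responsibility.
        title = record["245"]["a"]
        responsibility = record["245"]["c"]
        # 490 $a/$v: series statement and volume number (LNAI 4539).
        series = record["490"]
        series_label = f"{series['a']} {series['v']}" if series else None
        # 856 $u: DOI link to the full text on SpringerLink.
        url = record["856"]["u"] if record["856"] else None
        # 650 $a: topical subject headings.
        subjects = [field["a"] for field in record.get_fields("650")]
        print(isbn, title, responsibility, series_label, url, subjects)
```

The `_a`, `_b`, `_c` codes in the Content column are MARC subfield codes; in pymarc they are addressed with the same single letters via field subscripting, as in the sketch.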