MARC Record from marc_columbia

Record ID marc_columbia/Columbia-extract-20221130-013.mrc:135018504:3225
Source marc_columbia
Download Link /show-records/marc_columbia/Columbia-extract-20221130-013.mrc:135018504:3225?format=raw

LEADER: 03225cam a22004574a 4500
001 6162875
005 20221122002108.0
008 061012t20072007maua b 001 0 eng
010 $a 2006033527
015 $aGBA713858$2bnb
016 7 $a013677733$2Uk
020 $a9780262083607 (hardcover : alk. paper)
020 $a0262083604 (hbk.)
029 1 $aYDXCP$b2492450
035 $a(OCoLC)ocm73926873
035 $a(DLC) 2006033527
035 $a(NNC)6162875
035 $a6162875
040 $aDLC$cDLC$dYDX$dBAKER$dBTCTA$dUKM$dYDXCP$dOrLoB-B
050 00 $aBC177$b.H377 2007
082 00 $a161$222
100 1 $aHarman, Gilbert.$0http://id.loc.gov/authorities/names/n50026015
245 10 $aReliable reasoning :$binduction and statistical learning theory /$cGilbert Harman and Sanjeev Kulkarni.
260 $aCambridge, Mass. :$bMIT Press,$c[2007], ©2007.
300 $ax, 108 pages :$billustrations ;$c21 cm.
336 $atext$btxt$2rdacontent
337 $aunmediated$bn$2rdamedia
490 1 $aThe Jean Nicod lectures ;$v2007
505 00 $g1.$tThe problem of induction -- $g2.$tInduction and VC dimension -- $g3.$tInduction and "simplicity" -- $g4.$tNeural networks, support vector machines, and transduction.
500 $a"A Bradford book."
504 $aIncludes bibliographical references (p. [99]-104) and index.
520 1 $a"In Reliable Reasoning, Gilbert Harman and Sanjeev Kulkarni argue that philosophy and cognitive science can benefit from statistical learning theory (SLT), the theory that lies behind recent advances in machine learning. The philosophical problem of induction, for example, is in part about the reliability of inductive reasoning, where the reliability of a method is measured by its statistically expected percentage of errors - a central topic of SLT." "After discussing philosophical attempts to evade the problem of induction, Harman and Kulkarni provide an account of the basic framework of SLT and its implications for inductive reasoning. They explain the Vapnik-Chervonenkis (VC) dimension of a set of hypotheses and distinguish two kinds of inductive reasoning, describing fundamental results about the power and limits of those methods in terms of the VC dimension of the hypotheses being considered. The authors discuss various topics in machine learning, including nearest-neighbor methods, neural networks, and support vector machines. Finally, they describe transductive reasoning and offer possible new models of human reasoning suggested by developments in SLT."--BOOK JACKET.
650 0 $aReasoning.$0http://id.loc.gov/authorities/subjects/sh85111790
650 0 $aReliability.$0http://id.loc.gov/authorities/subjects/sh85112510
650 0 $aInduction (Logic)$0http://id.loc.gov/authorities/subjects/sh85065805
650 0 $aComputational learning theory.$0http://id.loc.gov/authorities/subjects/sh94004662
700 1 $aKulkarni, Sanjeev.$0http://id.loc.gov/authorities/names/n2006078679
830 0 $aJean Nicod lectures ;$v2007.$0http://id.loc.gov/authorities/names/n94014128
856 41 $3Table of contents only$uhttp://www.loc.gov/catdir/toc/ecip073/2006033527.html
852 00 $bglx$hBC177$i.H377 2007
852 00 $bbar$hBC177$i.H377 2007