
MARC Record from marc_columbia

Record ID marc_columbia/Columbia-extract-20221130-034.mrc:66622913:3976
Source marc_columbia
Download Link /show-records/marc_columbia/Columbia-extract-20221130-034.mrc:66622913:3976?format=raw
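
For anyone who wants to work with this record programmatically, the short sketch below fetches the raw MARC21 data from the download link above and prints a few fields. It is a minimal sketch, not part of the record: it assumes the link is relative to https://openlibrary.org, that the requests and pymarc packages are installed, and that the ":66622913:3976" suffix selects this record's byte range within the larger Columbia extract file (the leader's 03976 record length matches the second number).

    # Minimal sketch (assumptions noted above): fetch the raw MARC record
    # from the download link and read a few fields with pymarc.
    import io

    import requests
    from pymarc import MARCReader

    # Assumed base URL prepended to the relative download link shown above.
    RAW_URL = (
        "https://openlibrary.org/show-records/marc_columbia/"
        "Columbia-extract-20221130-034.mrc:66622913:3976?format=raw"
    )

    resp = requests.get(RAW_URL, timeout=30)
    resp.raise_for_status()

    # The response body is a single binary MARC21 record.
    reader = MARCReader(io.BytesIO(resp.content))
    for record in reader:
        if record is None:
            continue  # pymarc yields None for records it cannot parse
        # 245: title statement ($a title, $b remainder, $c responsibility)
        for field in record.get_fields("245"):
            print("Title:", " ".join(field.get_subfields("a", "b", "c")))
        # 020: ISBNs
        for field in record.get_fields("020"):
            print("ISBN:", " ".join(field.get_subfields("a")))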

LEADER: 03976cam a2200637 i 4500
001 16778437
005 20220930161708.0
008 200626s2020 nyu b 001 0 eng
010 $a 2020029036
035 $a(OCoLC)on1137850003
040 $aDLC$beng$erda$cDLC$dOCLCO$dOCLCF$dUAP$dYDX$dCPP$dTCH$dVP@$dAJB$dCTU$dGYG$dWIQ$dOCLCO$dSAD
019 $a1197966344$a1224596061$a1228811531
020 $a9780393635829$qhardcover
020 $a0393635821$qhardcover
020 $z9780393635836$qelectronic publication
035 $a(OCoLC)1137850003$z(OCoLC)1197966344$z(OCoLC)1224596061$z(OCoLC)1228811531
042 $apcc
050 00 $aQ334.7$b.C47 2020
082 04 $a006.3101/9$223
082 00 $a174/.90063$223
049 $aZCUA
100 1 $aChristian, Brian,$d1984-$eauthor.
245 14 $aThe alignment problem :$bmachine learning and human values /$cBrian Christian.
246 30 $aMachine learning and human values
250 $aFirst edition.
264 1 $aNew York, NY :$bW.W. Norton & Company,$c[2020]
300 $axii, 476 pages ;$c25 cm
336 $atext$btxt$2rdacontent
337 $aunmediated$bn$2rdamedia
338 $avolume$bnc$2rdacarrier
386 $mGender group:$ngdr$aMen$2lcdgt
386 $mNationality/regional group:$nnat$aCalifornians$2lcdgt
386 $mOccupational/field of activity group:$nocc$aScholars$2lcdgt
504 $aIncludes bibliographical references (pages [401]-451) and index.
505 00 $tProphecy.$tRepresentation --$tFairness --$tTransparency --$tAgency.$tReinforcement --$tShaping --$tCuriosity --$tNormativity.$tImitation --$tInference --$tUncertainty.
520 $a"A jaw-dropping exploration of everything that goes wrong when we build AI systems-and the movement to fix them. Today's "machine-learning" systems, trained by data, are so effective that we've invited them to see and hear for us-and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole-and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's "first-responders," and learn their ambitious plan to solve it before our hands are completely off the wheel"--Provided by publisher.
650 0 $aArtificial intelligence$xMoral and ethical aspects.
650 0 $aArtificial intelligence$xSocial aspects.
650 0 $aMachine learning$xSafety measures.
650 0 $aSoftware failures.
650 0 $aSocial values.
650 2 $aSocial Values$0(DNLM)D012945
650 6 $aIntelligence artificielle$0(CaQQLa)201-0008626$xAspect moral.$0(CaQQLa)201-0374162
650 6 $aIntelligence artificielle$0(CaQQLa)201-0008626$xAspect social.$0(CaQQLa)201-0374080
650 6 $aApprentissage automatique$0(CaQQLa)201-0131435$xSécurité$0(CaQQLa)201-0373949$xMesures.$0(CaQQLa)201-0373949
650 6 $aBogues (Informatique)$0(CaQQLa)000327903
650 6 $aValeurs sociales.$0(CaQQLa)201-0018522
650 7 $aSCIENCE / Philosophy & Social Aspects.$2bisacsh
650 7 $aCOMPUTERS / Artificial Intelligence / General.$2bisacsh
650 7 $aCOMPUTERS / Social Aspects.$2bisacsh
650 7 $aArtificial intelligence$xMoral and ethical aspects.$2fast$0(OCoLC)fst00817273
650 7 $aArtificial intelligence$xSocial aspects.$2fast$0(OCoLC)fst00817279
650 7 $aSocial values.$2fast$0(OCoLC)fst01123424
650 7 $aSoftware failures.$2fast$0(OCoLC)fst01124200
655 4 $aNonfiction.
852 0 $bsci$hQ334.7$i.C47 2020