




Adversarial Examples
姜育剛、馬興軍、吳祖煊
Recap: Week 2
1. Deep Neural Networks
2. Explainable Machine Learning
Principles and Methodologies · Learning Dynamics · The Learned Model · Inference · Generalization · Robustness to Common Corruptions
This Week
1. Adversarial Examples
2. Adversarial Attacks
3. Adversarial Vulnerability Understanding

Machine Learning Is Everywhere
Medicine and Biology · Security and Defense · Autonomous Vehicles · IoT · Financial Systems · Media and Entertainment · Critical Infrastructure

Beat Humans on Many Tasks: Speech Recognition
Baidu DeepSpeech 2: end-to-end deep learning for English and Mandarin speech recognition
- Transition from English to Mandarin made simpler by end-to-end deep learning
- No feature engineering or Mandarin-specific components required
- More accurate than humans: 3.7% error rate vs. 4% for human testers
http://svail.github.io/mandarin/ /pdf/1512.02595.pdf
Outperform Humans on Many Tasks: Strategic Games
AlphaGo: first computer program to beat a human Go professional
- Training: DNNs trained for 3 weeks, 340 million training steps on 50 GPUs
- Play: asynchronous multi-threaded search; simulations on CPUs, policy and value DNNs in parallel on GPUs
- Single machine: 40 search threads, 48 CPUs, and 8 GPUs
- Distributed version: 40 search threads, 1,202 CPUs, and 176 GPUs
- Outcome: beat both the European and world Go champions in best-of-five matches
/nature/journal/v529/n7587/full/nature16961.html
Outperform Humans on Many Tasks
Large-scale Image Recognition · DALL·E 2 · AlphaFold v2
Large Language Model (LLM): ChatGPT
Large Multimodal Model: GPT-4
GPT-4 is a multimodal conversational model released by OpenAI in March 2023. It accepts image and text inputs and produces text outputs, and surpasses ChatGPT in image-text understanding, arithmetic, code generation, and performance on many professional exams.
- Parameters: 1 trillion
- Base model: GPT-4
- Training data: builds on GPT-3.5 and ChatGPT, with additional multimodal data, more human-annotated data, and so on

Image Recognition
GoogLeNet: /people/karpathy/ilsvrc/
Labradoodle or fried chicken · Sheepdog or mop · Barn owl or apple · Raw chicken or Donald Trump · Parrot or guacamole
Vulnerabilities of DNNs
Dog, 82% confidence → ostrich, 98% confidence.

Adversarial Examples
Small perturbations can fool DNNs.
Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks. ICLR 2014.
Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. ICLR 2015.

Adversarial Attack
DNN training: minimize the classification loss over the model parameters on the training data.
Adversarial attack: maximize the classification loss over a small, bounded input perturbation so that the model misclassifies (a test-time attack).
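The training and attack objectives above can be written explicitly. The slide's own equations did not survive extraction, so the following is a standard formalization with assumed notation (model f_θ, loss L, data distribution D, perturbation budget ε):

```latex
% DNN training: fit the parameters \theta to minimize the expected loss
\min_{\theta} \; \mathbb{E}_{(x,y)\sim D}\, \mathcal{L}\big(f_{\theta}(x),\, y\big)

% Adversarial attack: fix \theta and find a bounded perturbation \delta
% that maximizes the same loss at test time
\max_{\|\delta\|_{p}\,\le\,\epsilon} \; \mathcal{L}\big(f_{\theta}(x+\delta),\, y\big)
```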
Characteristics of Adversarial Examples
Small · Imperceptible · Hidden · Transferable · Universal

Example Attacks
Perturbations are small, imperceptible to human eyes. Adversarial examples are easy to generate and transfer across models.
Benign nevus, 73% confidence + adversarial noise → malignant nevus, 89% confidence.
Ma et al. "Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems." Pattern Recognition, 2021.
Example Attacks
Clean video frames: correct class. Adversarial video: wrong class.
Jiang et al. "Black-box Adversarial Attacks on Video Recognition Models." ACM MM, 2019.
Example Attacks
Physical-world attacks against traffic signs: stop signs recognized as 45 km/h speed-limit signs. (Science Museum, London)
Eykholt, Kevin, et al. "Robust Physical-World Attacks on Deep Learning Visual Classification." CVPR, 2018.
Example Attacks
A 3D-printed turtle recognized as a rifle from any angle.
Athalye, Anish, et al. "Synthesizing Robust Adversarial Examples." ICML, 2018.
Example Attacks
An adversarial patch makes people invisible to object detection (YOLO).
Brown, Tom B., et al. "Adversarial Patch." arXiv preprint arXiv:1712.09665 (2017).
Example Attacks
Adversarial attack or new fashion?

Example Attacks
Adversarial t-shirt: one step closer to real-world attacks.
Xu, Kaidi, et al. "Adversarial T-shirt! Evading Person Detectors in a Physical World." ECCV, 2020.
Example Attacks
Camouflage adversarial patterns into realistic styles: tree bark → street sign; person + Pikachu t-shirt → dog.
Duan et al. "Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles." CVPR, 2020.
Example Attacks
Night-scene adversarial attack with a laser pointer.
Duan, Ranjie, et al. "Adversarial Laser Beam: Effective Physical-World Attack to DNNs in a Blink." CVPR, 2021.
Example Attacks
Attacking both camera and lidar using adversarial objects.
Cao, Yulong, et al. "Invisible for Both Camera and Lidar: Security of Multi-Sensor Fusion Based Perception in Autonomous Driving Under Physical-World Attacks." S&P, 2021.
Example Attacks
Attacking speech/command recognition models.
Carlini, Nicholas, and David Wagner. "Audio Adversarial Examples: Targeted Attacks on Speech-to-Text." S&P Workshops, 2018. /code/audio_adversarial_examples/
Adversarial Music: real-world audio adversary against a wake-word detection system. /watch?v=r4XXGDVs0f8
Example Attacks
Q&A adversaries for NLP models.
Ribeiro et al. "Semantically Equivalent Adversarial Rules for Debugging NLP Models." ACL, 2018.

Threats to AI Applications
- Transportation industry: trick autonomous vehicles into misinterpreting stop signs or speed limits
- Cybersecurity industry: bypass AI-based malware detection tools
- Medical industry: forge medical conditions
- Smart home industry: fool voice commands
- Financial industry: trick anomaly and fraud detection engines
Definition of Adversarial Examples
There is no standard, community-accepted definition. One common one:
"Adversarial examples are inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake."
Goodfellow, Ian. "Defense Against the Dark Arts: An Overview of Adversarial Example Security Research and Future Research Directions." arXiv:1806.04169 (2018).

Taxonomy of Attacks
- Attack timing: poisoning attack / evasion attack
- Attacker's goal: targeted attack / untargeted attack
- Attacker's knowledge: white-box / black-box / gray-box
- Universality: individual / universal
Attack Timing
Evasion (causation) attack: test-time attack; changes the input example.
Poisoning attack: training-time attack; changes the classification boundary.

Attacker's Goal
Targeted attack: cause an input to be recognized as coming from a specific class (e.g., ostrich).
Untargeted attack: cause an input to be recognized as any incorrect class (e.g., any class except dog).

Adversary's Knowledge
White-box attack: the attacker has full access to the model, including model type, architecture, and the values of parameters and trained weights.
Black-box attack: the attacker has no knowledge about the model under attack and must rely on the transferability of adversarial examples.
Gray-box attack (semi-black-box attack): the attacker may know some hyperparameters, such as the model architecture.

Universality
Individual attack: generate a different perturbation for each clean input.
Universal attack: create a single universal perturbation for the whole dataset, making adversarial examples easier to deploy.
Moosavi-Dezfooli, Seyed-Mohsen, et al. "Universal Adversarial Perturbations." CVPR 2017.
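The targeted/untargeted distinction above comes down to which class's loss the attacker pushes on, and in which direction. A minimal NumPy sketch on a toy linear softmax classifier (the weights, sizes, and function names here are illustrative stand-ins, not anything from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in classifier: a linear softmax model with random weights.
W = rng.normal(size=(3, 8))      # 3 classes, 8 input features
x = rng.normal(size=8)           # clean input
y_true, y_target = 0, 2          # ground-truth class / attacker's chosen class
eps = 0.1                        # per-feature perturbation budget

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_loss_x(x, y):
    # Gradient w.r.t. the input of the cross-entropy loss -log p_y.
    p = softmax(W @ x)
    return W.T @ (p - np.eye(3)[y])

# Untargeted attack: INCREASE the loss of the true class.
x_untargeted = x + eps * np.sign(grad_loss_x(x, y_true))

# Targeted attack: DECREASE the loss of the attacker's target class.
x_targeted = x - eps * np.sign(grad_loss_x(x, y_target))

print(softmax(W @ x)[y_true], softmax(W @ x_untargeted)[y_true])  # true-class confidence drops
```

The only difference between the two modes is the sign of the step and which label is plugged into the loss; everything else is shared.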
A Brief History of Adversarial Machine Learning
- 2013: Biggio et al. and Szegedy et al. discover adversarial examples
- 2014: Goodfellow et al. propose the fast single-step FGSM attack and adversarial training
- 2015: simple detection methods (PCA) and adversarial training methods
- 2016: the min-max optimization framework for adversarial training is proposed
- 2017: many adversarial example detection methods and attacks (BIM, C&W); 10 detection methods broken
- 2018: physical-world attacks; upgraded detection; the PGD attack and PGD adversarial training; 9 defense methods broken
- 2019: TRADES and many other adversarial training methods; the first Science paper
- 2020: the AutoAttack ensemble; fast adversarial training
- 2021: adversarial training with larger models and more data; extensions to other domains
- 2022: still unsolved: attacks keep multiplying and defense keeps getting harder
Biggio et al. "Evasion Attacks Against Machine Learning at Test Time"; Szegedy, Christian, et al. "Intriguing Properties of Neural Networks."
White-box Attacks
Single-step attack: Fast Gradient Sign Method (FGSM) (Goodfellow et al. 2014).
Multi-step attacks: iterative methods (BIM, PGD) (Kurakin et al. 2016; Madry et al. 2018). Projected Gradient Descent (PGD) is the strongest first-order attack.
Optimization-based attack: the C&W attack (Carlini & Wagner 2017), once the strongest attack.
Ensemble attack: AutoAttack (Croce et al. 2020), currently the strongest attack.
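The FGSM and PGD attacks above can be sketched end to end on a toy differentiable model. This is a minimal NumPy illustration, assuming a linear softmax stand-in classifier with random weights (the real attacks run on trained deep networks, where the gradient comes from a backward pass):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in model: linear softmax with random weights.
W = rng.normal(size=(10, 20))          # 10 classes, 20 features
x = rng.normal(size=20)                # clean input
y = 3                                  # true label
eps, alpha, steps = 0.3, 0.05, 10      # L-inf budget, step size, PGD iterations

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss(x, y):
    # Cross-entropy loss of the true class.
    return -np.log(softmax(W @ x)[y])

def grad(x, y):
    # Gradient of the loss w.r.t. the input.
    return W.T @ (softmax(W @ x) - np.eye(10)[y])

# FGSM: a single signed-gradient step of size eps.
x_fgsm = x + eps * np.sign(grad(x, y))

# PGD: repeated small FGSM steps, each followed by projection
# back onto the L-inf ball of radius eps around the clean input.
x_pgd = x.copy()
for _ in range(steps):
    x_pgd = x_pgd + alpha * np.sign(grad(x_pgd, y))
    x_pgd = np.clip(x_pgd, x - eps, x + eps)   # projection step

print(loss(x, y), loss(x_fgsm, y), loss(x_pgd, y))  # clean vs. adversarial loss
```

The sign/step/project structure is identical on a real network; on image data one would additionally clip the iterate into the valid pixel range.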
Why Do Adversarial Examples Exist?

Non-linear Explanation
Viewing a DNN as a sequence of transformed spaces (1st layer, 10th layer, 20th layer, ...), the high-dimensional non-linear explanation holds that non-linear transformations lead to the existence of small "pockets" in the deep space:
- regions of low probability (not naturally occurring)
- densely scattered regions
- continuous regions
- regions close to the normal data subspace
Szegedy C, Zaremba W, Sutskever I, et al. Intriguing properties of neural networks. ICLR 2014; Ma et al. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality. ICLR 2018.
Linear Explanation
Viewing a DNN as a stack of linear operations: for a linear unit, w^T(x + η) = w^T x + w^T η, and the worst-case perturbation η = ε·sign(w) changes the output by ε·Σ|w_i|, which grows linearly with the input dimensionality even though each coordinate of η stays below ε.
Goodfellow I J, Shlens J, Szegedy C. Explaining and harnessing adversarial examples. ICLR 2015.
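The linear growth argument is easy to check numerically. A quick sketch with toy random weights (nothing here comes from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01   # per-coordinate perturbation, imperceptibly small

for m in (10, 1_000, 100_000):       # input dimensionality
    w = rng.normal(size=m)           # weights of one linear unit (illustrative)
    eta = eps * np.sign(w)           # worst-case L-inf perturbation
    # Output change w . eta = eps * sum(|w_i|): grows linearly with m
    # while the size of each coordinate of eta stays fixed at eps.
    print(m, float(w @ eta))
```

With standard-normal weights the output shift is roughly 0.8·ε·m, so at image-like dimensionalities a per-pixel change of 0.01 can swing a logit by hundreds.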
Vulnerability Increases with Intrinsic Dimensionality
Amsaleg et al. The Vulnerability of Learning to Adversarial Perturbation Increases with Intrinsic Dimensionality. WIFS, 2017.
(Figure: Y-axis, the minimum adversarial noise required to subvert a KNN classifier; X-axis, LID values; red curve, theoretical bound. Results on CIFAR-10 and ImageNet.)
Insufficient Training Data

Training set size | Accuracy on own test set | Accuracy on test set with 4×10^4 points | Accuracy on boundary dataset
80                | 100                      | 92.7                                    | 60.8
800               | 99.0                     | 97.4                                    | 74.9
8000              | 99.5                     | 99.6                                    | 94.1
80000             | 99.9                     | 99.9                                    | 98.9

Training set size | Accuracy on own test set | Accuracy on test set with 4×10^4 points | Accuracy on boundary dataset
80                | 100                      | 96.3                                    | 70.1
800               | 99.8                     | 99.0                                    | 85.7
8000              | 99.9                     | 99.8                                    | 97.3
80000             | 99.98                    | 99.98                                   | 99.5
Unnecessary Features
Wang et al. "A Theoretical Framework for Robustness of (Deep) Classifiers Against Adversarial Examples." arXiv:1612.00334 (2016).
Adversarial samples can be far away fr