Deep Learning
Tip
Learn & practice AWS Hacking:
HackTricks Training AWS Red Team Expert (ARTE)
Learn & practice GCP Hacking: HackTricks Training GCP Red Team Expert (GRTE)
Learn & practice Azure Hacking:
HackTricks Training Azure Red Team Expert (AzRTE)
Support HackTricks
- Check the subscription plans!
- **Join the** 💬 Discord group or the telegram group or **follow** us on **Twitter** 🐦 @hacktricks_live.
- Share hacking tricks by submitting PRs to the HackTricks and HackTricks Cloud github repos.
Deep Learning
Deep learning is a subset of machine learning that uses neural networks with multiple layers (deep neural networks) to model complex patterns in data. It has achieved remarkable success in various fields, including computer vision, natural language processing, and speech recognition.
Neural Networks
Neural networks are the building blocks of deep learning. They consist of interconnected nodes (neurons) organized in layers. Each neuron receives inputs, applies weights, and passes the result through an activation function. Layers can be classified as:
- Input Layer: the first layer, which receives the input data.
- Hidden Layers: intermediate layers that perform transformations on the input data. The number of hidden layers and the number of neurons in each layer can vary, leading to different architectures.
- Output Layer: the final layer, which produces the output of the network, such as class probabilities in classification tasks.
Activation Functions
When a layer of neurons processes input data, each neuron applies a weight and a bias to the input (z = w * x + b), where w is the weight, x is the input, and b is the bias. The output of the neuron is then passed through an activation function to introduce non-linearity into the model. The activation function basically indicates whether, and how strongly, the next neuron "should be activated". This allows the network to learn complex patterns and relationships in the data, enabling it to approximate any continuous function.
Therefore, activation functions introduce non-linearity into the neural network, allowing it to learn complex relationships in the data. Common activation functions include:
- Sigmoid: maps input values to a range between 0 and 1, often used in binary classification.
- ReLU (Rectified Linear Unit): outputs the input directly if it is positive; otherwise, it outputs zero. It is widely used due to its simplicity and effectiveness in training deep networks.
- Tanh: maps input values to a range between -1 and 1, often used in hidden layers.
- Softmax: converts raw scores into probabilities, often used in the output layer for multi-class classification.
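As a rough illustration, these activation functions can be sketched in a few lines of plain Python (the function names and test values here are purely for demonstration; real networks use vectorized implementations such as torch.sigmoid or F.relu):

```python
import math

def sigmoid(z):
    # squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    # passes positive inputs through unchanged, zeroes out negatives
    return max(0.0, z)

def tanh(z):
    # squashes any real number into the range (-1, 1)
    return math.tanh(z)

def softmax(scores):
    # turns a list of raw scores into probabilities that sum to 1
    exps = [math.exp(s - max(scores)) for s in scores]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.0))               # 0.5
print(relu(-3.0), relu(2.5))      # 0.0 2.5
print(sum(softmax([2.0, 1.0, 0.1])))  # ≈ 1.0: softmax outputs form a probability distribution
```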
Backpropagation
Backpropagation is the algorithm used to train neural networks by adjusting the weights of the connections between neurons. It works by computing the gradient of the loss function with respect to each weight and updating the weights in the opposite direction of the gradient to minimize the loss. The steps involved in backpropagation are:
- Forward Pass: compute the output of the network by passing the input through the layers and applying activation functions.
- Loss Calculation: compute the loss (error) between the predicted output and the true target using a loss function (e.g., mean squared error for regression, cross-entropy for classification).
- Backward Pass: compute the gradients of the loss with respect to each weight using the chain rule of calculus.
- Weight Update: update the weights using an optimization algorithm (e.g., stochastic gradient descent, Adam) to minimize the loss.
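The four steps above can be sketched on a single linear neuron in plain Python. This is a toy example (made-up data, fitting y = 2x with plain SGD and a squared-error loss), not production training code:

```python
# Toy backpropagation on a single neuron y = w*x + b, fitting y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # hypothetical (x, target) pairs
w, b, lr = 0.0, 0.0, 0.05

for epoch in range(200):
    for x, target in data:
        y = w * x + b               # 1) forward pass
        loss = (y - target) ** 2    # 2) loss calculation (squared error, one sample)
        dL_dy = 2 * (y - target)    # 3) backward pass via the chain rule
        dL_dw = dL_dy * x
        dL_db = dL_dy
        w -= lr * dL_dw             # 4) weight update (plain SGD)
        b -= lr * dL_db

print(round(w, 2), round(b, 2))  # w converges toward 2.0, b toward 0.0
```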
Convolutional Neural Networks (CNNs)
Convolutional Neural Networks (CNNs) are a specialized type of neural network designed to process grid-like data, such as images. They are particularly effective in computer vision tasks due to their ability to automatically learn spatial hierarchies of features.
The main components of CNNs include:
- Convolutional Layers: apply convolution operations to the input data using learnable filters (kernels) to extract local features. Each filter slides over the input and computes a dot product, producing a feature map.
- Pooling Layers: downsample the feature maps to reduce their spatial dimensions while retaining important features. Common pooling operations include max pooling and average pooling.
- Fully Connected Layers: connect every neuron in one layer to every neuron in the next layer, similar to traditional neural networks. These layers are typically used at the end of the network for classification tasks.
Inside a CNN's **convolutional layers**, we can distinguish between:
- Initial Convolutional Layer: the first convolutional layer, which processes the raw input data (e.g., an image) and is useful for identifying basic features like edges and textures.
- Intermediate Convolutional Layers: subsequent convolutional layers that build on the features learned by the initial layer, allowing the network to learn more complex patterns and representations.
- Final Convolutional Layer: the last convolutional layers before the fully connected layers, which capture high-level features and prepare the data for classification.
Tip
CNNs are particularly effective for image classification, object detection, and image segmentation tasks due to their ability to learn spatial hierarchies of features in grid-like data and to reduce the number of parameters through weight sharing. Moreover, they work best with data supporting the feature-locality principle, where neighbouring data (pixels) are more likely to be related than distant pixels, which might not be the case for other types of data like text. Furthermore, note that while CNNs can identify even complex features, they cannot apply any spatial context, meaning that the same feature found in different parts of the image will be treated the same.
Example defining a CNN
Here is a description of how to define a Convolutional Neural Network (CNN) in PyTorch that starts with a batch of RGB images of size 48x48 as the dataset, uses convolutional layers and maxpool to extract features, and is followed by fully connected layers for classification.
This is how you can define one convolutional layer in PyTorch: self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1).
- in_channels: number of input channels. In the case of RGB images, this is 3 (one for each colour channel). If you are working with grayscale images, this would be 1.
- out_channels: number of output channels (filters) that the convolutional layer will learn. This is a hyperparameter you can adjust based on your model architecture.
- kernel_size: size of the convolutional filter. A common choice is 3x3, which means the filter will cover a 3x3 area of the input image. This is like a 3×3×3 colour stamp that is used to generate the out_channels from the in_channels:
  - Place that 3×3×3 stamp on the top-left corner of the image cube.
  - Multiply each weight by the pixel under it, add them all up, and add the bias → you get one number.
  - Write that number into a blank map at position (0, 0).
  - Slide the stamp one pixel to the right (stride = 1) and repeat until you fill the whole 48×48 grid.
- padding: number of pixels added to each side of the input. Padding helps preserve the spatial dimensions of the input, allowing more control over the output size. For example, with a 3x3 kernel and a 48x48 pixel input, a padding of 1 keeps the output size the same (48x48) after the convolution operation. This is because the padding adds a 1-pixel border around the input image, allowing the kernel to slide over the edges.
Then, the number of trainable parameters of this layer is:
- (3x3x3 (kernel size) + 1 (bias)) x 32 (out_channels) = 896 trainable parameters.
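You can verify this count directly in PyTorch by summing the elements of the layer's parameter tensors (a quick sanity check, assuming torch is installed):

```python
import torch.nn as nn

conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1)

# weight tensor: (32, 3, 3, 3) -> 864 values; bias tensor: (32,) -> 32 values
n_params = sum(p.numel() for p in conv1.parameters())
print(n_params)  # 896
```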
The function of each convolutional layer is to learn linear transformations of the input, represented by the equation:
Y = f(W * X + b)
where W is the weight matrix (the learned filters, 3x3x3 = 27 parameters each) and b is a bias vector with one entry per output channel.
The output of self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1) will be a tensor of shape (batch_size, 32, 48, 48), because 32 is the number of newly generated channels of size 48x48 pixels.
Then, we could connect this convolutional layer to another one: self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1).
This adds (32x3x3 (kernel size) + 1 (bias)) x 64 (output channels) = 18,496 trainable parameters, and the output shape becomes (batch_size, 64, 48, 48).
The number of parameters grows quickly with each additional convolutional layer, especially as the number of output channels increases.
One option to control the amount of data used is to apply max pooling after each convolutional layer. Max pooling reduces the spatial dimensions of the feature maps, which helps reduce the number of parameters and computational complexity while retaining important features.
It can be declared as: self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2). This basically means using a grid of 2x2 pixels and taking the maximum value from each grid, shrinking each feature-map dimension by half. Moreover, stride=2 means the pooling operation moves 2 pixels at a time, in this case preventing any overlap between pooling regions.
With this pooling layer, the output shape after applying self.pool1 to the output of self.conv2 would be (batch_size, 64, 24, 24), reducing the size to 1/4 of the previous layer.
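These shapes can be checked by pushing a dummy batch through the layers (a small sketch assuming torch is available; the batch size of 1 is arbitrary):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 48, 48)  # dummy batch: one RGB 48x48 image

conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1)
conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1)
pool1 = nn.MaxPool2d(kernel_size=2, stride=2)

x = conv1(x)  # padding=1 preserves the 48x48 spatial size
x = conv2(x)  # still 48x48, now with 64 channels
x = pool1(x)  # 2x2 max pooling halves each spatial dimension

print(x.shape)  # torch.Size([1, 64, 24, 24])
```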
Tip
It's important to pool after the convolutional layers to reduce the spatial dimensions of the feature maps, which helps control the number of parameters and computational complexity while letting the earlier parameters learn important features. You can see the convolutions before a pooling layer as a way to extract features from the input data (like lines and edges); this information will still be present in the pooled output, but the next convolutional layer will not be able to see the original input data, only the pooled output, which is a reduced version of the previous layer carrying that information.
In the usual order: Conv → ReLU → Pool, each 2×2 pooling window now contends with feature activations ("edge present / not"), not raw pixel intensities. Keeping the strongest activation really does keep the most salient evidence.
After that, we add as many convolutional and pooling layers as needed, and then flatten the output so it can be fed into fully connected layers. This is done by reshaping the tensor into a 1D vector for each sample in the batch:
x = x.view(-1, 64*24*24)
With this 1D vector holding all the features generated by the previous convolutional and pooling layers, we can define a fully connected layer like:
self.fc1 = nn.Linear(64 * 24 * 24, 512)
which takes the flattened output of the previous layer and maps it to 512 hidden units.
Note that this layer adds (64 * 24 * 24 + 1 (bias)) * 512 = 18,874,880 trainable parameters, a huge increase compared to the convolutional layers. This is because fully connected layers connect every neuron in one layer to every neuron in the next layer, resulting in a large number of parameters.
Finally, we can add the output layer that produces the final class logits:
self.fc2 = nn.Linear(512, num_classes)
This adds (512 + 1 (bias)) * num_classes trainable parameters, where num_classes is the number of classes in the classification task (e.g., 43 for the GTSRB dataset).
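For example, with the 43 classes of the GTSRB dataset, the count can be confirmed the same way (a quick check assuming torch is installed):

```python
import torch.nn as nn

num_classes = 43  # e.g., GTSRB traffic-sign classes
fc2 = nn.Linear(512, num_classes)

# (512 weights + 1 bias) per output class
n_params = sum(p.numel() for p in fc2.parameters())
print(n_params)  # (512 + 1) * 43 = 22059
```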
Another common practice is to add a dropout layer before the fully connected layers to prevent overfitting. This can be done with:
self.dropout = nn.Dropout(0.5)
This layer randomly sets a fraction of the input units to zero during training, which helps prevent overfitting by reducing the reliance on specific neurons.
CNN code example
import torch
import torch.nn as nn
import torch.nn.functional as F
class MY_NET(nn.Module):
    def __init__(self, num_classes=32):
        super(MY_NET, self).__init__()
        # Initial conv layer: 3 input channels (RGB), 32 output channels, 3x3 kernel, padding 1
        # This layer will learn basic features like edges and textures
        self.conv1 = nn.Conv2d(
            in_channels=3, out_channels=32, kernel_size=3, padding=1
        )
        # Output: (Batch Size, 32, 48, 48)

        # Conv Layer 2: 32 input channels, 64 output channels, 3x3 kernel, padding 1
        # This layer will learn more complex features based on the output of conv1
        self.conv2 = nn.Conv2d(
            in_channels=32, out_channels=64, kernel_size=3, padding=1
        )
        # Output: (Batch Size, 64, 48, 48)

        # Max Pooling 1: Kernel 2x2, Stride 2. Reduces spatial dimensions by half (1/4th of the previous layer).
        self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
        # Output: (Batch Size, 64, 24, 24)

        # Conv Layer 3: 64 input channels, 128 output channels, 3x3 kernel, padding 1
        # This layer will learn even more complex features based on the output of conv2
        # Note that the number of output channels can be adjusted based on the complexity of the task
        self.conv3 = nn.Conv2d(
            in_channels=64, out_channels=128, kernel_size=3, padding=1
        )
        # Output: (Batch Size, 128, 24, 24)

        # Max Pooling 2: Kernel 2x2, Stride 2. Reduces spatial dimensions by half again.
        # Reducing the dimensions further helps to control the number of parameters and computational complexity.
        self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
        # Output: (Batch Size, 128, 12, 12)

        # From the second pooling layer, we will flatten the output to feed it into fully connected layers.
        # The feature size is calculated as follows:
        # Feature size = Number of output channels * Height * Width
        self._feature_size = 128 * 12 * 12

        # Fully Connected Layer 1 (Hidden): Maps flattened features to hidden units.
        # This layer will learn to combine the features extracted by the convolutional layers.
        self.fc1 = nn.Linear(self._feature_size, 512)

        # Fully Connected Layer 2 (Output): Maps hidden units to class logits.
        # Output size MUST match num_classes
        self.fc2 = nn.Linear(512, num_classes)

        # Dropout layer configuration with a dropout rate of 0.5.
        # This layer is used to prevent overfitting by randomly setting a fraction of the input units to zero during training.
        self.dropout = nn.Dropout(0.5)

    def forward(self, x):
        """
        The forward method defines the forward pass of the network.
        It takes an input tensor `x` and applies the convolutional layers, pooling layers, and fully connected layers in sequence.
        The input tensor `x` is expected to have the shape (Batch Size, Channels, Height, Width), where:
        - Batch Size: Number of samples in the batch
        - Channels: Number of input channels (e.g., 3 for RGB images)
        - Height: Height of the input image (e.g., 48 for 48x48 images)
        - Width: Width of the input image (e.g., 48 for 48x48 images)
        The output of the forward method is the logits for each class, which can be used for classification tasks.
        Args:
            x (torch.Tensor): Input tensor of shape (Batch Size, Channels, Height, Width)
        Returns:
            torch.Tensor: Output tensor of shape (Batch Size, num_classes) containing the class logits.
        """
        # Conv1 -> ReLU -> Conv2 -> ReLU -> Pool1 -> Conv3 -> ReLU -> Pool2
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.pool1(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.pool2(x)
        # At this point, x has shape (Batch Size, 128, 12, 12)

        # Flatten the output to feed it into fully connected layers
        x = torch.flatten(x, 1)

        # Apply dropout to prevent overfitting
        x = self.dropout(x)

        # First FC layer with ReLU activation
        x = F.relu(self.fc1(x))

        # Apply Dropout again
        x = self.dropout(x)

        # Final FC layer to get logits
        x = self.fc2(x)
        # Output shape will be (Batch Size, num_classes)
        # Note that the output is not passed through a softmax activation here, as it is typically done in the loss function (e.g., CrossEntropyLoss)
        return x
CNN code training example
The following code makes up some training data and trains the MY_NET model defined above. Some interesting values to note:
- EPOCHS is the number of times the model will see the entire dataset during training. If EPOCHS is too small, the model might not learn enough; if too big, it might overfit.
- LEARNING_RATE is the step size of the optimizer. A small learning rate might lead to slow convergence, while a large one might overshoot the optimal solution and prevent convergence.
- WEIGHT_DECAY is a regularization term that helps prevent overfitting by penalizing large weights.
Regarding the training loop, this is some interesting information to know:
- criterion = nn.CrossEntropyLoss() is the loss function used for multi-class classification tasks. It combines the softmax activation and the cross-entropy loss in a single function, making it suitable for training models that output class logits. If the model were expected to produce other kinds of output, like binary classification or regression, we would use different loss functions, such as nn.BCEWithLogitsLoss() for binary classification or nn.MSELoss() for regression.
- optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY) initializes the Adam optimizer, a popular choice for training deep learning models. It adapts the learning rate for each parameter based on the first and second moments of the gradients. Other optimizers, like optim.SGD (Stochastic Gradient Descent) or optim.RMSprop, could also be used depending on the specific requirements of the training task.
- The model.train() method sets the model to training mode, making layers like dropout and batch normalization behave differently during training than during evaluation.
- optimizer.zero_grad() clears the gradients of all optimized tensors before the backward pass, which is necessary because gradients accumulate by default in PyTorch. If they are not cleared, gradients from previous iterations are added to the current gradients, leading to incorrect updates.
- loss.backward() computes the gradients of the loss with respect to the model parameters, which are then used by the optimizer to update the weights.
- optimizer.step() updates the model parameters based on the computed gradients and the learning rate.
import torch, torch.nn.functional as F
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from tqdm import tqdm
from sklearn.metrics import classification_report, confusion_matrix
import numpy as np
# ---------------------------------------------------------------------------
# 1. Globals
# ---------------------------------------------------------------------------
IMG_SIZE = 48 # model expects 48×48
NUM_CLASSES = 10 # MNIST has 10 digits
BATCH_SIZE = 64 # batch size for training and validation
EPOCHS = 5 # number of training epochs
LEARNING_RATE = 1e-3 # initial learning rate for Adam optimiser
WEIGHT_DECAY = 1e-4 # L2 regularisation to prevent overfitting
# Channel-wise mean / std for MNIST (grayscale → repeat for 3-channel input)
MNIST_MEAN = (0.1307, 0.1307, 0.1307)
MNIST_STD = (0.3081, 0.3081, 0.3081)
# ---------------------------------------------------------------------------
# 2. Transforms
# ---------------------------------------------------------------------------
# 1) Baseline transform: resize + tensor (no colour/aug/no normalise)
transform_base = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),      # 🔹 Resize → force all images to 48×48 so the CNN sees a fixed geometry
    transforms.Grayscale(num_output_channels=3),  # 🔹 Grayscale→RGB → MNIST is 1-channel; duplicate into 3 channels for convnet
    transforms.ToTensor(),                        # 🔹 ToTensor → convert PIL image [0–255] → float tensor [0.0–1.0]
])

# 2) Training transform: augment + normalise
transform_norm = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),      # keep 48×48 input size
    transforms.Grayscale(num_output_channels=3),  # still need 3 channels
    transforms.RandomRotation(10),                # 🔹 RandomRotation(±10°) → small tilt • rotation-invariance, combats overfitting
    transforms.ColorJitter(brightness=0.2,
                           contrast=0.2),         # 🔹 ColorJitter → pseudo-RGB brightness/contrast noise; extra variety
    transforms.ToTensor(),                        # convert to tensor before numeric ops
    transforms.Normalize(mean=MNIST_MEAN,
                         std=MNIST_STD),          # 🔹 Normalize → zero-centre & scale so every channel ≈ N(0,1)
])

# 3) Test/validation transform: only resize + normalise (no aug)
transform_test = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),      # same spatial size as train
    transforms.Grayscale(num_output_channels=3),  # match channel count
    transforms.ToTensor(),                        # tensor conversion
    transforms.Normalize(mean=MNIST_MEAN,
                         std=MNIST_STD),          # 🔹 keep test data on same scale as training data
])
# ---------------------------------------------------------------------------
# 3. Datasets & loaders
# ---------------------------------------------------------------------------
train_set = datasets.MNIST("data", train=True, download=True, transform=transform_norm)
test_set = datasets.MNIST("data", train=False, download=True, transform=transform_test)
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_set, batch_size=256, shuffle=False)
print(f"Training on {len(train_set)} samples, validating on {len(test_set)} samples.")
# ---------------------------------------------------------------------------
# 4. Model / loss / optimiser
# ---------------------------------------------------------------------------
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = MY_NET(num_classes=NUM_CLASSES).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY)
# ---------------------------------------------------------------------------
# 5. Training loop
# ---------------------------------------------------------------------------
for epoch in range(1, EPOCHS + 1):
    model.train()        # Set model to training mode enabling dropout and batch norm
    running_loss = 0.0   # sums batch losses to compute epoch average
    correct = 0          # number of correct predictions
    total = 0            # number of samples seen

    # tqdm wraps the loader to show a live progress-bar per epoch
    for X_batch, y_batch in tqdm(train_loader, desc=f"Epoch {epoch}", leave=False):
        # 3-a) Move data to GPU (if available) ----------------------------------
        X_batch, y_batch = X_batch.to(device), y_batch.to(device)

        # 3-b) Forward pass -----------------------------------------------------
        logits = model(X_batch)            # raw class scores (shape: [B, NUM_CLASSES])
        loss = criterion(logits, y_batch)

        # 3-c) Backward pass & parameter update --------------------------------
        optimizer.zero_grad()  # clear old gradients
        loss.backward()        # compute new gradients
        optimizer.step()       # gradient → weight update

        # 3-d) Statistics -------------------------------------------------------
        running_loss += loss.item() * X_batch.size(0)  # sum of (batch loss × batch size)
        preds = logits.argmax(dim=1)                   # predicted class labels
        correct += (preds == y_batch).sum().item()     # correct predictions in this batch
        total += y_batch.size(0)                       # samples processed so far

    # 3-e) Epoch-level metrics --------------------------------------------------
    epoch_loss = running_loss / total
    epoch_acc = 100.0 * correct / total
    print(f"[Epoch {epoch}] loss = {epoch_loss:.4f} | accuracy = {epoch_acc:.2f}%")

print("\n✅ Training finished.\n")
# ---------------------------------------------------------------------------
# 6. Evaluation on test set
# ---------------------------------------------------------------------------
model.eval() # Set model to evaluation mode (disables dropout and batch norm)
with torch.no_grad():
    logits_all, labels_all = [], []
    for X, y in test_loader:
        logits_all.append(model(X.to(device)).cpu())
        labels_all.append(y)
    logits_all = torch.cat(logits_all)
    labels_all = torch.cat(labels_all)
    preds_all = logits_all.argmax(1)

test_loss = criterion(logits_all, labels_all).item()
test_acc = (preds_all == labels_all).float().mean().item() * 100
print(f"Test loss: {test_loss:.4f}")
print(f"Test accuracy: {test_acc:.2f}%\n")

print("Classification report (precision / recall / F1):")
print(classification_report(labels_all, preds_all, zero_division=0))
print("Confusion matrix (rows = true, cols = pred):")
print(confusion_matrix(labels_all, preds_all))
Recurrent Neural Networks (RNNs)
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data, such as time series or natural language. Unlike traditional feedforward neural networks, RNNs have connections that loop back on themselves, allowing them to maintain a hidden state that captures information about previous inputs in the sequence.
The main components of RNNs include:
- Recurrent Layers: these layers process input sequences one time step at a time, updating their hidden state based on the current input and the previous hidden state. This allows RNNs to learn temporal dependencies in the data.
- Hidden State: the hidden state is a vector that summarizes the information from previous time steps. It is updated at each time step and is used to make predictions for the current input.
- Output Layer: the output layer produces the final predictions based on the hidden state. In many cases, RNNs are used for tasks like language modeling, where the output is a probability distribution over the next word in a sequence.
For example, in a language model, an RNN processes a sequence of words such as "The cat sat on the" and predicts the next word based on the context provided by the previous words, in this case "mat".
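In PyTorch this recurrence is available as nn.RNN; the sketch below (toy sizes chosen arbitrarily) just shows how the per-step outputs and the final hidden state come back:

```python
import torch
import torch.nn as nn

# Toy sizes: a 5-step sequence of 10-dimensional feature vectors
rnn = nn.RNN(input_size=10, hidden_size=20, batch_first=True)
x = torch.randn(1, 5, 10)  # (batch, seq_len, features)

output, h_n = rnn(x)
print(output.shape)  # torch.Size([1, 5, 20]): hidden state at every time step
print(h_n.shape)     # torch.Size([1, 1, 20]): final hidden state (num_layers, batch, hidden)
```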
Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU)
RNNs are particularly effective for tasks involving sequential data, such as language modeling, machine translation, and speech recognition. However, they can struggle with long-range dependencies due to problems like the vanishing gradient.
To address this, specialized architectures like Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) were developed. These architectures introduce gating mechanisms that control the flow of information, allowing them to capture long-range dependencies more effectively.
- LSTM: LSTM networks use three gates (input gate, forget gate, and output gate) to regulate the flow of information into and out of the cell state, enabling them to remember or forget information over long sequences. The input gate controls how much new information to add based on the input and the previous hidden state, while the forget gate controls how much information to discard. Combining the input gate and the forget gate yields the new cell state. Finally, combining the new cell state with the input and the previous hidden state yields the new hidden state.
- GRU: GRU networks simplify the LSTM architecture by combining the input and forget gates into a single update gate, making them computationally more efficient while still capturing long-range dependencies.
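Both are available as ready-made modules in PyTorch; note how the LSTM returns a separate cell state while the GRU does not (toy sizes chosen arbitrarily):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 5, 10)  # (batch, seq_len, features); sizes are arbitrary

lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
out_lstm, (h_n, c_n) = lstm(x)  # LSTM tracks a hidden state AND a cell state

gru = nn.GRU(input_size=10, hidden_size=20, batch_first=True)
out_gru, h_gru = gru(x)         # GRU merges its gates, so there is no cell state

print(out_lstm.shape, c_n.shape, out_gru.shape)
```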
LLMs (Large Language Models)
Large Language Models (LLMs) are a type of deep learning model specifically designed for natural language processing tasks. They are trained on vast amounts of text data and can generate human-like text, answer questions, translate languages, and perform various other language-related tasks. LLMs are typically based on transformer architectures, which use self-attention mechanisms to capture relationships between words in a sequence, allowing them to understand context and generate coherent text.
Transformer Architecture
The transformer architecture is the foundation of many LLMs. It consists of an encoder-decoder structure, where the encoder processes the input sequence and the decoder generates the output sequence. The key components of the transformer architecture include:
- Self-Attention Mechanism: this mechanism allows the model to weigh the importance of different words in a sequence when generating representations. It computes attention scores based on the relationships between words, enabling the model to focus on relevant context.
- Multi-Head Attention: this component allows the model to capture multiple relationships between words by using several attention heads, each focusing on different aspects of the input.
- Positional Encoding: since transformers have no built-in notion of word order, positional encodings are added to the input embeddings to provide information about the position of each word in the sequence.
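A minimal single-head self-attention computation can be sketched as follows (random toy projections, not a full transformer block; the names W_q, W_k, and W_v are just illustrative):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 4, 8)  # toy sequence: 4 tokens, embedding dimension 8
d_k = 8

W_q, W_k, W_v = (torch.randn(8, 8) for _ in range(3))  # toy projection matrices
Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.transpose(-2, -1) / d_k ** 0.5  # attention scores between all token pairs
weights = F.softmax(scores, dim=-1)            # each row is a probability distribution
out = weights @ V                              # context-weighted mix of the values

print(out.shape)  # torch.Size([1, 4, 8])
```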
Diffusion Models
Diffusion models are a class of generative models that learn to generate data by simulating a diffusion process. They are particularly effective for tasks like image generation and have gained popularity in recent years. Diffusion models work by gradually transforming a simple noise distribution into a complex data distribution through a series of diffusion steps. The key components of diffusion models include:
- Forward Diffusion Process: this process gradually adds noise to the data, transforming it into a simple noise distribution. The forward diffusion process is typically defined by a series of noise levels, where each level corresponds to a specific amount of noise added to the data.
- Reverse Diffusion Process: this process learns to reverse the forward diffusion process, gradually denoising the data to generate samples from the target distribution. The reverse diffusion process is trained using a loss function that encourages the model to reconstruct the original data from noisy samples.
Moreover, to generate an image from a text prompt, diffusion models typically follow these steps:
- Text Encoding: the text prompt is encoded into a latent representation using a text encoder (e.g., a transformer-based model). This representation captures the semantic meaning of the text.
- Noise Sampling: a random noise vector is sampled from a Gaussian distribution.
- Diffusion Steps: the model applies a series of diffusion steps, gradually transforming the noise vector into an image that corresponds to the text prompt. Each step involves applying learned transformations to denoise the image.
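The forward (noising) process described above can be sketched in a few lines; this toy uses a made-up linear noise schedule and random data, and omits the learned reverse model entirely:

```python
import torch

torch.manual_seed(0)
x0 = torch.randn(1, 3, 8, 8)             # pretend this is a clean image
alphas = torch.linspace(0.99, 0.90, 10)  # toy per-step signal-retention schedule
alpha_bar = torch.cumprod(alphas, dim=0) # cumulative signal kept up to step t

t = 5
noise = torch.randn_like(x0)
# Blend the data with Gaussian noise: more steps -> less signal, more noise
x_t = alpha_bar[t].sqrt() * x0 + (1 - alpha_bar[t]).sqrt() * noise

print(x_t.shape)  # same shape as x0; progressively noisier as t grows
```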