Accelerating matrix multiplication in deep learning by using low-rank approximation K Osawa, A Sekiya, H Naganuma, R Yokota 2017 International Conference on High Performance Computing & Simulation …, 2017 | 27 | 2017 |
Empirical Study on Optimizer Selection for Out-of-Distribution Generalization Hiroki Naganuma, Kartik Ahuja, Shiro Takagi, Tetsuya Motokawa, Rio Yokota ... Transactions on Machine Learning Research, 2023 | 10* | 2023 |
Augmenting NER datasets with LLMs: towards automated and refined annotation Y Naraki, R Yamaki, Y Ikeda, T Horie, K Yoshida, R Shimizu, ... arXiv preprint arXiv:2404.01334, 2024 | 6 | 2024 |
No wrong turns: The simple geometry of neural networks optimization paths C Guille-Escuret, H Naganuma, K Fatras, I Mitliagkas arXiv preprint arXiv:2306.11922, 2023 | 6 | 2023 |
Optimal transport meets noisy label robust loss and mixup regularization for domain adaptation K Fatras, H Naganuma, I Mitliagkas Conference on Lifelong Learning Agents, 966-981, 2022 | 6 | 2022 |
A Performance Improvement Approach for Second-Order Optimization in Large Mini-batch Training H Naganuma, R Yokota CCGRID 2019, 696-703, 2019 | 5 | 2019 |
Necessary and Sufficient Hypothesis of Curvature: Understanding Connection Between Out-of-Distribution Generalization and Calibration H Naganuma, M Kimura ICLR2023 Workshop on Domain Generalization, 2023 | 4 | 2023 |
Accelerating Convolutional Neural Networks Using Low Precision Arithmetic H Naganuma, R Yokota HPC Asia 2018, 2018 | 4 | 2018 |
An Empirical Investigation of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration H Naganuma, R Hataya, I Mitliagkas arXiv preprint arXiv:2307.08187, 2023 | 3 | 2023 |
How Image Corruption and Perturbation Affect Out-Of-Distribution Generalization and Calibration K Tada, H Naganuma International Joint Conference on Neural Networks (IJCNN 2023), 2023 | 3 | 2023 |
Accelerating Matrix Multiplication in Deep Learning Using Low-Rank Approximation (in Japanese) A Sekiya, K Osawa, H Naganuma, R Yokota IPSJ SIG Technical Reports, High Performance Computing (HPC) 2017 (24), 1-7, 2017 | 3 | 2017 |
Smoothness-Adaptive Sharpness-Aware Minimization for Finding Flatter Minima H Naganuma, JL Kim, A Kyrillidis, I Mitliagkas 5th Workshop on practical ML for limited/low resource settings, 2024 | 2 | 2024 |
Towards Understanding Variants of Invariant Risk Minimization through the Lens of Calibration K Yoshida, H Naganuma Transactions on Machine Learning Research, 2024 | 1 | 2024 |
Takeuchi's Information Criteria as Generalization Measures for DNNs Close to NTK Regime H Naganuma, T Suzuki, R Yokota, M Nomura, K Ishikawa, I Sato | 1 | 2021 |
Geometric insights into focal loss: Reducing curvature for enhanced model calibration M Kimura, H Naganuma Pattern Recognition Letters, 2025 | | 2025 |
Pseudo-Asynchronous Local SGD: Robust and Efficient Data-Parallel Training H Naganuma, X Zhang, MC Yue, I Mitliagkas, RJ Hewett, PA Witte, ... OPT 2024: Optimization for Machine Learning, 2024 | | 2024 |
Mastering Task Arithmetic: τJp as a Key Indicator for Weight Disentanglement K Yoshida, Y Naraki, T Horie, R Yamaki, R Shimizu, Y Saito, J McAuley, ... NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles …, 2024 | | 2024 |
A Survey on Product Placement Strategies: Evaluating Effectiveness and Persuasion Resistance L Fujima, Y Mamiya, H Naganuma Available at SSRN 4649030, 2023 | | 2023 |
Story-to-Images Translation: Leveraging Diffusion Models and Large Language Models for Sequence Image Generation H Kumagai, R Yamaki, H Naganuma Proceedings of the 2nd Workshop on User-centric Narrative Summarization of …, 2023 | | 2023 |