Hiroki Naganuma
Other names: 長沼大樹
Université de Montréal, Mila - Quebec Artificial Intelligence Institute, PhD Student
Verified email at mila.quebec
Title · Cited by · Year
Accelerating matrix multiplication in deep learning by using low-rank approximation
K Osawa, A Sekiya, H Naganuma, R Yokota
2017 International Conference on High Performance Computing & Simulation …, 2017
27 · 2017
Empirical Study on Optimizer Selection for Out-of-Distribution Generalization
Hiroki Naganuma, Kartik Ahuja, Shiro Takagi, Tetsuya Motokawa, Rio Yokota ...
Transactions on Machine Learning Research, 2023
10* · 2023
Augmenting NER datasets with LLMs: towards automated and refined annotation
Y Naraki, R Yamaki, Y Ikeda, T Horie, K Yoshida, R Shimizu, ...
arXiv preprint arXiv:2404.01334, 2024
6 · 2024
No wrong turns: The simple geometry of neural networks optimization paths
C Guille-Escuret, H Naganuma, K Fatras, I Mitliagkas
arXiv preprint arXiv:2306.11922, 2023
6 · 2023
Optimal transport meets noisy label robust loss and mixup regularization for domain adaptation
K Fatras, H Naganuma, I Mitliagkas
Conference on Lifelong Learning Agents, 966-981, 2022
6 · 2022
A Performance Improvement Approach for Second-Order Optimization in Large Mini-batch Training
H Naganuma, R Yokota
CCGRID 2019, 696-703, 2019
5 · 2019
Necessary and Sufficient Hypothesis of Curvature: Understanding Connection Between Out-of-Distribution Generalization and Calibration
H Naganuma, M Kimura
ICLR2023 Workshop on Domain Generalization, 2023
4 · 2023
Accelerating Convolutional Neural Networks Using Low Precision Arithmetic
H Naganuma, R Yokota
HPC Asia 2018, 2018
4 · 2018
An Empirical Investigation of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration
H Naganuma, R Hataya, I Mitliagkas
arXiv preprint arXiv:2307.08187, 2023
3 · 2023
How Image Corruption and Perturbation Affect Out-Of-Distribution Generalization and Calibration
K Tada, H Naganuma
International Joint Conference on Neural Networks (IJCNN 2023), 2023
3 · 2023
Accelerating Matrix Multiplication in Deep Learning Using Low-Rank Approximation (in Japanese)
A Sekiya, K Osawa, H Naganuma, R Yokota
IPSJ SIG Technical Reports: High Performance Computing (HPC) 2017 (24), 1-7, 2017
3 · 2017
Smoothness-Adaptive Sharpness-Aware Minimization for Finding Flatter Minima
H Naganuma, JL Kim, A Kyrillidis, I Mitliagkas
5th Workshop on practical ML for limited/low resource settings, 2024
2 · 2024
Towards Understanding Variants of Invariant Risk Minimization through the Lens of Calibration
K Yoshida, H Naganuma
Transactions on Machine Learning Research, 2024
1 · 2024
Takeuchi's Information Criteria as Generalization Measures for DNNs Close to NTK Regime
H Naganuma, T Suzuki, R Yokota, M Nomura, K Ishikawa, I Sato
1 · 2021
Geometric insights into focal loss: Reducing curvature for enhanced model calibration
M Kimura, H Naganuma
Pattern Recognition Letters, 2025
2025
Pseudo-Asynchronous Local SGD: Robust and Efficient Data-Parallel Training
H Naganuma, X Zhang, MC Yue, I Mitliagkas, RJ Hewett, PA Witte, ...
OPT 2024: Optimization for Machine Learning, 2024
2024
Mastering Task Arithmetic: Jp as a Key Indicator for Weight Disentanglement
K Yoshida, Y Naraki, T Horie, R Yamaki, R Shimizu, Y Saito, J McAuley, ...
NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles …, 2024
2024
A Survey on Product Placement Strategies: Evaluating Effectiveness and Persuasion Resistance
L Fujima, Y Mamiya, H Naganuma
Available at SSRN 4649030, 2023
2023
Story-to-Images Translation: Leveraging Diffusion Models and Large Language Models for Sequence Image Generation
H Kumagai, R Yamaki, H Naganuma
Proceedings of the 2nd Workshop on User-centric Narrative Summarization of …, 2023
2023
An Empirical Study of Pre-trained Model Selection for Out-of-Distribution Generalization and Calibration
H Naganuma, R Hataya, I Mitliagkas
arXiv preprint arXiv:2307.08187, 2023
2023
Articles 1–20