Haidar Khan
National Center for AI - Saudi Arabia
Focal onset seizure prediction using convolutional networks
H Khan, L Marcuse, M Fields, K Swann, B Yener
IEEE Transactions on Biomedical Engineering 65 (9), 2109-2118, 2017
Alexa teacher model: Pretraining and distilling multi-billion-parameter encoders for natural language understanding systems
J FitzGerald, S Ananthakrishnan, K Arkoudas, D Bernardi, A Bhagia, ...
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and …, 2022
AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model
S Soltan, S Ananthakrishnan, J FitzGerald, R Gupta, W Hamza, H Khan, ...
arXiv preprint arXiv:2208.01448, 2022
ASR N-Best Fusion Nets
X Liu, M Li, L Chen, P Wanigasekara, W Ruan, H Khan, W Hamza, C Su
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
Generation & Evaluation of Adversarial Examples for Malware Obfuscation
D Park, H Khan, B Yener
2019 18th IEEE International Conference On Machine Learning And Applications …, 2019
RescoreBERT: Discriminative Speech Recognition Rescoring with BERT
L Xu, Y Gu, J Kolehmainen, H Khan, A Gandhe, A Rastrow, A Stolcke, ...
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
Learning filter widths of spectral decompositions with wavelets
H Khan, B Yener
Advances in Neural Information Processing Systems 31, 4601-4612, 2018
Don't Parse, Insert: Multilingual Semantic Parsing with Insertion Based Decoding
Q Zhu, H Khan, S Soltan, S Rawls, W Hamza
arXiv preprint arXiv:2010.03714, 2020
Optimal Mini-Batch Size Selection for Fast Gradient Descent
MP Perrone, H Khan, C Kim, A Kyrillidis, J Quinn, V Salapura
arXiv preprint arXiv:1911.06459, 2019
Deep density ratio estimation for change point detection
H Khan, L Marcuse, B Yener
arXiv preprint arXiv:1905.09876, 2019
Limitations of Knowledge Distillation for Zero-shot Transfer Learning
S Soltan, H Khan, W Hamza
Proceedings of the Second Workshop on Simple and Efficient Natural Language …, 2021
Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning
MS Ozdayi, C Peris, J Fitzgerald, C Dupuy, J Majmudar, H Khan, R Parikh, ...
arXiv preprint arXiv:2305.11759, 2023
Controlled Data Generation via Insertion Operations for NLU
M Kumar, Y Merhav, H Khan, R Gupta, A Rumshisky, W Hamza
Proceedings of the 2022 Conference of the North American Chapter of the …, 2022
Unfreeze with Care: Space-Efficient Fine-Tuning of Semantic Parsing Models
W Sun, H Khan, N Guenon des Mesnards, M Rubino, K Arkoudas
Proceedings of the ACM Web Conference 2022, 999-1007, 2022
Squashed weight distribution for low bit quantization of deep models
N Ström, H Khan, W Hamza
Compressing Transformer-Based Semantic Parsing Models using Compositional Code Embeddings
P Prakash, SK Shashidhar, W Zhao, S Rongali, H Khan, M Kayser
arXiv preprint arXiv:2010.05002, 2020
Short Paper: Creating Adversarial Malware Examples using Code Insertion
D Park, H Khan, B Yener
CoRR abs/1904.04802, 2019
When Benchmarks are Targets: Revealing the Sensitivity of Large Language Model Leaderboards
N Alzahrani, HA Alyahya, Y Alnumay, S Alrashed, S Alsubaie, ...
arXiv preprint arXiv:2402.01781, 2024
Output Randomization: A Novel Defense for both White-box and Black-box Adversarial Models
D Park, H Khan, A Khan, A Gittens, B Yener
arXiv preprint arXiv:2107.03806, 2021