This webpage offers a curated, category-wise collection of Artificial Intelligence resources, including courses, books, playlists, research papers, blogs, code snippets, and repositories.
More coming soon...
Back to Categories
# | Book Name | Link |
---|---|---|
1 | Deep Learning - Ian Goodfellow | Link
2 | Understanding Deep Learning | Link |
3 | Dive into Deep Learning | Link |
4 | The Little Book of Deep Learning | Link |
5 | Grokking Deep Learning | Link |
6 | Practical Deep Learning for Coders - fastai | Link
7 | Meta Learning - How To Learn Deep Learning And Thrive… | Link
8 | David MacKay - Information Theory, Inference, and Learning Algorithms | Link
Back to Categories
# | Course Name | Link |
---|---|---|
1 | DeepLearning.AI | Link |
2 | NYU Deep Learning - Yann LeCun | Link
3 | The Complete Mathematics of Neural Networks and Deep Learning | Link |
4 | Intro to Deep Learning - Sebastian Raschka | Link
5 | Practical Deep Learning for Coders - fastai | Link
6 | Full Stack Deep Learning - 2022 | Link
7 | David MacKay - Information Theory, Pattern Recognition, and Neural Networks | Link
8 | UC Berkeley CS 182: Deep Learning | Link |
9 | MIT - Introduction to Deep Learning | Link
10 | CS231n - Deep Learning for Computer Vision | Link
11 | CS224d - Deep Learning for Natural Language Processing | Link
12 | Machine Learning - Caltech by Yaser Abu-Mostafa (2012-2014) | Link |
13 | Neural networks class by Hugo Larochelle from Université de Sherbrooke (2013) | Link
14 | A.I. - MIT by Patrick Henry Winston (2010) | Link
15 | Vision and learning - computers and brains by Shimon Ullman, Tomaso Poggio, Ethan Meyers @ MIT (2013) | Link
16 | Deep Learning for Natural Language Processing - Stanford (2017) | Link
17 | Machine Learning - Oxford (2014-2015) | Link |
18 | Deep Learning - UWaterloo by Prof. Ali Ghodsi at University of Waterloo (2015) | Link |
19 | Statistical Machine Learning - CMU by Prof. Larry Wasserman | Link |
20 | Introduction to Deep Learning by Prof. Bhiksha Raj (2017) | Link |
21 | Deep Learning - UC Berkeley STAT-157 by Alex Smola and Mu Li (2019) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | CNN from Scratch with pure Mathematical Intuition | Link |
2 | Convolutional Neural Network (CNN): A Complete Guide | Link |
3 | CNN Explainer | Link |
4 | ConvNetJS - Deep Learning in your browser | Link
5 | Convolutional Neural Networks Explained (CNN Visualized) | Link |
6 | CNNs from different viewpoints | Link |
7 | Image Kernels | Link |
8 | Visualizing what ConvNets learn | Link |
9 | Convolutions in Image Processing | Link |
10 | Understanding "convolution" operations in CNN | Link
11 | Convolutional Neural Networks, Explained | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Zero to Mastery Learn PyTorch for Deep Learning | Link |
2 | Learn PyTorch for deep learning in a day. Literally. | Link |
3 | PyTorch internals - ezyang's blog | Link
4 | MiniTorch | Link |
5 | PyTorch is dead. Long live JAX. - Blog | Link |
6 | Inside the Matrix: Visualizing Matrix Multiplication, Attention and Beyond | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Programming Massively Parallel Processors (2021) | Link
2 | Programming Massively Parallel Processors (2019) | Link
3 | Programming Massively Parallel Processors Book | Link
4 | CUDA C++ Programming Guide | Link
5 | How GPU Computing Works (GTC 2021) | Link
6 | GPU Programming: When, Why and How? | Link
7 | Making Deep Learning Go Brrrr From First Principles | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | COS 484: Natural Language Processing Spring 2025 | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | FlexAttention: The Flexibility of PyTorch with the Performance of FlashAttention | Link |
2 | Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, the World's Largest and Most Powerful Generative Language Model | link
3 | Understanding GPU Memory 1: Visualizing All Allocations over Time | link |
4 | Visualize and understand GPU memory in PyTorch | link |
5 | Data-Parallel Distributed Training of Deep Learning Models | link |
6 | Scaling Language Model Training to a Trillion Parameters Using Megatron | link |
7 | Bringing HPC Techniques to Deep Learning | link |
8 | Ring Attention Explained | link |
9 | Training your large model with DeepSpeed | link |
10 | Visualizing 6D Mesh Parallelism | link |
11 | Building Meta's GenAI Infrastructure | link
12 | 100,000 H100 Clusters: Power, Network Topology, Ethernet vs InfiniBand, Reliability, Failures, Checkpointing | link |
13 | gpu | link |
14 | Mixture of Experts Explained | link |
15 | Go smol or go home | link |
16 | In the long (context) run | link |
17 | Introducing Async Tensor Parallelism in PyTorch | link |
18 | A guide to PyTorch's CUDA Caching Allocator | link
19 | Transformer Math (Part 1) - Counting Model Parameters | link |
20 | Activation Memory: A Deep Dive using PyTorch | link |
21 | Conference Talk 12: Slaying OOMs with PyTorch FSDP and torchao | link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Recurrent Neural Networks Tutorial, Part 1 โ Introduction to RNNs | Link |
2 | Understanding LSTM Networks | Link |
3 | Predict Stock Prices Using RNN: Part 1 | Link |
4 | Recurrent Neural Networks (RNN) - Made With ML | Link |
5 | RNNs and LSTMs - jurafsky, stanford | Link |
6 | The Unreasonable Effectiveness of Recurrent Neural Networks - Karpathy | Link |
7 | NLP from Scratch - PyTorch | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Attention is all you need (Transformer) Umar Jamil | Link |
2 | Build a Large Language Model (From Scratch) - Sebastian Raschka | Link |
3 | Create a Large Language Model from Scratch with Python - Tutorial | Link |
4 | Intro to Transformers (slides) - giffmana | Link |
5 | [ML 2024] Transformers - Lucas Beyer (giffmana) | Link |
6 | TRANSFORMER EXPLAINER - Polo Club | Link |
7 | The Illustrated GPT-2 (Visualizing Transformer Language Models) | Link |
8 | ATTENTION IS ALL YOU NEED - Implementation | Link |
9 | Linear Relationships in the Transformer's Positional Encoding | Link
10 | Implement and Train ViT From Scratch for Image Recognition - PyTorch | Link |
11 | a smol course - huggingface | Link |
12 | How I Studied LLMs in Two Weeks: A Comprehensive Roadmap | Link
13 | How Large Language Models Work - From Zero to ChatGPT | Link
14 | Building effective agents - Anthropic | Link |
15 | LLM Visualization | Link
16 | LLM course - huggingface | Link |
17 | Neural Networks: Zero to Hero | Link |
18 | Stanford CS229 (2023) | Link |
19 | Building an LLM from Scratch (Sebastian Raschka, 2024) | Link |
20 | General Audience Large Language Models (Andrej Karpathy, 2024) | Link |
21 | Foundations of Large Language Models by Tong Xiao and Jingbo Zhu | Link
22 | Hands-On Large Language Models | Link |
23 | The Illustrated Transformer | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Blog | Link |
2 | Neural Networks: Zero to Hero | Link |
3 | CS231n Winter 2016 | Link |
4 | CS231n: Convolutional Neural Networks for Visual Recognition | Link |
5 | EurekaLabsAI | Link |
6 | GitHub | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness | Link |
2 | MIXED PRECISION TRAINING | Link |
3 | FP8-LM: Training FP8 Large Language Models | Link |
4 | Small-scale proxies for large-scale Transformer training instabilities | link |
5 | BREADTH-FIRST PIPELINE PARALLELISM | Link |
6 | DeepSeek-V3 Technical Report | Link |
7 | ZERO BUBBLE PIPELINE PARALLELISM | link |
8 | Mixtral of Experts | link |
9 | Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity | link |
10 | A Survey on Mixture of Experts in Large Language Models | Link |
11 | GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding | link |
12 | An Empirical Model of Large-Batch Training | link |
13 | Reducing Activation Recomputation in Large Transformer Models | link |
14 | Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism | link |
15 | PaLM: Scaling Language Modeling with Pathways | link |
16 | Gemini: A Family of Highly Capable Multimodal Models | link |
17 | The Llama 3 Herd of Models | Link |
18 | ZeRO: Memory Optimizations Toward Training Trillion Parameter Models | link |
19 | PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel | link |
20 | Fire-Flyer AI-HPC: A Cost-Effective Software-Hardware Co-Design for Deep Learning | link |
21 | Fast Transformer Decoding: One Write-Head is All You Need | link |
22 | GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints | link |
23 | Domino: Eliminating Communication in LLM Training via Generic Tensor Slicing and Overlapping | link |
24 | Ring Attention with Blockwise Transformers for Near-Infinite Context | link |
25 | STRIPED ATTENTION: FASTER RING ATTENTION FOR CAUSAL TRANSFORMERS | link
Back to Categories
# | Title | Link |
---|---|---|
1 | How to Scale Your Model by Google | Link
2 | The Ultra-Scale Playbook: Training LLMs on GPU Clusters by Hugging Face | Link
3 | Tiny LLM - LLM Serving in a Week | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Llama from scratch | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Model Context Protocol (MCP) Course by HuggingFace | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Machine Learning Specialization (Coursera) | Link |
2 | A Visual Introduction to Machine Learning | Link |
3 | Visual explanations of core machine learning concepts | Link |
4 | Papers & tech blogs by companies sharing their work on data science & machine learning in production. | Link |
5 | CS229: Machine Learning | Link |
6 | Pen and Paper Exercises in Machine Learning | Link |
7 | Interpretable Machine Learning | Link |
8 | Math for Data Science and Machine Learning | Link
9 | Machine Learning: A Probabilistic Perspective | Link
10 | XGBOOSTING | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | What's Really Going On in Machine Learning? Some Minimal Models | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Deep Learning for computer vision, by Andrej Karpathy | Link |
2 | Computer Vision & Deep Learning (freeCodeCamp) | Link |
3 | Computer Vision with Prof. Tom Yeh | Link |
4 | Computer vision for dummies | Link |
5 | Training a CLIP Model from Scratch for a Fashion Image Retrieval App | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | How to build an LLM inference engine using C++ and CUDA from scratch without libraries | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | The only video you need to Master N8N + AI agents (For complete beginners) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | A survivor's guide to Artificial Intelligence courses at Stanford (Updated Feb 2020) | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Vision Transformers Need Registers (2024) | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | MLOps guide | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Introduction to Machine Learning Interviews Book | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | O3 beats a master-level GeoGuessr player, even with fake EXIF data | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Understanding Reasoning LLMs by Sebastian Raschka | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Relational Databases vs Vector Databases | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | DeepSeek-Prover-V2 | Link |
2 | GPT-1 - Improving Language Understanding by Generative Pre-Training (2018) | Link
3 | GPT-2 - Language Models are Unsupervised Multitask Learners (2019) | Link
4 | GPT-3 - Language Models are Few-Shot Learners (2020) | Link
5 | ChatGPT: Trained with RLHF - Reinforcement Learning from Human Feedback (Ouyang et al., 2022) | Link
6 | GPT-4 - GPT-4 Technical Report (2023) | Link
7 | Claude (Anthropic) Constitutional AI: Harmlessness from AI Feedback (2022) | Link |
8 | Gemini: A Family of Highly Capable Multimodal Models (2023) | Link |
9 | Start building with Gemini 2.5 Flash (2025) | Link
10 | Gemma (Google) Gemma: Open Models for Responsible AI (2024) | Link
11 | Gemma 3 Technical Report (2025) | Link
12 | LLaMA Series (Meta AI) LLaMA: Open and Efficient Foundation Language Models (2023) | Link
13 | LLaMA 2: Improved training and safety (2023) | Link
14 | Llama 3: The Llama 3 Herd of Models | Link
15 | Llama 4: The beginning of a new era of natively multimodal AI innovation | Link
16 | Mistral AI (France) Mistral 7B: Grouped-query attention (2023) | Link
17 | Kimi by Moonshot AI (China) Scaling RL with LLMs: Technical Report of Kimi k1.5 (2025) | Link |
18 | DeepSeek (China) DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models | Link
19 | DeepSeek-V3 Technical Report (2024) | Link |
20 | DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning | Link |
21 | Qwen (China) Qwen Technical Report (2023) | Link
22 | Qwen2 Technical Report (2024) | Link
23 | Qwen2.5 Technical Report (2024) | Link
24 | Qwen2.5-Omni Technical Report - Multimodal (2025) | Link
25 | Qwen3: Think Deeper, Act Faster (2025) | Link |
26 | Phi-4-reasoning Technical Report (2025) | Link |
27 | Phi-4-Mini-Reasoning: Exploring the Limits of Small Reasoning Language Models in Math (2025) | Link |
28 | OpenAI's GPT-3 Language Model: A Technical Overview (2020) | link
Back to Categories
# | Title | Link |
---|---|---|
1 | LLM-powered phone GUI agents in phone automation | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Diffusion Models from statistical first principles | Link |
2 | Implement Diffusion Models from Scratch with a Transformer | Link
3 | Denoising Diffusion Probabilistic Models (Ho et al., 2020) | Link |
4 | Playlist to learn Diffusion models | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Language Modeling from Scratch - Stanford University NLP | Link
2 | NLP Demystified | Link |
3 | 1.5 Stemming, Lemmatization, Stopwords, POS Tagging | Link |
4 | A curated list of resources dedicated to Natural Language Processing (NLP) | Link |
5 | Excited to teach Advanced NLP at CMU this semester | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Learning Representations by Back-Propagating Errors (Rumelhart et al., 1986) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Reward Modeling Part 1: Bradley-Terry Model | Link |
2 | An interpretable reward modeling approach | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Interpreting Language Model Preferences Through the Lens of Decision Trees | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Linear Relationships in the Transformer's Positional Encoding | Link
2 | Transformer Architecture: The Positional Encoding | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Attention Is All You Need (Vaswani et al., 2017) | Link |
2 | Implement Flash Attention Backend in SGLang - Basics and KV Cache (2025) | Link
3 | Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | BERT: Pre-training of Deep Bidirectional Transformers (Devlin et al., 2018) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Chunking Strategies for LLM Applications (2023) | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Alignment Guidebook | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Language Models are Few-Shot Learners (Brown et al., 2020) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Chain of Thought Prompting (Wei et al., 2022) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Scaling Laws for Neural Language Models (Kaplan et al., 2020) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | AGI is not a milestone | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | When ChatGPT Broke an Entire Field: An Oral History (2025) | Link
2 | From Large Language Models to Reasoning Language Models - Three Eras in The Age of Computation. | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Direct Preference Optimization (Rafailov et al., 2023) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | LoRA: Low-Rank Adaptation (Hu et al., 2021) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Fine-Tuning vs Retrieval Augmented Generation (2023) | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Retrieval-Augmented Generation (Lewis et al., 2020) | Link |
2 | Advanced RAG: Precise Zero-Shot Dense Retrieval with HyDE | Link |
3 | Retrieval Augmented Generation (RAG) from Scratch - Tutorial For Dummies | Link
4 | Multi-modal RAG | Link
5 | Beginner's Guide to RAG by Tom Yeh | Link
6 | Retrieval Augmented Generation, Ragas | Link
7 | Simplest Method to Improve RAG Pipeline: Re-Ranking (2023) | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Reinforcement Learning from Human Feedback by Nathan Lambert | Link |
2 | RLHF: Reinforcement Learning from Human Feedback by Chip Huyen | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Berkeley CS 294: Deep Reinforcement Learning | Link |
2 | Spinning Up in Deep Reinforcement Learning - A free deep reinforcement learning course by OpenAI (2019) | Link |
3 | A comprehensive overview of Reinforcement Learning methods | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Two Minute Papers | Link
2 | DeepLearning AI | Link
3 | Lex Fridman | Link
4 | 3Blue1Brown | Link
5 | Andrej Karpathy | Link
6 | Sentdex | Link
7 | Matt Wolfe | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Towards Data Science | Link
2 | OpenAI Blog | Link
3 | MarkTechPost | Link
4 | DeepMind Blog | Link
5 | Anthropic Blog | Link
6 | Berkeley BAIR | Link
7 | Hugging Face Blog | Link
8 | Google Research | Link
9 | Mehmet Burak Sayıcı | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | LLM Embedding Explained | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Matrix Calculus for Machine Learning and Beyond | Link
2 | History of Mathematics | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | How LLMs work - by 3b1b | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Python Crash Course | Link
2 | Cloud Data Science for Dummies | Link |
3 | Cost-Effective Data Pipelines | Link |
4 | Data Engineer with Python | Link
5 | Data Pipelines Pocket Reference | Link
6 | Database Internals: A Deep Dive into How Distributed Data Systems Work | Link
7 | Deciphering Data Architectures | Link
8 | Foundations of Scalable Systems | Link
9 | Fundamentals of Data Engineering: Plan and Build Robust Data Systems | Link
10 | Hadoop: The Definitive Guide | Link
11 | Introduction to Machine Learning with Python | Link |
12 | SQL for Data Analysis | Link |
13 | Storytelling with Data: A Data Visualization Guide for Business Professionals | Link
14 | Terraform Up and Running | Link |
15 | The Data Engineer Skills & Tools Guide | Link |
16 | The Data Warehouse Toolkit | Link |
17 | Think Stats, 2nd Edition: Exploratory Data Analysis | Link
18 | Kafka: The Definitive Guide | Link
19 | Practical Synthetic Data Generation: Balancing Privacy and the Broad Availability of Data | Link
20 | Understanding Deep Learning | Link |
21 | Dive into Deep Learning | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Asynchronous Deep Reinforcement Learning (Google DeepMind 2016) | Link
2 | Deep Reinforcement Learning from Human Preferences (OpenAI 2017) | Link
3 | Proximal Policy Optimization (OpenAI 2017) | Link |
4 | Fine-Tuning Language Models from Human Preferences (OpenAI 2020) | Link |
5 | Learning to Summarize from Human Feedback (OpenAI 2022) | Link |
6 | Direct Preference Optimization (Stanford University 2023) | Link
7 | Group Relative Policy Optimization (DeepSeek 2024) | Link
8 | Reinforcement learning with verifiable rewards (DeepSeek 2025) | Link |
9 | Reinforcement Learning from Human Feedback (Nathan Lambert) | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | OUTRAGEOUSLY LARGE NEURAL NETWORKS: THE SPARSELY-GATED MIXTURE-OF-EXPERTS LAYER | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Large Language Model Agents (Fall 2024) | Link
2 | Advanced Large Language Model Agents (Spring 2025) | Link
3 | CS 294-131: Special Topics in Deep Learning Fall, 2016 | Link |
4 | CS 294-131: Special Topics in Deep Learning Spring 2017 | Link |
5 | CS 294-131: Special Topics in Deep Learning Fall 2017 | Link |
6 | CS 294-131: Special Topics in Deep Learning Spring 2018 | Link |
7 | CS 294-131: Trustworthy Deep Learning (Special Topics in Deep Learning) Spring 2019 | Link |
8 | CS294/194-196: Responsible GenAI and Decentralized Intelligence Fall 2023 | Link |
9 | CS294-267/CS194-267 Understanding Large Language Models: Foundations and Safety Spring 24 | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Finetune Phi-4 for free on Colab | Link
2 | Understanding Parameter-Efficient Finetuning of Large Language Models: From Prefix Tuning to LLaMA-Adapters | Link |
3 | Practical Tips for Finetuning LLMs Using LoRA (Low-Rank Adaptation) | Link |
4 | PEFT: Parameter-Efficient Fine-Tuning of Billion-Scale Models on Low-Resource Hardware | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Tensor Product Attention Is All You Need | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Supervised Learning: A Comprehensive Guide | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Getting started with AI, from basics to advanced, as taught at IISc Bangalore in the MTech AI program | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Hugging Face Agents Course | Link |
2 | Agents by Chip Huyen | Link |
3 | Large Language Model Agents MOOC, Fall 2024 | Link |
4 | Advanced Large Language Model Agents MOOC, Spring 2025 | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Algorithms for AI & ML | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Prompt Engineering White Paper | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | Practical Statistics for Data Scientists | Link
2 | The Elements of Statistical Learning | Link
3 | Naked Statistics: Stripping the Dread from the Data | Link
4 | How to Lie with Statistics | Link |
5 | All of Statistics | Link |
Back to Categories
# | Title | Link |
---|---|---|
1 | Microsoft's Generative AI course | Link
Back to Categories
# | Title | Link |
---|---|---|
1 | STATS 202: Data Mining and Analysis | Link |
2 | CS109: Introduction to Probability for Computer Scientists | Link |
3 | CS231N: Convolutional Neural Networks for Visual Recognition | Link |
4 | CS224N: Natural Language Processing with Deep Learning | Link |
5 | CS229: Machine Learning | Link |
6 | CS221: Artificial Intelligence: Principles and Techniques | Link |
7 | CS228: Probabilistic Graphical Models: Principles and Techniques | Link |
8 | CS234: Reinforcement Learning | Link |
9 | CS238: Decision Making under Uncertainty (AA 228) | Link |
10 | CS224W: Machine Learning with Graphs | Link |
11 | CS246: Mining Massive Data Sets | Link |
12 | CS230: Deep Learning | Link |
13 | CS236: Deep Generative Models | Link |
14 | EE263: Introduction to Linear Dynamical Systems | Link |
15 | CS336: Robot Perception and Decision-Making | Link |
Back to Categories
# | Title | Week |
---|---|---|
1 | 1. Phi-4-Mini-Reasoning 2. Building Production-Ready AI Agents with Scalable Long-Term Memory 3. UniversalRAG 4. DeepSeek-Prover-V2 5. Kimi-Audio 6. MiMo-7B 7. Advances and Challenges in Foundation Agents 8. MAGI 9. A Survey of Efficient LLM Inference Serving 10. LLM for Engineering | (April 28 - May 4, 2025)