Tags¶
- Metadata: #topic
- Part of: Artificial Intelligence Science Engineering Science Technology
- Related:
- Includes:
- Additional:
Significance¶
Intuitive summaries¶
Definitions¶
- A branch of artificial intelligence that focuses on statistical algorithms able to generalize from data and thus perform tasks without explicit instructions.
Technical summaries¶
Main resources¶
Landscapes¶
- By methods
- [[Instance-based algorithm]]
- [[Regression analysis]]
- [[Dimensionality reduction]]
- [[Ensemble learning]]
- [Meta learning](<./Meta-learning.md>)
- [[Reinforcement learning]]
- [[Supervised learning]]
- [[Bayesian statistics]]
- [[Decision tree algorithm]]
- [[Classifier]]
- [[Support-vector machines]]
- [[Unsupervised learning]]
- [Artificial neural network](<./Artificial neural network.md>)
- [[Association rule learning]]
- [[Hierarchical clustering]]
- [[Cluster analysis]]
- [[Anomaly detection]]
- [[Semi-supervised learning]]
- [Deep learning](<./Deep Learning.md>)
- By application
- [[Automating science]]
- [[Data mining]]
- [Computer vision](<./Computer vision.md>)
- [[Classification]]
- [[Bioinformatics]]
- [Natural language processing](<./Natural language processing.md>)
- [Large language model](<./Large language model.md>)
- [Transformer](<./Transformer.md>)
- [[Large multimodal model]]
- [[Pattern recognition]]
- [[Recommendation system]]
- [[Search engine]]
- [[Social engineering]]
- [Machine learning Applications - Wikipedia](https://en.wikipedia.org/wiki/Machine_learning#Applications)
- Machine learning algorithms
- [[Gradient descent]]
- Connectionist artificial intelligence
- Hybrid artificial intelligence
- Generative artificial intelligence
- Quantum machine learning
- Thermodynamic AI
- Mechanistic interpretability
- Mathematical theory of artificial intelligence
- Meta-learning
- Online machine learning
- The landscape of the Machine Learning section of ArXiv.
Lists of resources¶
- GitHub - patrickloeber/ml-study-plan: The Ultimate FREE Machine Learning Study Plan
- GitHub - dair-ai/ML-YouTube-Courses: Discover the latest machine learning / AI courses on YouTube.
- GitHub - yazdotai/machine-learning-video-courses: Comprehensive list of machine learning videos
- GitHub - mirerfangheibi/Machine-Learning-Resources: Free and High-Quality Materials to Study Deep Learning
- ML Resources
- GitHub - therealsreehari/Learn-Data-Science-For-Free: A combination of valuable data science resources from across the internet, arranged sequentially to give beginners a free and structured learning path.
- GitHub - openlists/MathStatsResources
- GitHub - mdozmorov/Statistics_notes: Statistics, data analysis tutorials and learning resources
- GitHub - Machine-Learning-Tokyo/AI_Curriculum: Open Deep Learning and Reinforcement Learning lectures from top universities like Stanford, MIT, UC Berkeley.
- GitHub - bentrevett/machine-learning-courses: A collection of machine learning courses.
- GitHub - Developer-Y/cs-video-courses: List of Computer Science courses with video lectures.
- GitHub - tigerneil/awesome-deep-rl: For deep RL and the future of AI.
- GitHub - Developer-Y/math-science-video-lectures: List of Science courses with video lectures
- GitHub - Machine-Learning-Tokyo/Math_resources
- GitHub - dair-ai/Mathematics-for-ML: A collection of resources to learn mathematics for machine learning
- Foundations of Machine Learning
- Data Science and Machine Learning Resources - Jon Krohn
- https://www.kdnuggets.com/10-github-repositories-to-master-machine-learning
- GitHub - exajobs/university-courses-collection: A collection of awesome CS courses, assignments, lectures, notes, readings & examinations available online for free.
- GitHub - prakhar1989/awesome-courses: List of awesome university courses for learning Computer Science.
- GitHub - owainlewis/awesome-artificial-intelligence: A curated list of Artificial Intelligence (AI) courses, books, video lectures and papers.
- GitHub - josephmisiti/awesome-machine-learning: A curated list of awesome Machine Learning frameworks, libraries and software.
- GitHub - academic/awesome-datascience: An awesome Data Science repository to learn and apply to real-world problems.
- GitHub - ChristosChristofidis/awesome-deep-learning: A curated list of awesome Deep Learning tutorials, projects and communities.
- GitHub - guillaume-chevalier/Awesome-Deep-Learning-Resources: Rough list of favorite deep learning resources, useful for revisiting topics or for reference.
- GitHub - MartinuzziFrancesco/awesome-scientific-machine-learning: A curated list of awesome Scientific Machine Learning (SciML) papers, resources and software
- GitHub - SE-ML/awesome-seml: A curated list of articles that cover the software engineering best practices for building machine learning applications.
- GitHub - jtoy/awesome-tensorflow: TensorFlow - A curated list of dedicated resources http://tensorflow.org
- GitHub - altamiracorp/awesome-xai: Awesome Explainable AI (XAI) and Interpretable ML Papers and Resources
- GitHub - ujjwalkarn/Machine-Learning-Tutorials: Machine learning and deep learning tutorials, articles and other resources
- GitHub - kiloreux/awesome-robotics: A list of awesome Robotics resources
- GitHub - jbhuang0604/awesome-computer-vision: A curated list of awesome computer vision resources
- GitHub - dk-liang/Awesome-Visual-Transformer: Collected papers about transformers with vision. Awesome Transformer with Computer Vision (CV)
- GitHub - ChanganVR/awesome-embodied-vision: Reading list for research topics in embodied vision
- GitHub - EthicalML/awesome-production-machine-learning: A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
- GitHub - wangyongjie-ntu/Awesome-explainable-AI: A collection of research materials on explainable AI/ML
- GitHub - jphall663/awesome-machine-learning-interpretability: A curated list of awesome responsible machine learning resources.
- GitHub - JShollaj/awesome-llm-interpretability: A curated list of Large Language Model (LLM) Interpretability resources.
- GitHub - MinghuiChen43/awesome-deep-phenomena: A curated list of papers of interesting empirical study and insight on deep learning. Continually updating.
- GitHub - Nikasa1889/awesome-deep-learning-theory: A curated list of awesome Deep Learning theories that shed light on the mysteries of DL
- [2106.10165] The Principles of Deep Learning Theory
- GitHub - awesomedata/awesome-public-datasets: A topic-centric list of HQ open datasets.
- GitHub - jsbroks/awesome-dataset-tools: A curated list of awesome dataset tools
- GitHub - mint-lab/awesome-robotics-datasets: A collection of useful datasets for robotics and computer vision
- GitHub - kelvins/awesome-mlops: A curated list of awesome MLOps tools
- GitHub - Bisonai/awesome-edge-machine-learning: A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups and others.
Contents¶
Crossovers¶
Deep dives¶
Brain storming¶
Additional resources¶
Related¶
Explanation by AI¶
Landscapes by AI¶
- Machine Learning Algorithms
    - Supervised Learning
        - Classification
            - Generalized Linear Models
                - [[Logistic Regression]]
                - Probit Regression
                - Multinomial Logistic Regression
            - [[Naive Bayes]]
                - Gaussian Naive Bayes
                - Multinomial Naive Bayes
                - Bernoulli Naive Bayes
                - Complement Naive Bayes
            - [[Decision Trees]]
                - ID3
                - C4.5
                - CART
                - CHAID
                - Conditional Inference Trees
            - Rule-Based Classifiers
                - OneR
                - RIPPER
                - PART
            - Ensemble Methods
                - Bagging
                    - [[Random Forest]]
                    - Extra Trees
                    - Bagged Decision Trees
                - [[Boosting]]
                    - AdaBoost
                    - Gradient Boosting
                        - XGBoost
                        - LightGBM
                        - CatBoost
                    - LogitBoost
                - Stacking
                - Voting
                - Cascading
            - [[Support Vector Machines]] (SVM)
                - Linear SVM
                - Kernel SVM
                    - Polynomial Kernel
                    - RBF Kernel
                    - Sigmoid Kernel
                    - Custom Kernels
                - One-Class SVM
                - Multiclass SVM
                    - One-vs-One
                    - One-vs-Rest
            - K-Nearest Neighbors (KNN)
                - Brute Force KNN
                - KD-Trees
                - Ball Trees
                - Locality Sensitive Hashing (LSH)
            - Discriminant Analysis
                - Linear Discriminant Analysis (LDA)
                - Quadratic Discriminant Analysis (QDA)
                - Regularized Discriminant Analysis (RDA)
            - Artificial neural network
                - Multi-Layer Perceptron (MLP)
                - [[Convolutional Neural Network]] (CNN)
                - Capsule Networks
                - [[Spiking Neural Network]] (SNN)
            - Other Classifiers
                - Bayesian Networks
                - Gaussian Processes
                - Relevance Vector Machines (RVM)
        - Regression
            - Linear Models
                - [[Linear Regression]]
                - [[Polynomial Regression]]
                - Stepwise Regression
                - LASSO (Least Absolute Shrinkage and Selection Operator)
                - Ridge Regression
                - Elastic Net
                - Least-Angle Regression (LARS)
            - Regularization Methods
                - L1 Regularization (LASSO)
                - L2 Regularization (Ridge)
                - L1/L2 Regularization (Elastic Net)
            - Decision Trees
                - Regression Trees
                - Model Trees
            - [[Ensemble Methods]]
                - Random Forest
                - Gradient Boosting (e.g., XGBoost, LightGBM, CatBoost)
                - AdaBoost
                - Stacked Generalization (Stacking)
            - Support Vector Regression (SVR)
                - Linear SVR
                - Non-Linear SVR
                - Kernels (e.g., RBF, Polynomial)
            - Gaussian Process Regression (GPR)
            - Isotonic Regression
            - Quantile Regression
            - Kriging (Spatial Interpolation)
            - Neural Networks
                - [[Multi-Layer Perceptron]] (MLP)
                - [[Recurrent Neural Networks]] (RNN)
                    - [[Long Short-Term Memory]] (LSTM)
                    - [[Gated Recurrent Unit]] (GRU)
                - Convolutional Neural Networks (CNN)
    - Unsupervised Learning
        - [[Clustering]]
            - [[Partitioning Methods]]
                - [[K-Means]]
                - K-Medoids (PAM)
                - Fuzzy C-Means
                - Gaussian Mixture Models (GMM)
                - Expectation-Maximization (EM)
            - [[Hierarchical Clustering]]
                - Agglomerative Clustering
                    - Single Linkage
                    - Complete Linkage
                    - Average Linkage
                    - Ward's Method
                - Divisive Clustering
                    - DIANA
                    - DISMEA
            - Density-Based Clustering
                - DBSCAN
                - OPTICS
                - HDBSCAN
                - DENCLUE
            - Grid-Based Clustering
                - STING
                - CLIQUE
                - WaveCluster
            - Model-Based Clustering
                - [[Self-Organizing Maps]] (SOM)
                - Adaptive Resonance Theory (ART)
                - Deep Embedded Clustering (DEC)
            - Other Clustering Methods
                - Spectral Clustering
                - Affinity Propagation
                - Mean Shift
                - BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies)
        - [[Dimensionality Reduction]]
            - Linear Methods
                - Principal Component Analysis (PCA)
                - Singular Value Decomposition (SVD)
                - Non-Negative Matrix Factorization (NMF)
                - Independent Component Analysis (ICA)
                - Factor Analysis
            - Non-Linear Methods
                - [[t-SNE]] (t-Distributed Stochastic Neighbor Embedding)
                - [[UMAP]] (Uniform Manifold Approximation and Projection)
                - Locally Linear Embedding (LLE)
                - Isomap
                - Laplacian Eigenmaps
                - Diffusion Maps
                - Kernel PCA
                - [[Autoencoder]]
                    - Vanilla Autoencoder
                    - Denoising Autoencoder
                    - [[Sparse Autoencoder]]
                    - [[Variational Autoencoder]] (VAE)
                - Self-Supervised Learning
                    - Contrastive Learning
                    - Clustering-Based Methods
            - [[Manifold Learning]]
                - Multidimensional Scaling (MDS)
                - Isomap
                - Locally Linear Embedding (LLE)
                - Laplacian Eigenmaps
                - Hessian Eigenmaps
                - Local Tangent Space Alignment (LTSA)
                - Diffusion Maps
        - Association Rule Learning
            - Apriori
            - FP-Growth
            - Eclat
            - GUHA (General Unary Hypotheses Automaton)
    - Semi-Supervised Learning
        - Self-Training
        - Co-Training
        - Tri-Training
        - Transductive SVM
        - Graph-Based Methods
            - Label Propagation
            - Label Spreading
        - Generative Models
            - Gaussian Mixture Models (GMM)
            - Variational Autoencoders (VAE)
        - Low-Density Separation
            - Transductive SVM
            - S3VM (Semi-Supervised SVM)
    - [[Reinforcement Learning]]
        - [[Model-Free Methods]]
            - Value-Based Methods
                - [[Q-Learning]]
                - SARSA (State-Action-Reward-State-Action)
                - Double Q-Learning
                - Expected SARSA
                - Deep Q-Networks (DQN)
                    - Double DQN
                    - Dueling DQN
                    - Prioritized Experience Replay (PER)
                    - Rainbow
            - Policy-Based Methods
                - Policy Gradients
                    - REINFORCE
                    - Advantage Actor-Critic (A2C)
                    - Asynchronous Advantage Actor-Critic (A3C)
                    - Proximal Policy Optimization (PPO)
                    - Trust Region Policy Optimization (TRPO)
            - Actor-Critic Methods
                - Deterministic Policy Gradient (DPG)
                - Deep Deterministic Policy Gradient (DDPG)
                - Twin Delayed DDPG (TD3)
                - Soft Actor-Critic (SAC)
            - Entropy-Based Methods
                - Soft Q-Learning
                - Soft Actor-Critic (SAC)
        - Model-Based Methods
            - [[Dynamic Programming]]
                - Value Iteration
                - Policy Iteration
            - [[Monte Carlo Tree Search]] (MCTS)
            - [[AlphaZero]]
            - World Models
            - Model-Based RL with Uncertainty
    - Deep Learning (Artificial neural network)
        - [[Feedforward Neural Network]]
            - [[Multi-Layer Perceptron]] (MLP)
            - Extreme Learning Machines (ELM)
        - [[Echo State Network]] (ESN)
        - [Liquid State Machine]
        - [[Spiking Neural Network]] (SNN)
        - [[Autoencoder]]
            - Vanilla Autoencoder
            - Denoising Autoencoder
            - Sparse Autoencoder
            - Contractive Autoencoder
            - [[Variational Autoencoder]] (VAE)
            - Adversarial Autoencoder (AAE)
        - Deep Belief Networks (DBN)
        - Convolutional Neural Networks (CNN)
            - LeNet
            - AlexNet
            - VGGNet
            - GoogLeNet (Inception)
            - ResNet
            - DenseNet
            - MobileNet
            - EfficientNet
            - Vision Transformers (ViT)
            - Spatial Transformer Networks (STN)
            - Deformable Convolutional Networks (DCN)
            - Capsule Networks
            - Attention-Based CNNs
        - Recurrent Neural Networks (RNN)
            - Simple RNN
            - Long Short-Term Memory (LSTM)
            - Gated Recurrent Unit (GRU)
            - Bidirectional RNN
            - Attention Mechanisms
                - Seq2Seq with Attention
                - Transformer
                    - BERT (Bidirectional Encoder Representations from Transformers)
                    - GPT (Generative Pre-trained Transformer)
                    - T5 (Text-to-Text Transfer Transformer)
                    - XLNet
                    - RoBERTa
                    - ALBERT
                    - ELECTRA
                    - Reformer
                - Pointer Networks
            - Memory Networks
                - [Neural Turing Machine]
                - [[Differentiable Neural Computer]] (DNC)
        - Generative Models
            - [[Generative Adversarial Network]] (GAN)
                - DCGAN (Deep Convolutional GAN)
                - WGAN (Wasserstein GAN)
                - CGAN (Conditional GAN)
                - InfoGAN
                - Pix2Pix
                - CycleGAN
                - StarGAN
                - Progressive Growing of GANs (PGGAN)
                - BigGAN
                - StyleGAN
                - Self-Attention GAN (SAGAN)
            - Variational Autoencoders (VAE)
                - Conditional VAE (CVAE)
                - Ladder VAE
                - VQ-VAE (Vector Quantized VAE)
                - Disentangled VAE (β-VAE, FactorVAE)
            - Flow-Based Models
                - Normalizing Flows
                - RealNVP
                - Glow
                - Masked Autoregressive Flow (MAF)
            - Energy-Based Models (EBM)
            - Autoregressive Models
                - PixelRNN
                - PixelCNN
                - WaveNet
                - Transformer-Based Models (e.g., GPT, CTRL)
        - [[Graph Neural Network]] (GNN)
            - [[Graph Convolutional Network]] (GCN)
            - GraphSAGE
            - [[Graph Attention Network]] (GAT)
            - Graph Isomorphism Network (GIN)
            - Gated Graph Neural Networks (GGNN)
            - Graph Recurrent Networks (GRN)
            - Graph Autoencoders (GAE)
            - Graph Generative Models
        - [[Deep Reinforcement Learning]]
            - [[Deep Q-Networks]] (DQN)
            - [[Policy Gradient Methods]]
                - TRPO (Trust Region Policy Optimization)
                - PPO (Proximal Policy Optimization)
                - DDPG (Deep Deterministic Policy Gradient)
            - Actor-Critic Methods
                - A2C (Advantage Actor-Critic)
                - A3C (Asynchronous Advantage Actor-Critic)
                - ACER (Actor-Critic with Experience Replay)
            - [[Distributional reinforcement learning]]
                - C51
                - QR-DQN (Quantile Regression DQN)
            - [[Hierarchical reinforcement learning]]
                - Feudal Networks
                - Option-Critic
                - MAXQ
            - Inverse Reinforcement Learning (IRL)
                - Maximum Entropy IRL
                - Generative Adversarial Imitation Learning (GAIL)
                - Adversarial Inverse Reinforcement Learning (AIRL)
- Working with machine learning algorithms (a minimal end-to-end sketch follows this list)
    - Data Preprocessing:
        - Use NumPy and Pandas for data manipulation and preprocessing.
        - Scikit-learn provides various tools for data preprocessing, such as scaling, normalization, and encoding categorical variables.
    - Supervised Learning:
        - Scikit-learn offers implementations of many classic algorithms like linear regression, logistic regression, decision trees, SVMs, and naive Bayes.
        - For neural networks, you can use libraries like TensorFlow or PyTorch.
        - XGBoost, LightGBM, and CatBoost are popular libraries for gradient boosting.
    - Unsupervised Learning:
        - Scikit-learn provides implementations of clustering algorithms like K-means, DBSCAN, and hierarchical clustering.
        - For dimensionality reduction, you can use PCA and t-SNE from Scikit-learn, and UMAP via the separate umap-learn package.
        - Neural network-based techniques like autoencoders and GANs can be implemented using TensorFlow or PyTorch.
    - Semi-Supervised Learning:
        - Scikit-learn offers a few semi-supervised learning algorithms, such as label propagation and label spreading.
        - For more advanced techniques, you may need to implement them from scratch or look for specialized libraries.
    - Reinforcement Learning:
        - OpenAI Gym is a popular toolkit for developing and comparing reinforcement learning algorithms.
        - Stable Baselines and RLlib are libraries that provide implementations of various RL algorithms.
        - For deep reinforcement learning, you can use libraries like TensorFlow or PyTorch in combination with OpenAI Gym.
    - Deep Learning:
        - TensorFlow and PyTorch are the most widely used libraries for building and training deep neural networks.
        - Keras is a high-level neural networks API that can run on top of TensorFlow, CNTK, or Theano.
        - For specific architectures like CNNs, RNNs, and Transformers, these libraries offer pre-built layers and modules.
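A minimal end-to-end sketch of the workflow above, assuming scikit-learn and pandas are installed; the toy DataFrame, column names, and model choice are purely illustrative.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy dataset; the column names are illustrative only.
df = pd.DataFrame({
    "age": [25, 32, 47, 51, 62, 23, 44, 36],
    "income": [30_000, 52_000, 81_000, 90_000, 65_000, 28_000, 70_000, 58_000],
    "city": ["A", "B", "A", "C", "B", "C", "A", "B"],
    "bought": [0, 1, 1, 1, 0, 0, 1, 1],
})
X, y = df.drop(columns="bought"), df["bought"]

# Scale the numeric columns, one-hot encode the categorical one.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age", "income"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])

model = Pipeline([("prep", preprocess), ("clf", GradientBoostingClassifier())])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```

Wrapping preprocessing and the estimator in one Pipeline keeps the same transformations applied at training and prediction time.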
Deep dives by AI¶
AI¶
- Machine learning, a branch of artificial intelligence, focuses on the development of algorithms and statistical models that enable computers to perform tasks without explicit instructions, relying on patterns and inference instead. It encompasses a wide range of approaches and methodologies. Below is a comprehensive list of branches and subfields within machine learning, with a one-sentence explanation for each entry.
1. Supervised Learning¶
- Regression (Linear, Polynomial, Logistic): Predicts an output variable from one or more input features; despite its name, logistic regression is used for classification rather than for continuous prediction.
- Classification (Decision Trees, Support Vector Machines, k-Nearest Neighbors): Focuses on categorizing data into predefined classes or groups (a short comparison sketch follows this list).
- Ensemble Methods (Random Forests, Boosting, Bagging): Combines multiple models to improve prediction accuracy or classification performance.
- Neural Networks and Deep Learning: Complex structures modeled after the human brain that can learn from large amounts of data.
- Bayesian Networks: Uses probabilistic models for a set of variables and their conditional dependencies.
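A hedged sketch of how a few of the classifiers listed above could be compared with cross-validation, assuming scikit-learn; the dataset (iris) and hyperparameters are arbitrary choices rather than recommendations.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM (RBF kernel)": SVC(),
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```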
2. Unsupervised Learning¶
- Clustering (k-Means, Hierarchical, DBSCAN): Groups similar data points together without predefined labels.
- Dimensionality Reduction (PCA, t-SNE, LDA): Reduces the number of random variables to consider, simplifying the dataset while retaining important information.
- Association Rule Learning (Apriori, Eclat): Discovers interesting relations between variables in large databases.
- Anomaly Detection: Identifies unusual patterns that do not conform to expected behavior.
- Autoencoders: Neural networks used for unsupervised learning of efficient codings.
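A small sketch combining dimensionality reduction and clustering as described above, assuming scikit-learn; the digits dataset and the choice of 2 components / 10 clusters are illustrative.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X, _ = load_digits(return_X_y=True)

# Reduce the 64 pixel features to 2 principal components, then cluster.
X_2d = PCA(n_components=2).fit_transform(X)
labels = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X_2d)
print(X_2d.shape, labels[:20])
```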
3. Semi-Supervised Learning¶
- Self-Training Models: Use their own predictions to incrementally train on unlabeled data.
- Co-Training Approaches: Train multiple learners on different views of the data and combine their predictions.
- Label Propagation: Spreads labels through the dataset based on similarity and distance metrics.
- Generative Models: Learns to generate new data samples that resemble the given training data.
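A minimal sketch of graph-based label propagation on mostly unlabeled data, assuming scikit-learn's LabelSpreading; the fraction of hidden labels and the kNN kernel are arbitrary.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

X, y = load_iris(return_X_y=True)

# Pretend most labels are unknown: scikit-learn marks unlabeled points with -1.
rng = np.random.default_rng(0)
y_partial = y.copy()
mask = rng.random(len(y)) < 0.9          # hide ~90% of the labels
y_partial[mask] = -1

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
print("accuracy on the originally hidden labels:",
      (model.transduction_[mask] == y[mask]).mean())
```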
4. Reinforcement Learning¶
- Q-Learning: A value-based method for finding the optimal action-selection policy (a tabular sketch follows this list).
- Temporal Difference Methods: Learn directly from raw experience without a model of the environment's dynamics.
- Deep Reinforcement Learning: Combines deep neural networks with reinforcement learning.
- Policy Optimization: Focuses on finding the best policy directly, rather than evaluating a given policy.
- Multi-Armed Bandit Algorithms: Solves problems where you have to choose between multiple options with uncertain outcomes.
- Monte Carlo Tree Search: A heuristic search algorithm for decision-making processes, particularly in game playing.
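A toy tabular Q-learning sketch in plain NumPy (no Gym environment), illustrating the value-based update described above; the corridor MDP and hyperparameters are made up for illustration.

```python
import numpy as np

# Toy 5-state corridor: move left/right, reward 1 only on reaching the right end.
n_states, n_actions = 5, 2               # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for _ in range(2000):                    # episodes
    s = 0
    while s != n_states - 1:
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: bootstrap from the best action in the next state.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(np.round(Q, 2))                    # the "go right" column should dominate
```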
5. Deep Learning¶
- Convolutional Neural Networks (CNNs): Specialized for processing data with a grid-like topology, such as images.
- Recurrent Neural Networks (RNNs): Designed for processing sequential data, such as time series or natural language.
- Long Short-Term Memory Networks (LSTMs): An advanced type of RNN capable of learning long-term dependencies.
- Generative Adversarial Networks (GANs): Consists of two neural networks contesting with each other to generate new, synthetic instances of data.
- Transformer Models: Utilizes attention mechanisms to significantly improve the quality of results in NLP tasks.
- Deep Reinforcement Learning: Integrates deep learning and reinforcement learning principles for complex problem-solving.
- Autoencoders and Variational Autoencoders: Used for learning efficient codings of input data.
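A minimal PyTorch sketch of a feedforward network trained by backpropagation, assuming PyTorch is installed; the synthetic data, layer sizes, and epoch count are illustrative.

```python
import torch
from torch import nn

# Synthetic binary classification data; shapes and sizes are arbitrary.
X = torch.randn(512, 20)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float().unsqueeze(1)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                      # backpropagation
    optimizer.step()                     # parameter update

accuracy = ((model(X) > 0).float() == y).float().mean().item()
print(f"final loss {loss.item():.3f}, train accuracy {accuracy:.2f}")
```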
6. Natural Language Processing (NLP)¶
- Text Classification: Assigns categories or labels to text based on its content.
- Sentiment Analysis: Identifies and categorizes opinions expressed in text to determine the writer's attitude.
- Machine Translation: Automatically translates text or speech from one language to another.
- Speech Recognition: Converts spoken language into text.
- Language Generation: Creates meaningful phrases, sentences, or entire articles.
- Named Entity Recognition: Identifies and classifies key information (entities) in text.
- Topic Modeling: Discovers abstract topics within a collection of documents.
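A small text-classification sketch (TF-IDF features plus logistic regression), assuming scikit-learn; the six-sentence corpus and its labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny made-up corpus for illustration only.
texts = ["great movie, loved it", "terrible plot and bad acting",
         "wonderful performance", "boring and way too long",
         "an absolute masterpiece", "worst film I have seen"]
labels = [1, 0, 1, 0, 1, 0]              # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["loved the acting", "what a boring film"]))
```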
7. Computer Vision¶
- Image Classification: Assigns a label to an entire image or photograph.
- Object Detection: Identifies and locates objects within an image.
- Image Segmentation: Divides a digital image into multiple segments to simplify its representation.
- Face Recognition: Identifies or verifies a person from a digital image or a video frame.
- Optical Character Recognition: Converts different types of documents into editable and searchable data.
- Image Generation: Creates new images, often from a given set of conditions or attributes.
8. Predictive Analytics¶
- Time Series Analysis: Analyzes time-ordered sequence data to extract meaningful statistics and characteristics.
- Forecasting Models: Predicts future values based on previously observed values (a toy baseline sketch follows this list).
- Survival Analysis: Analyzes and predicts the time until an event of interest occurs.
- Anomaly Detection in Time Series: Identifies unusual patterns in time-ordered data that do not conform to expected behavior.
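A toy forecasting baseline in pandas: a seasonal-naive forecast evaluated with mean absolute error. The synthetic monthly series is made up, and real forecasting would typically use dedicated models (ARIMA, exponential smoothing, gradient boosting, etc.).

```python
import numpy as np
import pandas as pd

# Synthetic monthly series with trend + yearly seasonality.
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
y = pd.Series(np.arange(48) * 0.5 + 10 * np.sin(np.arange(48) * 2 * np.pi / 12),
              index=idx)

train, test = y[:-12], y[-12:]

# Seasonal-naive forecast: repeat the last observed year.
forecast = pd.Series(train[-12:].to_numpy(), index=test.index)
mae = (forecast - test).abs().mean()
print(f"mean absolute error of the seasonal-naive baseline: {mae:.2f}")
```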
9. Recommender Systems¶
- Content-Based Filtering: Recommends items similar to those a user likes, based on their previous actions or explicit feedback.
- Collaborative Filtering: Makes automatic predictions about user interests by collecting preferences from many users.
- Hybrid Recommender Systems: Combines content-based and collaborative filtering methods to improve recommendation accuracy.
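A minimal item-based collaborative-filtering sketch in NumPy, using cosine similarity between item columns; the rating matrix is invented, and real systems handle sparsity, bias terms, and scale far more carefully.

```python
import numpy as np

# Rows = users, columns = items; 0 means "not rated". Purely illustrative ratings.
R = np.array([
    [5, 4, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)

# Predict user 1's score for item 2 as a similarity-weighted average of their ratings.
user, item = 1, 2
rated = R[user] > 0
pred = (sim[item, rated] @ R[user, rated]) / sim[item, rated].sum()
print(f"predicted rating: {pred:.2f}")
```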
10. Bayesian Learning¶
- Bayesian Networks: Graphical models that represent probabilistic relationships among variables.
- Gaussian Processes: A nonparametric Bayesian approach that places a distribution over functions, commonly used for regression with uncertainty estimates.
- Markov Chain Monte Carlo (MCMC) Methods: A class of algorithms for sampling from probability distributions.
- Naive Bayes Classifiers: A simple probabilistic classifier based on Bayes' theorem with strong independence assumptions.
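A small Bayesian-updating sketch (a Beta-Binomial conjugate model) plus a Monte Carlo view of the posterior, as a stand-in for the sampling-based methods above; the prior and the observed counts are illustrative.

```python
import numpy as np

# Bayesian coin estimate with a Beta prior (conjugate to the Binomial likelihood).
alpha_prior, beta_prior = 2, 2           # weakly informative prior centered on 0.5
heads, tails = 17, 3                     # observed data (illustrative)

alpha_post, beta_post = alpha_prior + heads, beta_prior + tails
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"posterior mean of P(heads) = {posterior_mean:.3f}")

# Monte Carlo view of the same posterior (a toy stand-in for MCMC-style sampling).
samples = np.random.default_rng(0).beta(alpha_post, beta_post, size=100_000)
print("95% credible interval:",
      np.round(np.quantile(samples, [0.025, 0.975]), 3))
```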
11. Evolutionary Algorithms¶
- Genetic Algorithms: Mimics the process of natural selection to solve optimization and search problems.
- Evolutionary Strategies: Uses mechanisms inspired by biological evolution, such as reproduction, mutation, recombination, and selection.
- Genetic Programming: Evolves computer programs to perform a specific task.
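A toy genetic-algorithm sketch in NumPy showing selection, recombination, and mutation on a one-dimensional objective; population size, mutation scale, and the objective itself are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(x):
    # Toy objective: maximize -(x - 3)^2, optimum at x = 3.
    return -(x - 3.0) ** 2

pop = rng.uniform(-10, 10, size=50)          # initial population of candidates
for generation in range(100):
    # Selection: keep the better half.
    pop = pop[np.argsort(fitness(pop))][-25:]
    # Recombination: average random pairs of parents.
    parents = rng.choice(pop, size=(25, 2))
    children = parents.mean(axis=1)
    # Mutation: add small Gaussian noise.
    children += rng.normal(0, 0.1, size=25)
    pop = np.concatenate([pop, children])

print("best solution found:", pop[np.argmax(fitness(pop))])
```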
12. Feature Engineering and Selection¶
- Feature Extraction: Reduces the number of resources required to describe a large set of data accurately.
- Feature Importance: Identifies which features are most relevant to the outcome of a particular predictive model.
- Regularization Techniques (L1, L2): Methods used to prevent overfitting by penalizing large coefficients in the model.
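A short sketch of L1 regularization acting as implicit feature selection, assuming scikit-learn; the synthetic data with two informative features and the alpha value are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
# Only the first two features actually matter; the rest are noise.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = Lasso(alpha=0.1).fit(X, y)
print(np.round(model.coef_, 2))   # the L1 penalty drives irrelevant coefficients to ~0
```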
13. Interpretability and Explainability¶
- Model Interpretation Methods: Techniques that make the outputs of machine learning models understandable to humans.
- Explainable AI (XAI) Techniques: Strives to make the results of AI and machine learning algorithms transparent and understandable.
- Feature Importance Analysis: Identifies and ranks the importance of different inputs to a model.
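A small model-interpretation sketch using permutation importance from scikit-learn; the breast-cancer dataset and the random forest are arbitrary choices for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by how much shuffling them hurts held-out accuracy.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```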
14. Ensemble Methods¶
- Boosting (AdaBoost, Gradient Boosting): Combines weak learners to create a strong learner in a sequential manner.
- Bagging (Random Forest): Uses bootstrapping to create an ensemble of models and then averages their predictions.
- Stacking: Combines multiple classification or regression models via a meta-classifier or a meta-regressor.
15. Transfer Learning¶
- Domain Adaptation: Adapts a model trained in one domain to be effective in a different domain.
- Fine-Tuning Pretrained Models: Adjusts a pre-existing model to make it perform better in a specific task.
- Multi-Task Learning: Improves learning efficiency and prediction accuracy for one task by using the knowledge gained while solving related tasks.
16. Distributed and Parallel Machine Learning¶
- Big Data Analytics: Processes large volumes of data to extract useful information and insights.
- Scalable Machine Learning Algorithms: Designed to handle increasing amounts of data or computation efficiently.
- Cloud-Based Machine Learning: Utilizes cloud computing resources to build, train, and deploy machine learning models.
17. Optimization Techniques in Machine Learning¶
- Gradient Descent and Variants: An iterative optimization algorithm used to minimize a function by moving in the direction of steepest descent.
- Stochastic Optimization: Optimization methods that use randomness as part of the solution process.
- Convex Optimization: A subfield of optimization that studies the problem of minimizing convex functions over convex sets.
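A plain-NumPy sketch of batch gradient descent minimizing a mean-squared-error loss for a linear model; the synthetic data, learning rate, and step count are illustrative.

```python
import numpy as np

# Minimize MSE for a linear model with plain batch gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.05, size=100)

w = np.zeros(3)
lr = 0.1
for step in range(500):
    grad = 2 / len(y) * X.T @ (X @ w - y)   # gradient of the MSE loss
    w -= lr * grad                          # step in the direction of steepest descent

print("estimated weights:", np.round(w, 3))
```

Stochastic variants (SGD, Adam, etc.) replace the full-batch gradient with estimates from mini-batches, which is what makes the approach scale to deep networks.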
18. Anomaly and Outlier Detection¶
- Statistical Methods for Anomaly Detection: Identifies anomalies based on statistical models.
- Isolation Forest: An algorithm to detect outliers that isolates anomalies instead of profiling normal data points.
- One-Class SVM: A variant of SVM that is used for anomaly detection in an unsupervised manner.
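A minimal anomaly-detection sketch with scikit-learn's IsolationForest; the synthetic inliers/outliers and the contamination rate are made up for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(300, 2))
outliers = rng.uniform(-6, 6, size=(10, 2))
X = np.vstack([normal, outliers])

# Isolation Forest flags points that are easy to isolate with random splits.
detector = IsolationForest(contamination=0.05, random_state=0).fit(X)
labels = detector.predict(X)             # +1 = inlier, -1 = flagged anomaly
print("number of points flagged as anomalies:", (labels == -1).sum())
```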
19. Audio and Speech Processing¶
- Speech Recognition: Transforms spoken language into text by computers.
- Music Classification: Categorizes music into genres, moods, or other attributes using machine learning.
- Sound Generation: Creates synthetic sounds or music.
20. Robotics and Control Systems¶
- Machine Learning in Robotics: Applies machine learning techniques to enable robots to learn from and adapt to their environment.
- Control Systems Using Reinforcement Learning: Uses reinforcement learning to optimize the performance of control systems.
Machine learning is a rapidly evolving field, continually incorporating new algorithms, techniques, and applications. Its versatility allows it to be applied across various domains, including finance, healthcare, education, transportation, and more, making it a pivotal technology in the modern world.
Additional metadata¶
-
processed #processing #toprocess #important #short #long #casual #focus¶
- Unfinished: #metadata #tags