Scikit-Learn
-
Base class (sklearn.base)
- BaseEstimator
- ClassifierMixin
- ClusterMixin
- RegressorMixin
- TransformerMixin
-
Datasets
- datasets.load_svmlight_files(files, n_features, dtype)
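A minimal round-trip sketch of the svmlight loader, using the singular `load_svmlight_file` (same format and parameters as `load_svmlight_files`, which takes a list of files); the temp-file path is illustrative:

```python
import os
import tempfile

import numpy as np
from sklearn.datasets import dump_svmlight_file, load_svmlight_file

# Write a tiny dataset in svmlight/libsvm format, then read it back.
X = np.array([[1.0, 0.0, 2.0], [0.0, 3.0, 0.0]])
y = np.array([0, 1])

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "data.svmlight")
    dump_svmlight_file(X, y, path)
    # Returns a scipy sparse matrix and the label vector; n_features pins
    # the column count (the format only stores non-zero entries).
    X_loaded, y_loaded = load_svmlight_file(path, n_features=3)

print(X_loaded.shape, y_loaded)
```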
-
Cluster
- cluster.KMeans(n_clusters, max_iter, n_init, init: {'k-means++', 'random'}).fit(X).predict(X)
- cluster.DBSCAN(eps, min_samples, metric, algorithm: {'auto', 'ball_tree', 'kd_tree', 'brute'}).fit_predict(X)
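A small sketch contrasting the two clusterers above on synthetic data (the blob parameters are arbitrary):

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

# Two well-separated blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])

# KMeans needs the number of clusters up front; fit() then predict().
km = KMeans(n_clusters=2, n_init=10, init="k-means++", random_state=0).fit(X)
km_labels = km.predict(X)

# DBSCAN infers the number of clusters from density; it has no predict(),
# so use fit_predict() (noise points get the label -1).
db_labels = DBSCAN(eps=0.5, min_samples=3).fit_predict(X)

print(sorted(set(km_labels)), sorted(set(db_labels)))
```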
-
Matrix Decomposition
- decomposition.NMF(n_components, init, solver: {'cd', 'mu'}, tol, max_iter, alpha_W, alpha_H, l1_ratio).fit(X).transform(X)
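A minimal NMF sketch; the component count and init choice are arbitrary. `fit_transform(X)` is the usual shorthand for `fit(X).transform(X)`:

```python
import numpy as np
from sklearn.decomposition import NMF

# NMF factors a non-negative matrix X (n_samples x n_features) into
# W (n_samples x n_components) @ H (n_components x n_features),
# with both factors constrained to be non-negative.
rng = np.random.default_rng(0)
X = rng.random((10, 6))

nmf = NMF(n_components=2, init="nndsvda", solver="cd",
          max_iter=500, random_state=0)
W = nmf.fit_transform(X)
H = nmf.components_

print(W.shape, H.shape)
```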
-
Ensemble
- ensemble.GradientBoostingClassifier(loss: {'log_loss', 'exponential'}, learning_rate, n_estimators, max_depth, criterion: {'friedman_mse', 'squared_error'}, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, subsample, max_features, max_leaf_nodes).fit(X,y).predict(X)
- ensemble.GradientBoostingRegressor()
- ensemble.RandomForestClassifier(n_estimators, criterion: {'gini', 'entropy'}, max_features, max_depth, min_samples_split, min_samples_leaf, max_leaf_nodes).fit(X,y).predict(X)
- ensemble.RandomForestRegressor()
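A sketch of the two classifier ensembles above on a synthetic problem (the hyperparameter values are illustrative, not recommendations):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=8, random_state=0)

# Gradient boosting builds shallow trees sequentially, each one fitting
# the residual errors of the ensemble so far.
gbc = GradientBoostingClassifier(learning_rate=0.1, n_estimators=50,
                                 max_depth=3, subsample=0.8,
                                 random_state=0).fit(X, y)

# A random forest builds deeper trees independently on bootstrap samples
# and averages their votes.
rfc = RandomForestClassifier(n_estimators=50, criterion="gini",
                             max_features="sqrt", random_state=0).fit(X, y)

print(gbc.score(X, y), rfc.score(X, y))
```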
-
Generalized Linear Model
- linear_model.LinearRegression(fit_intercept).fit(X,y).predict(X)
- linear_model.LogisticRegression(penalty: {'l1', 'l2'}, fit_intercept, max_iter, solver: {'newton-cg', 'lbfgs', 'liblinear', 'sag'}, tol).fit(X,y).predict(X)
- linear_model.Lasso(alpha, fit_intercept, max_iter, tol).fit(X,y).predict(X)
- linear_model.SGDClassifier(loss: {'hinge', 'log_loss', 'squared_error'}, penalty, alpha, l1_ratio, fit_intercept, max_iter, shuffle, learning_rate: {'constant', 'optimal', 'invscaling'}, eta0, power_t).fit(X,y).predict(X)
-
Metrics
- metrics.accuracy_score(y_true,y_pred)
- metrics.auc
- metrics.f1_score
- metrics.hinge_loss
- metrics.log_loss
- metrics.precision_recall_curve
- metrics.roc_curve
- metrics.mean_absolute_error
- metrics.mean_squared_error
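A worked sketch of the classification metrics above; note that some take hard labels (`accuracy_score`, `f1_score`) while others take continuous scores (`roc_curve`, `auc`):

```python
import numpy as np
from sklearn import metrics

y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])  # predicted probabilities
y_pred = (y_score >= 0.5).astype(int)      # hard labels at a 0.5 threshold

acc = metrics.accuracy_score(y_true, y_pred)
f1 = metrics.f1_score(y_true, y_pred)

# Threshold-free metrics work on scores: roc_curve returns fpr/tpr per
# threshold, and auc integrates the resulting curve.
fpr, tpr, thresholds = metrics.roc_curve(y_true, y_score)
roc_auc = metrics.auc(fpr, tpr)

print(acc, f1, roc_auc)  # 0.75, ~0.667, 0.75
```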
-
Pipeline
- pipeline.Pipeline(steps)
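A minimal Pipeline sketch; the step names are arbitrary strings:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=100, n_features=4, random_state=0)

# steps is a list of (name, estimator) pairs; every step but the last
# must be a transformer, and the pipeline behaves like one estimator.
pipe = Pipeline(steps=[
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=200)),
])
pipe.fit(X, y)

print(pipe.score(X, y))
```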
-
Preprocessing
- preprocessing.MaxAbsScaler
- preprocessing.Normalizer
- preprocessing.OneHotEncoder
-
Support Vector Machine
- svm.SVC(C, kernel: {'linear', 'poly', 'rbf', 'sigmoid'}, gamma).fit(X,y).predict(X)
- svm.SVR(C, kernel, epsilon).fit(X,y).predict(X)
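A minimal SVC sketch for the SVM entry above; the hyperparameter values are the library defaults, shown explicitly:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=4, random_state=0)

# SVC fits a maximum-margin classifier; kernel controls the shape of the
# decision boundary and C trades margin width against training errors.
clf = SVC(C=1.0, kernel="rbf", gamma="scale").fit(X, y)

print(clf.score(X, y))
```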
Keras
- Input
- Dense
- Model(input, output).compile(optimizer,loss,metric).fit().evaluate().predict()
- Optimizer
- Loss
- Metric
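The Keras chain above can be sketched with the functional API (assuming the TensorFlow-bundled Keras; the layer sizes and toy data are arbitrary):

```python
import numpy as np
from tensorflow import keras

# Functional API: wire Input -> Dense layers into a Model, then compile
# with an optimizer, a loss, and metrics.
inputs = keras.Input(shape=(4,))
hidden = keras.layers.Dense(8, activation="relu")(inputs)
outputs = keras.layers.Dense(1, activation="sigmoid")(hidden)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(32, 4).astype("float32")
y = (X.sum(axis=1) > 2).astype("float32")
model.fit(X, y, epochs=1, verbose=0)
loss, acc = model.evaluate(X, y, verbose=0)
preds = model.predict(X, verbose=0)

print(preds.shape)
```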
PyTorch
- Tensor
- Storage
- Optim
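A sketch of the three PyTorch concepts above: a Tensor is a shaped view over a flat Storage buffer, and torch.optim updates parameters from autograd gradients (the toy loss is illustrative):

```python
import torch

# A Tensor is an n-dimensional array over an underlying flat Storage
# buffer; views share that buffer rather than copying it.
t = torch.arange(6, dtype=torch.float32).reshape(2, 3)
view = t.view(3, 2)                 # same storage, different shape
assert t.data_ptr() == view.data_ptr()

# torch.optim steps parameters using gradients computed by autograd.
w = torch.zeros(3, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
loss = ((w - torch.ones(3)) ** 2).sum()
loss.backward()                     # grad = 2*(w - 1) = -2
opt.step()                          # w <- w - 0.1 * grad = 0.2

print(w.detach())
```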