Data science and machine learning platform functions for the East language (TypeScript types). Use when writing East programs that need optimization (MADS, Optuna, SimAnneal, Scipy), machine learning (XGBoost, LightGBM, NGBoost, Torch MLP, GP), ML utilities (Sklearn preprocessing, metrics, splits), or model explainability (SHAP). Triggers for: (1) Writing East programs with @elaraai/east-py-datascience, (2) Derivative-free optimization with MADS, (3) Bayesian optimization with Optuna, (4) Discrete/combinatorial optimization with SimAnneal, (5) Gradient boosting with XGBoost or LightGBM, (6) Probabilistic predictions with NGBoost or GP, (7) Neural networks with Torch MLP, (8) Data preprocessing and metrics with Sklearn, (9) Model explainability with Shap.
Install with `/plugin marketplace add elaraai/east-plugin`, then `/plugin install east@elaraai`. This skill inherits all available tools. When active, it can use any tool Claude has access to.
References: `references/api.md`, `references/examples.md`.

Data science and machine learning platform functions for the East language. Provides optimization, ML models, preprocessing, and explainability.
```typescript
import { East, FloatType, variant } from "@elaraai/east";
import { MADS } from "@elaraai/east-py-datascience";

// Define objective function: f(x) = x0^2 + x1^2
const objective = East.function([MADS.Types.VectorType], FloatType, ($, x) => {
  const x0 = $.let(x.get(0n));
  const x1 = $.let(x.get(1n));
  return $.return(x0.multiply(x0).add(x1.multiply(x1)));
});

// Optimize
const optimize = East.function([], MADS.Types.ResultType, $ => {
  const x0 = $.let([0.5, 0.5]);
  const bounds = $.let({ lower: [-1.0, -1.0], upper: [1.0, 1.0] });
  const config = $.let({
    max_bb_eval: variant('some', 100n),
    display_degree: variant('some', 0n),
    direction_type: variant('none', null),
    initial_mesh_size: variant('none', null),
    min_mesh_size: variant('none', null),
    seed: variant('some', 42n),
  });
  return $.return(MADS.optimize(objective, x0, bounds, variant('none', null), config));
});
```
```
Task → What do you need?
│
├─ MADS (derivative-free continuous optimization)
│  └─ .optimize()
│
├─ Optuna (Bayesian hyperparameter tuning)
│  └─ .optimize()
│
├─ SimAnneal (discrete/combinatorial optimization)
│  └─ .optimize(), .optimizePermutation(), .optimizeSubset()
│
├─ Scipy
│  ├─ Optimization → .optimizeMinimize(), .optimizeMinimizeQuadratic(), .optimizeDualAnnealing()
│  ├─ Statistics → .statsDescribe(), .statsPearsonr(), .statsSpearmanr(), .statsPercentile(), .statsIqr(), .statsMedian(), .statsMad(), .statsRobust()
│  ├─ Curve Fitting → .curveFit()
│  └─ Interpolation → .interpolate1dFit(), .interpolate1dPredict()
│
├─ XGBoost (gradient boosting)
│  ├─ Train → .trainRegressor(), .trainClassifier(), .trainQuantile()
│  └─ Predict → .predict(), .predictClass(), .predictProba(), .predictQuantile()
│
├─ LightGBM (fast gradient boosting)
│  ├─ Train → .trainRegressor(), .trainClassifier()
│  └─ Predict → .predict(), .predictClass(), .predictProba()
│
├─ NGBoost (probabilistic gradient boosting)
│  ├─ Train → .trainRegressor()
│  └─ Predict → .predict(), .predictDist()
│
├─ Torch (neural networks)
│  ├─ Train → .mlpTrain(), .mlpTrainMulti()
│  ├─ Predict → .mlpPredict(), .mlpPredictMulti()
│  └─ Embeddings → .mlpEncode(), .mlpDecode()
│
├─ GP (Gaussian Process regression)
│  ├─ Train → .train()
│  └─ Predict → .predict(), .predictStd()
│
├─ Sklearn (preprocessing & metrics)
│  ├─ Splitting → .trainTestSplit(), .trainValTestSplit()
│  ├─ Scaling → .standardScalerFit(), .standardScalerTransform(), .minMaxScalerFit(), .minMaxScalerTransform()
│  ├─ Metrics → .computeMetrics(), .computeMetricsMulti(), .computeClassificationMetrics(), .computeClassificationMetricsMulti()
│  └─ Multi-target → .regressorChainTrain(), .regressorChainPredict()
│
└─ Shap (model explainability)
   ├─ Create → .treeExplainerCreate(), .kernelExplainerCreate()
   └─ Compute → .computeValues(), .featureImportance()
```
| Type | Definition | Description |
|---|---|---|
| VectorType | ArrayType(FloatType) | 1D array of floats (e.g., [1.0, 2.0, 3.0]) |
| MatrixType | ArrayType(ArrayType(FloatType)) | 2D array of floats (e.g., [[1.0, 2.0], [3.0, 4.0]]) |
| LabelVectorType | ArrayType(IntegerType) | Class labels as integers (e.g., [0n, 1n, 0n, 2n]) |
| ModelBlobType | BlobType | Serialized model (opaque, pass to predict functions) |
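Inside an East function body (the `$` scope used throughout this document), these shapes are plain literals. Note that floats take decimal points while integer labels take the `n` suffix:

```typescript
const X = $.let([[1.0, 2.0], [3.0, 4.0]]); // MatrixType: 2D float array
const y = $.let([0.5, 1.5]);               // VectorType: 1D float array
const labels = $.let([0n, 1n]);            // LabelVectorType: integer class labels
```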
| Module | Import | Purpose |
|---|---|---|
| MADS | import { MADS } from "@elaraai/east-py-datascience" | Derivative-free blackbox optimization |
| Optuna | import { Optuna } from "@elaraai/east-py-datascience" | Bayesian optimization (hyperparameter tuning) |
| SimAnneal | import { SimAnneal } from "@elaraai/east-py-datascience" | Simulated annealing (permutation/subset) |
| Scipy | import { Scipy } from "@elaraai/east-py-datascience" | Statistics, optimization, interpolation |
| XGBoost | import { XGBoost } from "@elaraai/east-py-datascience" | Gradient boosting (regression/classification/quantile) |
| LightGBM | import { LightGBM } from "@elaraai/east-py-datascience" | Fast gradient boosting |
| NGBoost | import { NGBoost } from "@elaraai/east-py-datascience" | Probabilistic gradient boosting |
| Torch | import { Torch } from "@elaraai/east-py-datascience" | Neural networks (MLP) |
| GP | import { GP } from "@elaraai/east-py-datascience" | Gaussian Process regression |
| Sklearn | import { Sklearn } from "@elaraai/east-py-datascience" | Preprocessing, metrics, data splitting |
| Shap | import { Shap } from "@elaraai/east-py-datascience" | Model explainability (SHAP values) |
```typescript
import { MADS, Optuna, Sklearn, XGBoost } from "@elaraai/east-py-datascience";

// Access types via Module.Types.TypeName
MADS.Types.VectorType         // ArrayType(FloatType)
MADS.Types.BoundsType         // StructType({ lower, upper })
MADS.Types.ResultType         // StructType({ x_best, f_best, ... })
Optuna.Types.ParamSpaceType   // Parameter definition
Optuna.Types.StudyResultType  // Optimization result
Sklearn.Types.SplitConfigType // Train/test split config
XGBoost.Types.ModelBlobType   // Trained model
```
```typescript
// 1. Prepare data
const X = $.let([[...], [...], ...]);
const y = $.let([...]);

// 2. Configure and train
const config = $.let({ /* options with variant('some', value) or variant('none', null) */ });
const model = $.let(Module.train(X, y, config));

// 3. Predict
const predictions = $.let(Module.predict(model, X_test));
```
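For instance, substituting LightGBM into this pattern (a sketch: `trainRegressor` and `predict` come from the decision tree above, and the available config fields are not enumerated here):

```typescript
// Steps 2-3 of the pattern, with LightGBM standing in for Module.
const config = $.let({ /* options with variant('some', value) or variant('none', null) */ });
const model = $.let(LightGBM.trainRegressor(X, y, config));
const predictions = $.let(LightGBM.predict(model, X_test));
```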
```typescript
// 1. Define objective function
const objective = East.function([VectorType], FloatType, ($, x) => {
  // compute and return objective value
});

// 2. Set bounds and config
const bounds = $.let({ lower: [...], upper: [...] });
const config = $.let({ /* options */ });

// 3. Optimize
const result = $.let(Module.optimize(objective, x0, bounds, config));
// result.x_best, result.f_best
```
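Applied to MADS, the final step looks like the call below. The trailing `variant('none', null)` argument matches the quick-start example at the top of this document, and the `x_best` / `f_best` fields follow `MADS.Types.ResultType`; treat the exact field-access syntax as an assumption to confirm against `references/api.md`.

```typescript
// Run the optimizer and read back the best point and objective value.
const result = $.let(MADS.optimize(objective, x0, bounds, variant('none', null), config));
const bestPoint = $.let(result.x_best); // best input vector found
const bestValue = $.let(result.f_best); // objective value at that point
```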