Discover Agent Skills for data science & ML. Browse 61 skills for Claude, ChatGPT & Codex.
Mitigates cognitive bias in decision-making by prioritizing statistical base rates over vivid, anecdotal evidence.
Guides the selection of optimal machine learning algorithms by analyzing problem structure, data properties, and production constraints.
Performs systematic qualitative thematic analysis on document collections to extract deep structural insights and categorized themes.
Transcribes audio and video files locally using the OpenAI Whisper CLI without the need for an API key.
Conducts autonomous, institutional-grade financial analysis using multi-guru perspectives and advanced composite scoring.
Build and deploy production-ready multi-agent systems with MCP integration and automated workflows.
Provides comprehensive frameworks and best practices for adapting foundation models to specialized domains using full fine-tuning and parameter-efficient methods.
Automates systematic literature screening using the PRISMA 2020 framework and cost-effective Groq LLMs.
Prevents false positive pattern recognition in data and visual analysis by distinguishing genuine signals from cognitive illusions.
Identifies and mitigates the tendency to see meaningful patterns in random streaks or clusters of data.
Identifies and mitigates survivorship bias: logical errors caused by focusing on visible successes while ignoring hidden failures.
Enhances decision-making by identifying regression to the mean in performance data and preventing false causal interpretations.
Simulates complex systems from the bottom-up by defining simple rules for individual agents to observe emergent collective patterns.
Programmatically creates, edits, and optimizes Jupyter and Google Colab notebooks with precise JSON formatting and metadata management.
Implements a decoupled architecture for pre-computing machine learning predictions at scheduled intervals to optimize costs and serving latency.
Combats decision-making bias by anchoring probability assessments on statistical baseline frequencies before incorporating specific case details.
Performs hypothesis-driven statistical analysis and data visualization on datasets, system metrics, and experiment logs.
Standardizes the integration of external machine learning libraries and custom neural network modules within the Haipipe architecture.
Manages a robust four-stage pipeline that converts modular Python scripts into interactive Jupyter notebooks and comprehensive markdown documentation.
Standardizes raw academic and medical data files into structured SourceSet DataFrames for research pipelines.
Identifies long-term societal and structural shifts through bottom-up pattern detection and massive data aggregation.
Orchestrates model lifecycles and provides HuggingFace-style APIs for modular neural network research pipelines.
Provides a foundational architecture map and decision guide for managing neural network pipelines within the HAIPipe research framework.
Standardizes machine learning algorithm implementation through a universal wrapper contract for seamless training, inference, and serialization.
Transforms raw source datasets into temporally aligned structured record sets for academic research and machine learning.
Builds, trains, and deploys predictive machine learning models with robust preprocessing and standardized evaluation pipelines.
Transforms raw data into optimized features to improve machine learning model performance and predictive accuracy.
Creates sophisticated, interactive data visualizations and custom charts using the D3.js library.
Implements real-time machine learning architectures for processing unbounded data streams with sub-100ms prediction latency.
Validates hypotheses and scientific theories by ensuring they are testable and capable of being proven false through rigorous experimentation.
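Several of the reasoning skills above (base-rate anchoring, mitigating vivid anecdotal evidence) rest on the same arithmetic: weight the statistical baseline before incorporating case-specific signals. A minimal sketch of that update via Bayes' rule, with purely hypothetical numbers:

```python
def posterior(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """P(condition | positive signal), anchored on the base rate.

    base_rate: prior probability of the condition (the statistical baseline)
    hit_rate: P(signal | condition), the true-positive rate
    false_alarm_rate: P(signal | no condition), the false-positive rate
    """
    evidence = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / evidence

# Hypothetical numbers: even a vivid, 95%-accurate signal against a rare
# condition (1% base rate) yields only a modest posterior (~16%).
p = posterior(base_rate=0.01, hit_rate=0.95, false_alarm_rate=0.05)
```

The point the base-rate skills make is visible in the numbers: the anecdotal signal dominates intuition, but the rare prior dominates the math.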
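The agent-based modeling skill describes simulating systems bottom-up from simple per-agent rules. A toy sketch of that idea (not any listed skill's implementation): agents on a ring each adopt the majority value among themselves and their two neighbors, and clusters of agreement typically emerge without central coordination.

```python
import random

def step(states):
    """One synchronous update: each agent takes the majority vote of
    itself and its two ring neighbors -- a purely local rule."""
    n = len(states)
    return [
        1 if states[(i - 1) % n] + states[i] + states[(i + 1) % n] >= 2 else 0
        for i in range(n)
    ]

random.seed(0)
states = [random.randint(0, 1) for _ in range(50)]
for _ in range(100):
    states = step(states)
# After repeated local updates, contiguous blocks of agreement typically
# form: an emergent global pattern no individual rule specifies.
```

For example, an isolated dissenter is absorbed in one step (`[0, 0, 1, 0, 0]` becomes all zeros), while blocks of two or more agents are stable under the rule.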