Discover Agent Skills for data science & ML. Browse 61 skills for Claude, ChatGPT & Codex.
Manages LocalAI services via Podman to provide OpenAI-compatible local model inference with full GPU acceleration.
Provides AI-ready datasets and benchmarks for drug discovery, including ADME, toxicity, and molecular generation tasks.
Streamlines computational molecular biology tasks including sequence analysis, biological file parsing, and genomic database integration.
Integrates external variables like holidays and weather data into TimeGPT models to significantly improve time series forecasting accuracy.
Automates complex biomedical research tasks including genomics analysis, drug discovery, and CRISPR screening using integrated data and code execution.
Manages local LLM inference using Ollama and Podman Quadlet with full GPU acceleration support.
Orchestrates end-to-end MLOps pipelines from data preparation and model training to production deployment and monitoring.
Accesses and analyzes comprehensive pharmaceutical data from DrugBank for drug discovery, interaction analysis, and pharmacological research.
Manages multi-instance JupyterLab environments with hardware-accelerated GPU support via Podman Quadlet.
Facilitates the use of local Ollama models with the official OpenAI Python library and compatible AI orchestration frameworks.
Enforces E8 architecture standards and QIG purity protocols within the Pantheon development ecosystem.
Executes autonomous biomedical research tasks across genomics, drug discovery, and molecular biology using integrated databases and code execution.
Build robust Retrieval-Augmented Generation (RAG) systems using vector databases and semantic search to ground AI responses in external data.
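The retrieval step at the heart of a RAG system can be sketched in a few lines. This is a toy illustration only, not code from the skill above: the bag-of-words `embed` function stands in for a real learned embedding model, and the brute-force ranking stands in for a vector database's approximate index.

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real RAG system uses a learned embedding model here.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, documents, k=2):
    # Rank documents by similarity to the query; a vector database
    # does the same thing at scale with an approximate index (HNSW etc.).
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "ADME profiles describe drug absorption and metabolism",
    "vector databases index embeddings for semantic search",
    "holidays and weather improve time series forecasts",
]
top = retrieve("semantic search with a vector database", docs, k=1)
```

The retrieved passages are then prepended to the prompt so the model's answer is grounded in them rather than in its parametric memory.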
Manages FiftyOne dataset visualization and curation environments using Podman Quadlet containers with integrated MongoDB sidecars.
Manages ComfyUI instances for node-based Stable Diffusion image generation with automated GPU configuration and model management.
Builds end-to-end MLOps pipelines covering data preparation, model training, validation, and production deployment.
Optimizes vector database performance by balancing search latency, recall accuracy, and memory footprint.
Guides the development of high-performance ML and AI applications in Rust using memory-efficient patterns and GPU acceleration.
Optimizes LLM fine-tuning via advanced QLoRA patterns, hyperparameter tuning, and memory-efficient implementation strategies.
Builds high-quality fine-tuning datasets from literary works to train AI models in specific authorial voices and writing styles.
Diagnoses and mitigates AI agent performance failures caused by long-context attention loss, poisoning, and informational clash.
Designs and implements sophisticated multi-agent systems using supervisor, swarm, and hierarchical patterns to solve complex context management challenges.
Provides a clean, Pythonic interface for local LLM inference, chat completions, and model management using the official Ollama library.
Provides foundational expertise in context engineering to optimize AI agent performance and manage token usage effectively.
Implements Group Relative Policy Optimization for efficient LLM alignment and reinforcement learning from human feedback.
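GRPO's distinguishing move is to drop the learned value/critic model and instead normalize each sampled completion's reward against its own group's statistics. A minimal sketch of that advantage computation (the reward values are made up for illustration):

```python
import math

def group_relative_advantages(rewards, eps=1e-8):
    # GRPO advantage: (reward - group mean) / group std, computed over
    # the group of completions sampled for the same prompt. This replaces
    # the critic network used in standard PPO-style RLHF.
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = math.sqrt(var)
    return [(r - mean) / (std + eps) for r in rewards]

# Four completions sampled for one prompt, scored by a reward model.
adv = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Completions above the group mean get positive advantages (and are reinforced); those below get negative ones, with no extra model to train.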
Streamlines parameter-efficient fine-tuning for LLMs using LoRA, QLoRA, and Unsloth to optimize memory and training speed.
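The arithmetic behind LoRA-style adapters is small enough to show directly. This sketch (plain Python lists, tiny made-up matrices; a real implementation would use framework tensors) computes the effective weight `W + (alpha / r) * B @ A`, where only the low-rank factors `A` and `B` are trained and the base weight `W` stays frozen:

```python
def lora_apply(W, A, B, alpha, r):
    # Effective weight: W + (alpha / r) * (B @ A).
    # A is r x in_dim, B is out_dim x r, so B @ A matches W's shape
    # while holding far fewer trainable parameters than W itself.
    scale = alpha / r
    out_dim, in_dim = len(W), len(W[0])
    delta = [[scale * sum(B[i][k] * A[k][j] for k in range(r))
              for j in range(in_dim)] for i in range(out_dim)]
    return [[W[i][j] + delta[i][j] for j in range(in_dim)]
            for i in range(out_dim)]

# 2x2 frozen weight with a rank-1 adapter.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]         # r x in_dim
B = [[0.5], [0.25]]      # out_dim x r
W_eff = lora_apply(W, A, B, alpha=2, r=1)
```

QLoRA keeps the same update but stores the frozen `W` in 4-bit precision, which is where most of the memory savings come from.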
Optimizes Large Language Models using Direct Preference Optimization to align behavior with preferred response pairs without explicit reward modeling.
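The DPO objective can be written out directly: the policy's log-probability margin on a (chosen, rejected) pair is compared to a frozen reference model's margin, with no separate reward model. A toy sketch with made-up log-prob values:

```python
import math

def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
    # DPO loss for one preference pair:
    #   -log sigmoid(beta * [(pi_w - ref_w) - (pi_l - ref_l)])
    # The loss shrinks as the policy's margin over the reference grows
    # in favor of the chosen response.
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Sequence log-probs (summed over tokens) for one pair.
loss = dpo_loss(policy_chosen=-10.0, policy_rejected=-14.0,
                ref_chosen=-11.0, ref_rejected=-12.0, beta=0.1)
```

`beta` controls how strongly the policy is penalized for drifting from the reference while it learns the preference.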
Fine-tunes vision-language models like Pixtral and Ministral using Unsloth's FastVisionModel optimizations for faster training.
Transforms external RDF context into formal Belief-Desire-Intention (BDI) models to enable rational agency and explainable reasoning in AI agents.
Optimizes large language models for efficient inference and training by reducing memory footprint using advanced precision-shifting techniques like 4-bit and 8-bit quantization.
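The trade-off these quantization schemes make is easy to demonstrate with a round-trip: map weights onto a coarse signed grid, map them back, and measure the error. A minimal sketch of symmetric round-to-nearest quantization (real 4-bit schemes such as NF4 add per-block scaling and non-uniform grids on top of this idea):

```python
def quantize_dequantize(values, bits):
    # Symmetric quantization: scale floats onto a signed integer grid
    # with 2**(bits-1) - 1 positive levels, then map back. The gap
    # between input and round trip is the precision lost for memory.
    qmax = 2 ** (bits - 1) - 1          # 7 for 4-bit, 127 for 8-bit
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) * scale for v in values]

weights = [0.12, -0.5, 0.33, 0.9, -0.07]
w8 = quantize_dequantize(weights, bits=8)
w4 = quantize_dequantize(weights, bits=4)
err8 = max(abs(a - b) for a, b in zip(weights, w8))
err4 = max(abs(a - b) for a, b in zip(weights, w4))
```

4-bit storage halves memory again relative to 8-bit, at the cost of a visibly larger round-trip error, which is why it is usually paired with techniques like QLoRA that keep the trained updates in higher precision.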